Hello all!
We started with a single Kinect v2 setup that worked correctly for slow/regular moves, and wanted to try the 6 PS Eye setup to better handle full-body rotations and longer walks.
Our T-shirts are black, we wear blue jeans and black shoes. The ground is grey concrete, and we have no real walls as we work directly under the rooftop. The light comes from strong spotlights on the ceiling; there's no light from outside, as we've covered the windows. We placed the 6 cameras in a circle of 3.5 m radius, all of them at approximately 1.6 m height.
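For clarity, here's roughly how the cameras are laid out (a minimal Python sketch, assuming even 60-degree spacing and all cameras aimed at the circle center; in reality we placed them by hand, so the spacing is only approximate):

```python
import math

RADIUS = 3.5   # circle radius in meters
HEIGHT = 1.6   # camera height in meters
NUM_CAMS = 6   # PS Eye cameras

for i in range(NUM_CAMS):
    angle = 2 * math.pi * i / NUM_CAMS   # cameras 60 degrees apart (assumed)
    x = RADIUS * math.cos(angle)         # position on the floor plane
    z = RADIUS * math.sin(angle)
    print(f"Camera {i + 1}: x={x:+.2f} m, z={z:+.2f} m, "
          f"height={HEIGHT:.2f} m, aimed at circle center")
```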
Calibration worked perfectly; the camera heights found by iPi Motion Capture were almost exact. But when we tried to track a performance, nothing worked as well as with the Kinect. The 3D T-pose is placed with a 90-degree offset from its real counterpart, and if we click Refit, everything gets messed up. Typically, both arms are fitted onto the actor's right arm and both legs onto the right leg. If we manually reposition the left arm and left leg and start tracking anyway, it generally starts out correctly but goes wrong later, and both arms/legs merge again. Sometimes it's even worse and the whole body goes crazy.
If we then connect the Kinect again and track the same scene with exactly the same light/room configuration, everything works perfectly again.
The documentation says the 6 PS Eye setup is the best possible solution, but for the time being we can't do anything with it. Are we doing something wrong? Is this due to a bad calibration? After calibration, though, all 6 virtual cameras seem to be correctly placed in the 3D space. Or is it something else? How can a single Kinect camera give better results than 6 correctly calibrated cameras? One single depth sensor seems to beat 6 eyes... And even if we are doing something wrong, why does the software track one side of the body (right arm and right leg) correctly but place the other side incorrectly, merging it with the right side?
Thanks for your help!