...
As for the alternatives, G, they are fairly expensive: some are in the range of a new Kinect v2, and some are even pricier brand new. Used units, if you can find them, would of course be cheaper.
You can look at comparison videos of each on YouTube, and you are correct: none match the point cloud of the Kinect v2, which is also the sensor iPi recommends most.
On the jittery-hand issue from the previous poster: the full PS Eye processing path must include running the "Refine" pass, either forward or backward, over the entire ROI you are tracking, and the Move data must always be applied as the final step, before playback review or export of the animation.
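To make the required ordering concrete, here is a minimal sketch in Python. iPi Mocap Studio is GUI-driven and has no scripting API like this; the step names below are purely illustrative and just encode the order described above (track, then Refine over the ROI, then apply Move data last, before export).

```python
# Hypothetical step names, NOT an actual iPi Studio API; they only
# encode the required processing order for the PS Eye path.
REQUIRED_ORDER = ["track", "refine", "apply_move_data", "export"]

def valid_pipeline(steps):
    """Return True if the steps performed follow the required order:
    Refine runs after tracking, and Move data is applied as the final
    step before review/export."""
    relevant = [s for s in steps if s in REQUIRED_ORDER]
    return relevant == REQUIRED_ORDER

# Correct order passes; applying Move data before Refine (a common
# cause of jittery hands) fails the check.
print(valid_pipeline(["track", "refine", "apply_move_data", "export"]))
print(valid_pipeline(["track", "apply_move_data", "refine", "export"]))
```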
This pass should also be run with Kinect v2 tracking for best results, but many skip it because it takes extra time, even though it runs much faster with Kinects.
PS Eyes are more of a prosumer setup: the room, lighting, clothing, space, and colors all have to be controlled much more tightly and followed far more consistently. It is not the plug-and-play setup that Kinects are, and even when every parameter is followed, the calibration must be very good and the iPi Actor set up correctly. Once all of that is in place, though, it is an excellent mocap solution, and I would put my animations up against any other budget solution available now. It is not a real-time solution, but in my personal experience the exported animations need far less cleanup than most real-time solutions, especially now with v4, at least for standard-sized human figures. It gets a bit tricky when acting as a non-standard character, but that is not impossible if you know in advance what character you are performing for.
Of course, not all users want to put in the time to get good results; they would rather get an instant recorded export and then spend hours upon hours cleaning it up, and more power to them. I handle 90% of this very quickly right inside iPi Studio before export, especially since the recent updates that fixed the upper-arm twist issue. You do have to know what to look for and how to fix it to do this efficiently, but at least I know my motions are very close to what was performed, because I am doing the tracking and can see the actual video reference while running it through.
Until you satisfy all the parameters needed to run a PS Eye setup efficiently, you are just going to be disappointed in the results. That is why I believe you would be better off with dual Kinect v2, even though a second sensor requires a compatible second laptop, and even that setup has its limitations.
If your calibration is well done, you should only need to align the iPi Actor in one camera, usually camera 1 or whichever camera has the best view; on refit it will then snap into proper position in all cameras. If it doesn't, the calibration is off, even if you got the notorious "Perfect Green" result.
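The underlying idea is standard multi-camera reprojection: with a consistent calibration, a 3D pose that matches one camera's view also lands where expected in every other camera. Here is a toy sketch of that check using simple pinhole cameras (focal length plus translation); this is general camera geometry, not iPi's actual internals.

```python
# Toy pinhole model: each camera is (focal_length, (tx, ty, tz)).
# With a good calibration, a 3D point fit in one view reprojects
# close to its observed position in all other views.

def project(cam, point):
    """Project a 3D point: translate into the camera frame, then
    apply perspective division and scale by the focal length."""
    f, (tx, ty, tz) = cam
    x, y, z = point[0] - tx, point[1] - ty, point[2] - tz
    return (f * x / z, f * y / z)

def max_reprojection_error(cams, point, observations):
    """Largest distance between projected and observed positions
    across all cameras; large values mean the calibration is off."""
    errs = []
    for cam, obs in zip(cams, observations):
        px, py = project(cam, point)
        errs.append(((px - obs[0]) ** 2 + (py - obs[1]) ** 2) ** 0.5)
    return max(errs)

# Two toy cameras, the second offset sideways; observations generated
# from a consistent setup reproject with essentially zero error.
cams = [(800.0, (0.0, 0.0, 0.0)), (800.0, (1.0, 0.0, 0.0))]
point = (0.5, 0.2, 3.0)
obs = [project(c, point) for c in cams]
print(max_reprojection_error(cams, point, obs))
```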
It is very hard to explain for someone else; it is really trial and error until you figure out how to get the best tracking results you can from a PS Eye setup, and some users simply have better luck than others.
With the new tracking algorithm, accuracy is much higher and full tracking loss should be minimal with PS Eyes. If it isn't, something is not quite right, and you will need to track it down and fix it; that's just how it is. Otherwise, you can use another mocap solution and deal with a whole different set of requirements for its performance.
Just starting out with this program, or any other, there is a learning curve before you get good results; many of the common issues have been explained throughout this forum and can be found by searching for the issue with the search function in the top ribbon.
Not much more can be said, especially for the application you are aiming at, but if you stick with it and follow the correct requirements it does work well, in my own opinion of course.
...