...
Don't worry about the hand position during Track Forward; it should refit close enough to the correct position for tracking. And yes, the hands will always snap straight in line with the forearms while tracking forward. If you start in a T-pose, keep the palms down and the thumbs at a natural, slightly downward angle.
You can apply the Move data at the start frame either before or after you run the Refine pass. I do it before, partly out of habit and partly because it retains the wrist rotations while refining.
You should be able to use the new tracking algorithm without many issues, but again, this depends heavily on several other factors you may not be meeting fully, so some tracking loss may still occur.
95-100% GPU usage while processing is exactly what you want. With a 980 Ti you should see close to 3 fps while tracking and close to 3.5 fps while refining with the new algorithm at the low-resolution speed setting; is that what you're seeing? If you are using high-resolution tracking, which you probably are at this point, you will lose about 1 fps, and you will lose considerably more speed with the old algorithm. But if that's what it takes to get better results until you get the hang of the program a bit more, then it is what it is.
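To put those speeds in perspective, here's a quick back-of-the-envelope estimate of processing time. This is plain Python, not anything built into iPi, and the 60 fps recording rate and 30-second take length are just example numbers:

```python
# Rough estimate of processing time from tracking speed (illustrative only).
capture_fps = 60          # example: PS Eye recording rate
clip_seconds = 30         # example: length of the take
total_frames = capture_fps * clip_seconds

tracking_fps = 3.0        # ~980 Ti, new algorithm, low-res speed
refine_fps = 3.5

tracking_minutes = total_frames / tracking_fps / 60
refine_minutes = total_frames / refine_fps / 60

print(f"{total_frames} frames: ~{tracking_minutes:.0f} min tracking, "
      f"~{refine_minutes:.0f} min refining")
# 1800 frames: ~10 min tracking, ~9 min refining
```

So even a short take is a sit-back-and-wait process, which is why getting the capture right the first time matters so much.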
As I stated before, using PS Eye cameras means the required parameters are much stricter and must be followed consistently. Be aware that the iPi scene light also needs to be placed properly, since iPi tracking uses directional lighting, and this should be reflected in the lighting of your capture area: enough light for all cameras to see the Actor's colors clearly, with a brighter light coming from a single direction onto the Actor, while still avoiding harsh self-shadows or floor shadows. This is why several lower-wattage light points around the actor work better than fewer high-wattage bright lights.
I am not going to say it is easy for every user to accommodate what PS Eyes need for best results, but if you stick with it long enough and consistently record with the proper settings and setup, you will eventually get better results. Without meeting that system's requirements, though, it may frustrate you more than convince you that good results are possible.
Many don't like the tracking speeds with PS Eyes, as it is a longer process, but personally, since I get good tracking results, I feel the time spent processing is less than the hours spent cleaning up poor real-time mocap from other budget systems, because most of the cleanup can be done easily, and largely automatically, right inside iPi Studio.
You should probably stick to shorter, easier motions at this point rather than long, drawn-out performances until you get a better feel for the program with PS Eyes and Move controllers, then work your way up to faster and more complex actions. Work out a consistent process that suits you, but there is a processing path that should be followed for best results and then stuck with.
First, work on getting the best calibration you can; this is critical for the best tracking results. Once the video and calibration files are opened in Studio, it is best to spin the floor grid square to the Actor first, before refitting the Actor; this will help later when the IK Move/Rotate tool is used.
Second, improve your capture area, lighting, and camera positioning, aiming all cameras at one single spot at the center of the floor. It is best to use the viewport grid to achieve this, with the center point of the grid at hip level, or as close to that as possible (a rough sketch of the aiming geometry follows).
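Just to illustrate that aiming idea (this is not an iPi feature, and every number here is a made-up example), the pan and tilt needed to point evenly spaced cameras at a single spot at hip height works out roughly like this:

```python
import math

# Aiming evenly spaced cameras at the center of the capture area at hip height.
# All numbers are made-up examples, not iPi requirements.
num_cameras = 6
radius_m = 2.5          # distance of each camera from the center of the floor
camera_height_m = 1.5   # mounting height of the cameras
hip_height_m = 1.0      # target point: the grid center at hip level

for i in range(num_cameras):
    angle = 2 * math.pi * i / num_cameras
    x, y = radius_m * math.cos(angle), radius_m * math.sin(angle)
    # Pan so the camera faces the center, tilt down toward hip height.
    pan_deg = math.degrees(math.atan2(-y, -x))
    tilt_down_deg = math.degrees(math.atan2(camera_height_m - hip_height_m, radius_m))
    print(f"cam {i}: pos=({x:+.2f}, {y:+.2f}) m, "
          f"pan={pan_deg:+.0f} deg, tilt down={tilt_down_deg:.1f} deg")
```

The exact numbers don't matter; the point is that every camera converges on the same spot, which is what the viewport grid check is really verifying.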
Third, wear proper contrasting-colored, tighter-fitting clothing; even light-material, darker-colored gloves and a darker head wrap will help greatly.
Fourth, set up the iPi Actor scaling and skeleton properly. Lowering the leg length to get the hips a bit closer to the center of mass is usually required, and the sizing sliders for overall body mass should be set low; then match the rest of the sliders so the Actor is slightly thinner than the actual performer, for better results (see the small sketch below for what I mean by slightly thinner).
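To make "slightly thinner than the performer" concrete, here is a trivial sketch; the measurements and the 0.93 thinning factor are purely my own illustrative numbers, not values from iPi:

```python
# Hypothetical performer measurements (meters) and the slightly-thinner targets
# I would dial the Actor sliders toward. The 0.93 factor is illustrative only.
performer = {"chest_width": 0.36, "waist_width": 0.32, "arm_girth": 0.11}
thinning_factor = 0.93

actor_targets = {part: round(size * thinning_factor, 3)
                 for part, size in performer.items()}
print(actor_targets)
# {'chest_width': 0.335, 'waist_width': 0.298, 'arm_girth': 0.102}
```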
Fifth, run Track Forward with whatever processing settings work best for you. You may need to stop and restart the tracking at some points; this is common, but in my opinion there shouldn't be excessive tracking losses, or something else may be wrong.
Sixth, run the Refine pass, either forward or backward, with or without applying the Moves data first. If you see an area where the tracking was off a bit, you can stop and track forward back over those areas to help clean it up, BUT DO NOT HIT THE REFIT BUTTON WHEN DOING THIS, or it will erase any previously refined frames. Just track forward or backward, then restart the Refine pass a few frames before or after the spot where you stopped to re-track, depending on which direction you were running the Refine.
Once you get to this point, re-apply the Moves data at the ROI start frame for the take (you must first rotate the Moves on the Actor to the correct position using the Move tool). If the hands aren't positioned close to the hand positions in the video, rotate them better and re-apply the data.
Then replay the animation. It will still be a bit jittery, but you are just looking for any glitchy or bad areas that may need more refining, and this is the phase to take care of them.
Once everything is to your satisfaction, you can run the Jitter Removal process, but it's best not to go above the default setting of "2" per body part, except for the head, which you may have to raise as high as "5", especially if you are not using a head Move.
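iPi doesn't document exactly what Jitter Removal does internally, but conceptually it is a per-body-part smoothing pass where a higher setting means heavier smoothing. A minimal stand-in sketch, assuming a simple moving average over each rotation channel, with the head smoothed harder than the rest:

```python
def moving_average(values, strength):
    """Smooth one channel; higher strength = wider window = heavier smoothing."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - strength), min(len(values), i + strength + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# One fake rotation channel per body part, smoothed with different strengths,
# mirroring "2 for most parts, up to 5 for the head" as window half-widths.
jitter_strength = {"spine": 2, "left_arm": 2, "head": 5}
channels = {part: [float(i % 7) for i in range(60)] for part in jitter_strength}
smoothed = {part: moving_average(curve, jitter_strength[part])
            for part, curve in channels.items()}
print(smoothed["head"][:5])
```

The reason not to crank the settings up is the same reason any wide smoothing window is risky: it starts eating real motion along with the jitter.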
Finally, let your animation play back in a loop several times and watch for any areas you might want to address, then scale the ROI to encompass only that area so you don't accidentally mess up the entire ROI. Any changes or re-tracking in those areas will need the process re-run at least from the Refine portion forward, if not including re-tracking forward through the area. There are a few tricks to master when doing this, but you should be able to figure them out.
After all this is done and you are happy, remember that applying the Moves data must always be the final step after any repairs, and definitely before final export, whether you use finger gestures from inside iPi or add them afterward in another 3D editing package.
In my experience with iPi, it is better to export your animations with the trajectory filter set high, 4-5, to pre-smooth the animation. This is my personal preference and is a user choice; it isn't necessary if you would rather smooth the animation some other way in post, depending on your 3D editing package, but pre-smoothing in iPi first does help.
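If you do decide to smooth in post instead of (or on top of) iPi's trajectory filter, the idea is just a low-pass over each exported channel. Here is an illustrative sketch, assuming a simple exponential filter and my own guess at mapping a 0-5 style setting to a smoothing factor; it is not iPi's actual filter:

```python
def ema_smooth(trajectory, filter_setting):
    """Exponential smoothing of one exported channel.
    The mapping from a 0-5 style setting to a smoothing factor is my own guess."""
    alpha = 1.0 / (1.0 + filter_setting)   # 0 -> no smoothing, 5 -> heavy smoothing
    smoothed = [trajectory[0]]
    for x in trajectory[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed

hip_x = [0.00, 0.02, -0.01, 0.03, 0.01, 0.05, 0.02]   # fake noisy positional channel
print(ema_smooth(hip_x, filter_setting=5))
```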
I tried to lay out a basic processing path, so I hope it helps some, though no one can guarantee individual user results.
Good luck, and keep us posted on your progress.
...