...
I recorded this video (link at bottom) showing a non-optimal, not-recommended lighting scenario, using mechanical lighting only for the recording/tracking process. I reduced the camera gain and exposure settings on purpose to produce a noisier, darker tracking video at the 80% JPG compression setting, and the program still tracked well.
As my camera placement is fixed and never gets moved, this recording was tracked with a 5-day-old calibration file and was still accurate; when it's important I always recalibrate before a recording session.
The lighting seen is just five 40W fluorescent screw-in bulbs filtered through white filter paper for ambient light, not pointed directly at the actor; you can see behind the iPi actor how dark the performer is.
I tried to show a variety of ranges of motion and to point out where the "subtle" bouncing on the squat was eliminated by the JR (jitter removal) and TF (trajectory filter) settings; the bounce wasn't really necessary to the overall animation result anyway (and could be re-added easily if I wanted to).
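For anyone curious what that filtering conceptually does to a joint curve, here is a minimal sketch in plain Python. This is not iPi Mocap Studio's actual algorithm, and the window size is just a hypothetical stand-in for the TF strength; it only illustrates how smoothing damps a small high-frequency bounce while keeping the overall motion.

# A minimal sketch (plain Python, not iPi Mocap Studio's actual filter) of what
# trajectory filtering conceptually does: smooth a per-frame joint curve so small
# high-frequency bounces, like the subtle one on the squat, are damped.
def smooth_trajectory(values, window=5):
    """Centered moving average; 'window' is a hypothetical strength knob."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed

# Example: a hip height curve with a small bounce at the bottom of a squat.
hip_y = [0.95, 0.80, 0.62, 0.50, 0.52, 0.49, 0.51, 0.63, 0.81, 0.96]
print(smooth_trajectory(hip_y, window=3))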
The second part of the video shows the uncleaned BVH imported to Biped and the resulting limited cleanup on the test character animation, which didn't take long at all, about 30 minutes. Most of that was pinpointing the areas where keys could be reduced; some hand bone adjustments were made where body penetration occurred on the bow, but not much. Of course I could refine it more, but if you want better, more natural animation right from export with less work, I don't know what you're expecting from a motion capture program in this system's price range.
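The key reduction part of that cleanup is conceptually simple. Below is a minimal sketch in plain Python of the general idea, not Biped's or 3ds Max's own key reducer, and the tolerance value is just an illustrative assumption: drop any key that linear interpolation between its neighbors would reproduce within a small error.

# A minimal sketch (generic Python, not the 3ds Max/Biped key reducer) of key
# reduction: keep only keys that interpolation can't reproduce within a tolerance.
def reduce_keys(keys, tolerance=0.01):
    """keys: list of (frame, value) tuples sorted by frame."""
    if len(keys) <= 2:
        return list(keys)
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        f0, v0 = kept[-1]        # last key we decided to keep
        f1, v1 = keys[i]         # candidate key
        f2, v2 = keys[i + 1]     # next key
        t = (f1 - f0) / (f2 - f0)
        predicted = v0 + t * (v2 - v0)
        if abs(predicted - v1) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

# Example: the nearly linear key at frame 1 gets dropped.
keys = [(0, 0.0), (1, 0.1), (2, 0.2), (3, 0.35), (4, 0.4)]
print(reduce_keys(keys, tolerance=0.02))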
I don't use a high-end machine or components, but the better the video card used, the better the resulting tracking and the faster the processing time (mine is actually a mid-range card, a 750 Ti with an i5 processor, and it processes fine for now).
I use only 5 cameras, and only 3 of them capture the full actor, in a triangular arrangement. The other 2 point only at the hips/lower back when the actor is centered; they are placed at the sides, centered and closer to the actor, and point more downward to track the hips and feet more accurately, which also works well for floor contact actions. (This does change the way I have to perform my camera calibration, but I know how and get very good calibration easily.)
My capture volume is approximately 3m x 3m; any larger and I would need more cameras to catch the hips accurately as the actor moves farther forward or backward. But I can do a lot in a 3m x 3m space, in a 7m x 4m room, and it's what I have to work with for now.
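To illustrate why a larger volume would need more cameras, here is a rough geometric sketch in Python. All of the camera numbers here (position, aim, field of view) are assumptions for illustration, not measurements from my setup: a close, downward-angled hip camera only covers points inside its field-of-view cone, so hips that drift too far forward or backward leave the frame.

# Illustrative geometry only; camera position, aim and FOV are assumed values.
import math

def in_fov(cam_pos, cam_dir, fov_deg, point):
    """True if 'point' lies within a cone of half-angle fov_deg/2 around cam_dir."""
    to_point = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_point))
    norm_dir = math.sqrt(sum(v * v for v in cam_dir))
    cos_angle = sum(a * b for a, b in zip(to_point, cam_dir)) / (dist * norm_dir)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= fov_deg / 2

# Hypothetical hip camera 2 m out from the center, 1.8 m high, angled down at hip height.
cam_pos = (2.0, 0.0, 1.8)
cam_dir = (-2.0, 0.0, -0.8)   # aimed at the center of the volume, at hip height
print(in_fov(cam_pos, cam_dir, 60, (0.0, 1.0, 1.0)))  # hips 1 m forward, inside a ~3m volume: True
print(in_fov(cam_pos, cam_dir, 60, (0.0, 2.0, 1.0)))  # hips 2 m forward, as in a larger volume: False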
I also always use the very flexible spine, head tracking on, and shoulders by video for the best results when doing human actions; it just depends on the actions whether I change that and how I use JR, TF, and Refining during processing. This was again done with JR at 1 on all parts and TF at 1; Refine Forward wasn't used, and it was exported, as is my preference, at TF 5 for 3ds Max.
Really, if you aren't getting similar results under a better recording scenario, I don't know what to say except to try something different from what's being done now.
(Note: there is a glitch when applying the Move data. To stop the jittery effect on export, I must apply the data twice, back to back, on each controller, and it must always be the last thing I run before exporting, or I get weird hand glitching. It usually only takes a minute, so it's no big deal; but if the Move data is not accurate, and sometimes it isn't, it has to be corrected in a 3D editor afterwards.)
Link:
https://youtu.be/n0_SqJQXWwI ... View in HD
...