...
Only you can answer which system would work best for your particular needs, how much you want to invest in a system, and how much room you have to set it up.
There are really only 4 to 5 viable types of system available: marker-based, markerless, inertial suits, Kinect sensors, and Lighthouse-based methods.
Marker-based: higher accuracy, higher fps, and real-time capture, but at a much higher investment cost. Additional conversion hardware is required, the setup and recording workflow is more complex, and it works best in a larger capture area. Plug-ins are available for real-time use within 3D packages or development engines.
There are only 2 types of RGB markerless system that are even worth considering: iPiSoft (lower investment, non-real-time, using RGB cameras plus additional hardware, or Kinect sensors) and Organic Motion (higher investment, real-time, RGB cameras and additional conversion hardware required). Both have very good accuracy with proper setups.
Kinect sensors (used by various developers): lower investment and lower fps. Some solutions are real-time with much less accuracy; others, such as iPi, are not real-time but have higher accuracy. (iPi has released a "Live" single-Kinect capture mode in V4; I'm not sure of its quality.) iPi is the only system that can make viable use of multiple sensors at this time, though not for real-time capture, and only in a smaller, limited capture volume. Research their Docs, website, and this forum for more information.
Inertial motion sensor suits, or strap-mounted sensors: a wide range of investment, but also a wide range of raw animation quality. Higher fps, no cameras or lighting needed, real-time, and larger capture volume coverage, though the cheaper systems are not as accurate. Plug-ins are available for real-time use within 3D packages or development engines.
Lighthouse systems (Ikinema Orion): mid to high investment, real-time tracking using tracking pucks and/or hand controllers and a VR HMD, small to larger capture volumes depending on the kit purchased, and mid-quality tracking results. It can link with Ikinema Live (additional-cost software) for real-time use in 3D package development, and other more expensive real-time systems can also link with that software.
All of these systems can be viewed in action in YouTube videos or on the corresponding websites, but look for the raw capture data results: many of these systems over-hype their performance, or try to fool viewers by passing off heavily pre-cleaned, post-processed recordings as real-time when they aren't.
If you are looking for a lower-investment system with very good accuracy, iPi software with either PS Eye cameras or Kinect v2 sensors is a viable choice to get started with. Keep in mind that PS Eye RGB camera setups require more room and better lighting to work properly; see the iPi Docs, website, and this forum for more information.
I personally like my results from a 6-camera PS Eye setup at 60 fps. Kinect sensors at 30 fps cannot keep up with faster, more extreme, or more complex motions, even in a dual- or multi-sensor setup. 60 fps works better for continuous motion recording; subtle or minimal motion recording can be affected adversely, but the results are still not bad after running all of iPi's built-in auto-clean tools properly.
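To put the frame-rate difference into perspective, here is a rough back-of-the-envelope sketch. The limb speeds in it are my own assumed example values, not measurements from any of these systems; the point is just how much motion the solver has to bridge between consecutive frames at 30 fps versus 60 fps.

```cpp
// Rough illustration only: how far a limb travels between consecutive
// frames at 30 fps (Kinect v2) versus 60 fps (PS Eye at 60).
// The speeds below are assumed example values, not measured data.
#include <cstdio>

int main()
{
    const double speeds_m_per_s[]  = { 2.0, 5.0, 8.0 };  // slow swing, fast punch, very fast swing (assumed)
    const double frame_rates_fps[] = { 30.0, 60.0 };

    for (double v : speeds_m_per_s)
    {
        for (double fps : frame_rates_fps)
        {
            // Distance the limb moves between two consecutive frames, in cm.
            const double gap_cm = (v / fps) * 100.0;
            std::printf("%4.1f m/s at %2.0f fps -> ~%5.1f cm between frames\n",
                        v, fps, gap_cm);
        }
    }
    return 0;
}
```

At around 5 m/s that works out to roughly 17 cm of travel between frames at 30 fps versus about 8 cm at 60 fps, which is why the higher frame rate copes better with fast or extreme motion.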
For ease of setup and use, though, dual Kinect v2 would be a good choice, especially with limited room and no lighting required. Note that dual Kinect v2 sensors require 2 separate computers, each capable of handling a Kinect v2 connection, due to Microsoft SDK limitations, so use of iPi's Distributed Recording feature is required.
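For anyone curious why one PC cannot drive both sensors: as far as I know, the Kinect for Windows SDK 2.0 only ever exposes a single "default" sensor per machine (there is no multi-sensor enumeration like the v1 SDK had), which is exactly the limitation iPi's Distributed Recording works around by running a recorder on each PC. Here is a minimal sketch, assuming Windows and the Kinect for Windows SDK 2.0 are installed:

```cpp
// Minimal sketch (assumes Windows + Kinect for Windows SDK 2.0; link Kinect20.lib).
// The v2 SDK only hands back one "default" sensor per machine, so a second
// Kinect v2 needs its own PC, with iPi's Distributed Recording syncing the streams.
#include <windows.h>
#include <Kinect.h>
#include <cstdio>

int main()
{
    IKinectSensor* sensor = nullptr;

    // GetDefaultKinectSensor is the only way to obtain a v2 sensor --
    // there is no call to enumerate several of them on one machine.
    if (FAILED(GetDefaultKinectSensor(&sensor)) || sensor == nullptr)
    {
        std::printf("No Kinect v2 sensor available on this PC.\n");
        return 1;
    }

    sensor->Open();

    BOOLEAN available = FALSE;
    sensor->get_IsAvailable(&available);   // may take a moment to report TRUE after Open()
    std::printf("Kinect v2 on this PC is %s.\n",
                available ? "available" : "plugged in but not ready yet");

    sensor->Close();
    sensor->Release();
    return 0;
}
```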
Hope this information helps some, but it's best to do your own research into which system will best suit your needs, your wants, and your capture area.
...