PostPosted: Mon Aug 20, 2018 5:59 pm 

Joined: Mon Aug 20, 2018 5:55 pm
Posts: 1
I'm a novice animator looking to add 3D motion capture to my workflow. I'm currently undecided on which would benefit me more: a camera with high fps or one with high resolution, for use with the iPi software or other potential third-party software. If someone can enlighten me on this, thanks! Overall, what's the benefit of high fps at low resolution (below a megapixel) versus a 4K 30 fps camera for mocap? I understand that higher fps lets fast action sequences be captured accurately, but doesn't the low resolution make it hard to determine where the points on the body are?

thanks!


PostPosted: Wed Aug 22, 2018 6:46 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 871
Location: Florida USA
...

Only you can answer which program would work best for your particular needs, how much you want to invest in a system, and whether you have the room to set it up.

There are really only four or five viable types of system available: marker-based, markerless, inertial suits, Kinect, and Lighthouse-based methods.

Marker-based: higher accuracy, higher fps, real-time capture, at a much higher investment cost; additional conversion hardware required; more complex setup and recording methods, best set up in a larger capture area; plug-ins available for real-time use within 3D packages or development engines.

There are only two RGB markerless systems even worth considering. iPi Soft: lower investment, not real-time, using RGB cameras plus additional hardware, or Kinect sensors. Organic Motion: higher investment, real-time, RGB cameras and additional conversion hardware required. Both have very good accuracy with proper setups.

Kinect sensors (used by various developers): lower investment, lower fps. Some implementations are real-time with much less accuracy; some, such as iPi, are not real-time but have higher accuracy (iPi has released a "Live" single-Kinect capture mode in v4; I'm not sure of its quality). iPi is the only system that can make viable use of multiple sensors at this time, though not with real-time capture and only in a smaller, limited capture volume. Research their docs, website, and this forum for more information.

Inertial motion sensor suits (or strap-mounted sensors): wide range of investment, but also a wide range of raw animation quality; higher fps; no cameras or lighting needed; real-time; larger capture volume coverage; cheaper systems are not as accurate; plug-ins available for real-time use within 3D packages or development engines.

Lighthouse-based systems (Ikinema Orion): mid to high investment; real-time, using tracking pucks and/or hand controllers and a VR HMD; small to larger capture volumes depending on the kit purchased; mid-quality tracking results, but it can link with Ikinema Live (additional-cost software) for real-time use in 3D packages, and other more expensive real-time systems can also link with that software.

All systems can be seen in action in YouTube videos or on the corresponding websites, but look for the raw data capture results; many of these systems overhype their performance or try to pass off heavily pre-cleaned recordings as real-time when they aren't.

If you are looking for a lower-investment system with very good accuracy, iPi software with either PS Eyes or Kinect v2 sensors is a viable choice to get started with. PS Eye RGB camera setups require more room and better lighting to work properly; see the iPi docs, website, and this forum for more information.

I personally like my results from a 6-camera PS Eye setup at 60 fps. A Kinect sensor at 30 fps cannot handle faster, more extreme, or more complex motions, even in a dual- or multiple-sensor setup, whereas 60 fps works better for continuous motion recording. Subtle or minimal motion recording can be adversely affected, but the results are still not bad after properly running all of iPi's built-in auto-clean tools.

For ease of setup and use, though, dual Kinect v2 would be a good choice, especially for limited room, and no special lighting is required. Dual Kinect v2 sensors do require two separate computers that can handle a Kv2 connection, due to Microsoft SDK limitations, so iPi's Distributed Recording feature must be used.

Hope this information helps some, but it's best to do your own research into which system best suits your needs, wants, and capture area.

...


PostPosted: Wed Aug 22, 2018 8:39 am 
iPi Soft

Joined: Wed Jan 11, 2012 6:12 am
Posts: 2039
Location: Moscow, Russia
iPi Motion Capture employs markerless technology, so there is no need for a high-resolution picture to track markers. The required resolution depends on the capture volume; PS Eyes shoot at 640x480, and that's perfectly fine for a 7 m x 7 m area.
You are right that higher FPS allows fast motions to be captured more accurately, but it also increases processing time per second of video. So FPS should be selected to maintain a good balance between accuracy and processing speed.
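As a rough illustration of that balance (a minimal sketch with assumed numbers; the tracker throughput of 2.5 frames per second is just an example and depends heavily on your GPU and settings):

Code:
# Back-of-envelope sketch (my own illustration, not an iPi Soft formula):
# how capture FPS affects offline tracking time. The tracker throughput of
# 2.5 frames/sec is an assumed number and depends on GPU and settings.

def tracking_time_minutes(clip_seconds, capture_fps, tracker_fps):
    # Frames to process = clip length x capture FPS; time = frames / throughput.
    frames = clip_seconds * capture_fps
    return frames / tracker_fps / 60.0

for fps in (30, 60, 120):
    print(f"{fps} fps capture -> ~{tracking_time_minutes(60, fps, 2.5):.0f} min tracking")
# 30 fps -> ~12 min, 60 fps -> ~24 min, 120 fps -> ~48 min for a 60-second take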

Snapz wrote:
...
Organic Motion: higher investment, real-time, RGB cameras and additional conversion hardware required. Both have very good accuracy with proper setups.

To our knowledge, Organic Motion is out of business now. This has been confirmed by some of our resellers.
https://pitchbook.com/profiles/company/54374-23


PostPosted: Wed Aug 22, 2018 11:01 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 871
Location: Florida USA
...

Yes, it seems you are correct, Maslov; I was unaware of this change. It seemed to be a working technology, although more costly for a working setup. So there is now only one viable markerless RGB camera video tracking program, and it works well at lower resolution at 60 fps, with fairly easy setup and calibration.

I wonder if they were bought out by a marker-based company like OptiTrack, or a markerless one like Dari or Simi? There is no information in search results or on their social media pages about the reason. Patent infringement, possibly?

It seems to take 120 fps or more to capture finer, smoother results for more subtle motions, but in iPi's approach of tracking the actual PS Eye video, this would increase both processing time and drive storage. Still, the system has shown it tracks well for the majority of motions recorded, and it keeps getting better at a much lower cost.

iPi has released an action cam recording option for building trackable iPiVideo-format sequences, so it is possible to get higher-resolution, higher-fps videos to track. I haven't actually tested this from my end; I only tested the 4-camera sample recordings they released, so I can't comment much further on that method.

Most real-time capture systems are very expensive if you want high-accuracy results, and they're not really in line with the small-studio market.

...


PostPosted: Wed Aug 22, 2018 1:02 pm 

Joined: Mon Aug 03, 2009 1:34 pm
Posts: 2292
Location: Los Angeles
At the risk of repeating info already mentioned here, here's my view on this topic:

If you're an indie artist with a relatively small budget, iPi is a pretty good system. The software is affordable, and the capture hardware is off-the-shelf gaming devices and components. For me, using iPi Mocap Studio started as a hobby for my personal projects way back when the original software was still in beta (when it was called iPi DMC), but since then I've occasionally applied it in commercials and VFX for feature films. It's still mainly a tool for my personal shorts, but I'll probably use the system on another job if the opportunity comes up.

Anyway, what's kept iPi Mocap Studio appealing to me is not just the cost but also the compactness and convenience of the system. I can set it up in my living room at a moment's notice and be up and running in a few minutes. With practice, you can record the needed footage and have it tracked, edited, retargeted and cleaned up in a few hours--quicker if the motion is not complicated.

For me, what's worked best is multiple Kinect 2 devices. What I like is that it doesn't require special clothing or lighting, and it works in small spaces (like my living room). The 'downside' is that each device needs its own computer. (With the older Xbox Kinect you can use a single computer for multiple devices, but the data quality is lower with that device.)

I also have a set of PS3 Eye cameras as a backup. These can record more accurately, but they also require a larger capture space and have special lighting/clothing considerations. The main reason the quality is higher with these cameras is the higher frame rate. (Kinect is 30 fps, PS3 Eye is 60 fps.)

Because the Kinect records 3D data, you can use fewer devices than with RGB cameras. Data from two Kinects is comparable to data from four PS3 cameras. That's probably the practical maximum with Kinect, though. You can add more Kinect devices, but that doesn't seem to appreciably improve the data quality. Adding more PS3 Eye cameras, on the other hand, does considerably improve quality.

The downside to that is, the more cameras you add, the longer the tracking process takes. So if you go with multiple RGB devices like PS3 Eye, make sure your hardware can handle it. A high-end gaming graphics card is probably the key here, especially if you plan to track a lot of footage.
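To put rough numbers on that (a hypothetical sketch, not measured iPi figures; the uncompressed bytes-per-frame assumption is only a stand-in for whatever compression the recorder actually applies):

Code:
# Rough planning sketch: raw footage (and tracking work) grows linearly with
# camera count, capture FPS and take length. Assumes uncompressed 8-bit
# 640x480 frames, which is only an approximation of the real recording format.

def footage_stats(cameras, fps, seconds, width=640, height=480):
    frames = cameras * fps * seconds
    gigabytes = frames * width * height / 1e9
    return frames, gigabytes

for cams in (4, 6, 8):
    frames, gb = footage_stats(cams, 60, 60)  # one-minute take at 60 fps
    print(f"{cams} cams: {frames:,} frames, ~{gb:.1f} GB uncompressed")
# 4 cams: 14,400 frames (~4.4 GB); 6: 21,600 (~6.6 GB); 8: 28,800 (~8.8 GB)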

Regarding resolution, having a high resolution is only important if you have a huge space to record in. Otherwise, it might just be a lot of wasted data that slows down the tracking process. It might be good to have the option but you're probably never going to need it. (Also bear in mind that a larger space is probably going to be more complicated to light appropriately.)
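One way to see why (my own geometry sketch; the ~75 degree horizontal field of view assumed here is for a PS Eye in its wide setting, so treat the numbers as rough):

Code:
import math

# How many pixels land on each meter of the actor at a given distance.
# 640 px across and ~75 deg horizontal FOV are assumptions for a PS Eye
# in its wide-angle setting; other cameras will differ.
def pixels_per_meter(h_resolution, h_fov_deg, distance_m):
    view_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
    return h_resolution / view_width_m

for dist in (3, 6, 12):
    print(f"{dist} m away: ~{pixels_per_meter(640, 75, dist):.0f} px per meter")
# ~139 px/m at 3 m, ~70 px/m at 6 m, ~35 px/m at 12 m -- so extra resolution
# only starts to matter when the cameras have to sit far back in a big space.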

Beyond iPi Mocap Studio, it's assumed you already have basic 3D rigging and animation knowledge, because you're going to need that to get the data onto your characters and to modify the animations or fix errors. This would be the case with any mocap system you decide on. Some users here use Max or Maya for their rigging and final animation. I've been using MotionBuilder for getting data into LightWave, but I'm transitioning to Webanimate or iClone with 3DXchange to work with LightWave. (LightWave has a native retargeting system, but I still prefer third-party options for this.) Actually, Mocap Studio can retarget data directly onto your imported rig, but you'll still need rigging skills in another program to create the rig, whether it's for animation or game creation.

BTW, I'm not saying any of this to discourage you. On the contrary, I enjoy seeing more artists and animators take an interest in iPi Mocap Studio. I'm just giving you a heads up for what to expect if this is your first venture into mocap. Some mocap 'beginners' get into this assuming the process will be easy and automatic, but in reality, there's a lot to learn and many new skills to develop.

But if you don't mind all the work, creating motion capture for your characters can be fun and rewarding.

Good luck! :)

_________________
Greenlaw
Artist/Partner - Little Green Dog | Demo Reel (2017) | Demo Reel (2015) | Demo Reel (2013)



PostPosted: Wed Aug 22, 2018 6:18 pm 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 871
Location: Florida USA
...

Here is a full-speed recording with 6 PS Eyes, recorded today, single-pass tracked, with my basic one-pass processing for auto-cleaning the animation right in iPi Studio v4.

This is the exported video, taken right out of the program to show the resulting animation, with only the hand Moves data applied.

Link: https://youtu.be/lGGsX1o6sTQ

This is what I generally achieve very easily with the PS Eye setup shown in the video, but that also comes from years of working with the program and getting my capture area set up properly. I use a 22 ft x 12 ft area (about 7 x 4 m); the performance area generally used is 3 x 3 m, although I can go a little larger and still be fine.

Just to give a reference for the processing quality in iPi Studio v4: that took just under 1 hour to process to this point, all done automatically by the system's built-in tools.

I also believe iPi has worked on the processing frame-rate drop when more cams are added; it doesn't seem to have as much impact per camera as it used to. I don't know this for certain, just going by what I see, but it's almost the same frame rate from 3 cameras to 6 cameras now (I can only go up to 6 cams). They went to using the full 100% of the graphics card's power, which is better.

Tracking speed definitely depends on the model of the graphics card. This tracked at 2.5-2.7 fps with all Actor settings maxed, on Low Res tracking speed, using a GTX 970 FTW, and Refine ran at 3.0 fps; a GTX 1070 will double that speed. It was a 20-minute tracking process and a 20-minute Refine process; most of the time is spent in these two processes. I also cut the ROI down a bit for this video export, to keep the size down.
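For anyone checking the arithmetic (my own back-of-envelope, using the midpoint of the quoted speeds):

Code:
# Sanity check on the figures above: a 20-minute tracking pass at ~2.6 fps
# (midpoint of the 2.5-2.7 fps quoted for a GTX 970) works through roughly
# 3,100 frames, i.e. a bit under a minute of 60 fps footage.
tracking_minutes = 20
tracker_fps = 2.6
frames = tracking_minutes * 60 * tracker_fps
print(f"~{frames:.0f} frames tracked, about {frames / 60:.0f} s of 60 fps footage")
# ~3120 frames, about 52 s of footage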

...

