PostPosted: Thu Mar 17, 2016 4:05 am 

Joined: Wed Mar 02, 2016 6:10 am
Posts: 13
Location: UK
I've been getting some pretty decent, albeit mixed, results from my three-camera Kinect v1 setup, but I keep hearing that PS Eye setups are better and need less foot cleanup. I have 16 PS Eyes available and the Pro licence, but initially I'm trying to get just six set up and running correctly. Yesterday I managed to get them calibrated first time with 'Perfect' on all check boxes and shot some test takes. However, as soon as I hit Refit Pose the actor literally screws himself up into a ball and flies off out of the capture area. So then I try manually aligning him in the camera views... but I notice in one or two cameras the footage feels slightly misaligned. I don't know if this is a normal amount of tolerance and to be expected? Eventually I can get it tracking by manually aligning the actor, but the results are very disappointing and not nearly as good as the Kinect setup.

The other problem is that auto-detection of the clothing colours is not working and I have to pick them manually. Would this suggest that the lighting and/or clothing/background is causing a problem? Would it make any difference if I turned off Show Background when tracking? I'm at a loss with this setup and have just ordered a couple of Kinect v2s, as the depth sensors seem a lot more reliable. Is it really worth persevering with different clothing colours? Could it be that the room just isn't suited to PS Eye?



PostPosted: Thu Mar 17, 2016 5:33 am 
iPi Soft

Joined: Wed Jan 11, 2012 6:12 am
Posts: 2142
Location: Moscow, Russia
Hi,
A PS Eye setup requires more practice to use efficiently. Once you get used to PS Eye tracking, it won't seem much harder than Kinect, apart from the increased processing time. But the results are usually worth the effort.
Have you read our online docs on PS Eye tracking yet? They cover the tracking process in detail.
http://docs.ipisoft.com/User_Guide_for_Multiple_PS_Eye_Cameras_Configuration#Processing_Video_from_Multiple_PS_Eye_Cameras

PhilRowe wrote:
However, as soon as I hit Refit Pose the actor literally screws himself up into a ball and flies off out of the capture area.

Before starting the tracking process, you should set up the actor model (proportions and colors) and roughly align it with the video.

PhilRowe wrote:
but I notice in one or two cameras the footage feels slightly misaligned. I don't know if this is a normal amount of tolerance and to be expected?

Depends on what you call slightly; screenshots may help :) Some degree of misalignment is expected. However, it may also indicate problems with the camera coordinates, for instance if the cameras were moved after the calibration video was recorded.

PhilRowe wrote:
The other problem is that auto-detection of the clothing colours is not working and I have to pick them manually.

It usually works well once the actor model is reasonably aligned with the video. However, it takes into account only the active camera (the one whose frame is displayed in the scene). Due to non-uniform lighting, colors in different cameras may vary substantially, and that leads to poor auto-detection as well as tracking problems.
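
To illustrate the effect, here is a rough Python sketch, not part of iPi Mocap Studio (the frame arrays and patch positions are made up), that compares the average color of the same clothing patch across cameras and flags a spread large enough to confuse auto-detection:

Code:
import numpy as np

def mean_patch_color(frame, x, y, size=10):
    """Average RGB of a small patch around (x, y) in one camera's frame."""
    patch = frame[y - size:y + size, x - size:x + size]
    return patch.reshape(-1, 3).mean(axis=0)

def color_spread(frames, patch_centers):
    """Max per-channel deviation of the same clothing patch across cameras.

    frames: list of HxWx3 uint8 arrays, one per camera (hypothetical input).
    patch_centers: (x, y) pixel position of the same body part in each view.
    """
    colors = np.array([mean_patch_color(f, x, y)
                       for f, (x, y) in zip(frames, patch_centers)])
    return colors.max(axis=0) - colors.min(axis=0)

# Synthetic frames: a spread above ~40 of 255 on any channel suggests
# lighting non-uniform enough to hurt color auto-detection.
frames = [np.full((480, 640, 3), c, dtype=np.uint8) for c in (90, 110, 150)]
print("per-channel spread:", color_spread(frames, [(320, 240)] * 3))
# -> [60. 60. 60.], too high

If the spread is large, making the room lighting more uniform usually helps more than re-picking the colors.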

PhilRowe wrote:
Is it really worth persevering with different clothing colours? Could it be that the room just isn't suited to PS Eye?

Distinct colors for different body parts (arms, torso, legs) are very helpful for PS Eye tracking. I cannot say anything about your specific clothing and room without seeing them.

I could provide more specific help if you post links to one of your action iPiVideo files and the scene file with the camera coordinates detected during calibration.


PostPosted: Fri Mar 18, 2016 10:27 am 

Joined: Wed Mar 02, 2016 6:10 am
Posts: 13
Location: UK
Hi, thanks for your reply. Unfortunately I can't upload any footage or screenshots due to NDAs.

This morning (and quite a bit of yesterday) I tried to calibrate a dual Kinect v1 setup using both the maglite and the board methods, with no success :( We have large windows on both sides of the office that I am unable to blank out, and I think these are causing the issues. As I said in another post in the suggestions for new features, I have no problems calibrating the PS Eyes with a maglite and the scene set to darken. Is there any reason the Kinect can't have a similar darken function for calibration?

Out of frustration I went back to the PS Eye setup, which is all taped down and hasn't moved since I got the 'Perfect' calibration. I shot some more footage and repositioned the actor to match the video footage from camera 1; this time the clothing colours were recognised OK with the Auto Detect button. But again, as soon as I press Refit Pose the actor goes into some weird poses. I did an undo and had a look to see if I could line the actor up better in camera 2, and noticed that he was about 30cm away from the video footage. So then I went through each camera to see how the other cameras lined up with the footage. It seems that despite getting 100% 'Perfect' all-green calibration, it's not actually calibrated properly on cameras 2 and 6:

Camera 1: All other visible cameras (3, 4, 5, 6) appear perfect
Camera 2: All other visible cameras (3, 4, 5, 6) are way off to the left by about 1m
Camera 3: All other visible cameras (6) appear perfect
Camera 4: All other visible cameras (1, 2) appear perfect
Camera 5: All other visible cameras (1, 2, 3, 4) appear perfect
Camera 6: All other visible cameras (2, 3) are off to the right by about 30cm
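
For a rough sanity check I worked out what those offsets imply about the stored camera poses; a quick Python sketch (the 3m viewing distance is a guess, not measured):

Code:
import math

def implied_yaw_error_deg(offset_m, distance_m):
    """If points at distance_m appear shifted sideways by offset_m,
    the camera pose is off by roughly this many degrees of yaw
    (small-angle pinhole approximation)."""
    return math.degrees(math.atan2(offset_m, distance_m))

# Camera 2: ~1m offset at an assumed ~3m viewing distance
print(f"camera 2: ~{implied_yaw_error_deg(1.0, 3.0):.1f} deg")  # ~18.4 deg
# Camera 6: ~0.3m offset at the same assumed distance
print(f"camera 6: ~{implied_yaw_error_deg(0.3, 3.0):.1f} deg")  # ~5.7 deg

Offsets equivalent to 5-18 degrees of pose error look far too big to be normal tolerance, which would fit those two cameras having moved or slipped after the calibration video was shot.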

I will try recalibrating again next week and buy some colourful clothes....



PostPosted: Sat Mar 19, 2016 7:59 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

When positioning and pointing PS Eyes at the capture volume, you should have a center point marked on the floor that can be seen clearly in every camera's viewport. Point all cameras at this point, aiming at approximately chest height for most standard-sized performers (5'2" - 6'), using a variable off-ground height of 1.20m - 1.80m for at least 3 cameras, then one high camera for feet tracking at 2 - 2.5m, closer in and pointing down at the performer. (More cameras can be used to fill in any spots, higher or lower, that you wish, but each cam added will add to the tracking processing time.)
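
If it helps, the aim angles implied by those heights are simple trigonometry; a little Python sketch (the distances are example values, not requirements):

Code:
import math

def tilt_deg(cam_height_m, target_height_m, horizontal_dist_m):
    """Downward tilt (degrees) needed to aim a camera at the target point."""
    return math.degrees(math.atan2(cam_height_m - target_height_m,
                                   horizontal_dist_m))

chest = 1.4  # assumed chest height of the floor-center aim point, meters
for name, h, d in [("chest-height cam", 1.5, 3.0),
                   ("high feet cam", 2.3, 2.0)]:
    print(f"{name}: tilt down ~{tilt_deg(h, chest, d):.0f} deg")
# chest-height cam: ~2 deg; high feet cam: ~24 deg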

Also, what are the dimensions of the room you are setting up in?
PS Eyes require a minimum area to get a reasonable working capture volume, or you will be stuck with very limited actions. Raising the cameras higher can increase the volume, but you may lose tracking quality.
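
As a rough sizing aid, assuming the PS Eye's wide zoom setting is around a 75 degree field of view (the commonly quoted figure; check your own cameras), the width one camera covers at a given distance is:

Code:
import math

def coverage_m(distance_m, fov_deg=75.0):
    """Approximate width of the area one camera sees at a given distance,
    treating fov_deg as the horizontal field of view (assumed ~75 deg
    for the PS Eye's wide zoom setting)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

for d in (2.0, 3.0, 4.0):
    print(f"at {d} m: ~{coverage_m(d):.1f} m wide")
# at 2 m: ~3.1 m; at 3 m: ~4.6 m; at 4 m: ~6.1 m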

I found it best to set at least 3 cameras in triangulation at approximately chest height. The triangulation doesn't need exactly the same spacing between each camera, but it is better if they can be placed with some kind of symmetry.

The other 3 can be set where you wish in between this triangle, but it is better if you use 2 directly side-on, to better capture the depth of movements (forward and backward), and have at least one cam very high, pointing downward, to better capture the feet and keep them flatter on the floor; without this high view the ankles will tend to twist weirdly during tracking. Depending on the ceiling height of the room, you could also have a camera pointing directly down on the performer, but this requires a minimum 10' ceiling to work optimally. (I personally use 2 side cameras at a 2.40m height, closer in and pointed mainly to capture the lower back, hips and feet in upright positioning; this also works better on full-body ground actions.)
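
If you want to rough out a symmetric layout on paper first, here is a toy sketch that spaces cameras evenly on a circle around the center floor mark (purely illustrative; calibration finds the real poses, as said above):

Code:
import math

def ring_layout(n_cams, radius_m, heights_m):
    """Evenly spaced camera spots on a circle around the center mark.
    Returns (x, y, height, yaw_deg_toward_center) per camera."""
    spots = []
    for i in range(n_cams):
        a = 2.0 * math.pi * i / n_cams
        x, y = radius_m * math.cos(a), radius_m * math.sin(a)
        yaw = math.degrees(math.atan2(-y, -x))  # face the origin
        spots.append((round(x, 2), round(y, 2), heights_m[i], round(yaw, 1)))
    return spots

# Three chest-height cameras plus a high feet camera, assumed 3m radius
for spot in ring_layout(4, 3.0, [1.5, 1.5, 1.5, 2.3]):
    print(spot)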

Five to six cameras is really all that is needed for single-actor tracking (I use the Basic version, so I'm capped at 6 anyway), and that keeps processing times reasonable with a good GPU. More cameras may be needed when tracking multiple performers, especially if the actions happen in close proximity to each other.

You will also get much better tracking results using tighter-fitting clothing and deep, saturated colors (solid medium blues, greens, black and red are good choices).
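
If you want to check a color before buying shirts, its HSV saturation is a decent proxy; a rough sketch using Python's standard colorsys module (the thresholds are my own guesses, not iPi figures):

Code:
import colorsys

def is_mocap_friendly(rgb, min_saturation=0.5, black_value=0.15):
    """Rough test for a deep, saturated clothing color (RGB 0-255 in).
    Black is the exception: it reads as near-zero brightness rather
    than high saturation, so very dark colors pass too."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return s >= min_saturation or v <= black_value

print(is_mocap_friendly((30, 60, 160)))    # deep blue  -> True
print(is_mocap_friendly((20, 20, 20)))     # black      -> True
print(is_mocap_friendly((200, 200, 190)))  # pale grey  -> False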

Very bright lighting isn't needed, especially pointing right at the performer; in fact, avoiding it is advantageous. You really just want consistent ambient lighting in the room, enough that the cameras can pick up the colors similarly in each camera, although each camera will see the colors a little differently. (If you have a light directly over the performance area, turn it off or disable the bulbs while recording for better results; it can cause a washed-out color appearance and adverse shadowing on the video, and PS Eyes are very touchy about this.)

A tight long-sleeved shirt under a T-shirt can help if the room isn't all a neutral color. In my case I stopped using the long sleeves and it tracks the performer's bare arms fine, together with black gloves and Move controllers in the hands.

If you have the minimum room size, good lighting and the proper clothing, the program should take over without much more effort and calibrate perfectly, with a missed light-marker detection of 0.15% or less and a re-projection figure of less than 1.
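
Those two numbers make an easy pass/fail check; a minimal sketch using the thresholds just quoted (the function name is mine):

Code:
def calibration_ok(missed_marker_pct, reprojection_error):
    """Pass/fail against the thresholds quoted above: <= 0.15% missed
    light-marker detections and a re-projection figure below 1."""
    return missed_marker_pct <= 0.15 and reprojection_error < 1.0

print(calibration_ok(0.10, 0.7))  # True: good calibration
print(calibration_ok(0.40, 1.8))  # False: re-record the calibration video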

There should be no reason to ever mess with manual camera adjustments in the Scene tab; you will just mess things up. The program will position the cameras in-scene where it determines they need to be, even if not at the exact heights off the floor that you measured. When you view them in the Studio for tracking, the actor should line up very closely with the performer in every cam's view throughout the video, with the world cameras positioned very close to where the scene cameras are on the video plane. A little off doesn't matter, but it should be very close.
(Moving closer to one camera and away from others will change the iPi actor-to-video appearance, but that's usually fine as long as you stay a minimum of 2.5 - 3m away from any one camera.)

If they aren't, you have something wrong or a bad camera, because everything should fall right into place with very little effort. Be aware, though, that some PS Eyes are not stiff on the mounting plate and can drift downward without your knowledge, throwing things off later; tape them in place more securely if need be.

You should try to start your recordings from the center point on the floor to colorize and refit your actor (it's easier, but not required). You can then move wherever you want in the capture area, cut that portion in the ROI to start the action where you want, and refit the actor again to that stance.

Once you get it set up and work through the learning curve, you can record and track some really good animations with fewer tracking errors and at much faster speeds. This of course also depends on the model of GPU used, as iPi relies greatly on a good machine with a good GPU for better results and tracking speeds.

I only use 6 cameras, recording very high-energy motions and dances with very good tracking quality (view some animations in the Videos Index). You will always have cleanup, but it should be far less, with a more natural look and flow to your animations than most other prosumer mocap systems in its range, especially for an off-the-shelf camera system.

It may be a bit tedious to get to the "sweet spot", but once there you will find the outcomes much more appealing to you.

All users use the program for different purposes and have their own preferred ways of working, but there are many tips and adjustments that can be used during the tracking phase to help make the process cleaner looking. Or you can do as some do and make all corrections later in your 3D editor of choice. I use my own animations, so I try to get them as clean as I can inside iPi Studio before I export them, even if this takes a bit more time, as iPi does much (not all) of the work automatically when used optimally. (My opinion.)

Outcomes also depend greatly on the characters used and the quality of the weighting on each mesh, so there is no real one-size-fits-all method; animation layers are your friend :)

Hope you get the PS Eye setup working optimally, as it is a good system in my opinion. I do what's stated above with great results, but your results may vary.

...


PostPosted: Mon Mar 21, 2016 3:17 am 

Joined: Wed Mar 02, 2016 6:10 am
Posts: 13
Location: UK
Thanks for your reply Snapz! Great stuff, very informative and much appreciated.

I should probably say that I'm not new to motion capture; I've been using Vicon systems for the last 10 years. Whilst I don't expect to get the same level of fidelity out of this system, I'm very interested and excited to see how far it can be pushed. It seems entirely possible that one or two of the cameras have slipped or been knocked by someone, so I'll give the PS Eye setup another try this week. I need to buy some long-sleeved t-shirts first! :)

Today I'm going to try out one Kinect v2, as it doesn't require calibrating, and I'm curious to see how it compares to the three-Kinect setup I had working previously until I had to dismantle it. Unfortunately, the room I'm setting up in (which is temporary) has big windows that blind half the cameras with a 'bloom' effect when I try to calibrate with a maglite, so the whole calibration process is proving to be a bit of a nightmare.



PostPosted: Mon Mar 21, 2016 6:57 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

I also used Vicon out of a studio a few years back, and it is quite different from iPi: much more complex in post-processing after the capture, and actually with not as good results as I get from this little off-the-shelf 6-camera system and its easier post-processing, since I moved to my smaller in-house "studio".

With only one Kinect v2, you're not gonna get much out of it compared to 3 Kv1s. You will have a wider FoV, slightly better video quality, slightly better feet tracking, and the ability to move a little closer to the camera due to the wider FoV (which for its users I guess is a big leap forward). But neither Kinect camera could keep up with what I needed from performances, and the cleanup was much greater. (I personally use my animations rather than just making them for sale and leaving others to worry about the cleanup, so I like the best I can get out of a system before it goes into post editing.) So I just stuck with the PS Eye setup; there's no sense in having both, since my PS Eyes are permanently set up in fixed positions, never move, and can be used day or night without issues.

2 Kv2s will not perform as well as 5-6 PS Eyes (or more) in tracking quality. (I haven't tried 4 Kv2s, simply because of the need for four separate compatible machines to run them.) But many people do use Kinects for ease of setup and less stringent clothing/lighting requirements; quality is in the eye of the beholder, and other factors can make PS Eyes less practical. Both systems will give you motion capture animations, though :)

Neither setup works optimally with bright, unshaded light in the background: with the Kinects it's mostly interference from the sun's IR, with the PS Eyes it's a color-washout effect. My studio has windows on 3 sides and a 4 ft roof overhang with roll-down plastic slat shades, which read as a backlighting effect in the videos without much actual sunlight interfering during the tracking process. (I still have to use low-wattage internal lighting if it's cloudy out, because I use lower exposure and gain settings than the defaults.)

The need to use a separate compatible machine for each Kv2, with distributed recording, isn't optimal, but it's the only way right now. I am waiting for Greenlaw to put out some demonstration videos of a 3 or 4 Kinect v2 setup when he gets time, or anyone else for that matter who is running one, just to demonstrate the tracking results on a character (unedited in a 3rd-party 3D editor).

Still, in order to get a workable capture volume, even with just the dual Kv2 setup, you need 16-20 ft of spacing between them in the 180-degree arrangement, or you will limit your performance area. You could use the 90-degree setup instead, but to me it doesn't work as well, and you still need equal spacing for the maximum capture volume. That maximum is smaller, by many feet, than the volume a PS Eye setup will capture and track in, if you have the studio floor area to exploit it, or set up outside to take advantage of this as you have before.
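
To see why that spacing matters, assume each Kv2 tracks reliably from roughly 1.5m to 4.5m (commonly quoted figures, not official ones). With two sensors facing each other, the usable strip is where both working ranges overlap:

Code:
def overlap_m(spacing_m, near_m=1.5, far_m=4.5):
    """Length of the strip between two facing sensors that sits inside
    both depth ranges (near_m/far_m are assumed Kv2 working limits)."""
    start = max(near_m, spacing_m - far_m)  # far enough from camera A
    end = min(far_m, spacing_m - near_m)    # far enough from camera B
    return max(0.0, end - start)

for s_ft in (10, 16, 20):
    s_m = s_ft * 0.3048
    print(f"{s_ft} ft apart: ~{overlap_m(s_m):.1f} m of usable depth")
# 10 ft: ~0.0 m; 16 ft: ~1.9 m; 20 ft: ~2.9 m

At 10 ft the two working ranges barely overlap, which is why the 16-20 ft figure matters.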

Of course, they are two different systems that produce two different results; it just depends on which you prefer and how much post-process editing you want to do.

...


PostPosted: Tue Mar 22, 2016 11:06 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

Oh, by the way, when using multiple PS Eyes, I wanted to mention that when you first open your video in Studio, the cameras will NOT all show the Actor matching up with the performer in the viewports before the refit; this is fine.

I just didn't want anyone to think that before you refit the Actor, all cams should show the Actor exactly positioned over the performer on the floor grid; this will never happen! The floor grid usually looks wrong too, until you actually refit your Actor, and then it should all come into place correctly.

I would also suggest setting the flexible or very flexible spine option BEFORE you hit Refit Pose and keeping it throughout the tracking process. For human motions and sitting actions you should always choose flexible spine only, NOT very flexible spine, as the latter will cause an "ants in the pants" appearance on playback.

I am sure the "trick" below isn't strictly necessary, but I always like to use it. It works with PS Eyes or Kinects when the floor grid doesn't come in square to the video scene:

I hope what I am saying here is clear. Because the floor grid usually comes in at an angle to the scene, the trick, BEFORE you hit the Colorize or Refit Pose button, is:

1. Align the Actor closely in the camera you will use as your primary tracking cam (you should always try to track with primarily one or two cameras throughout, usually the ones the Actor faces at the start).
2. Once the Actor is closely aligned in that cam (do NOT hit Colorize or Refit Pose yet), hit ONLY the re-center Actor coordinates button in the Scene tab.
3. With the Move tool selected, turn the Actor very slightly left or right, then hit the re-center button again. The Actor should snap right over the performer in that cam's view, with the floor grid set more squarely to the room in all cams; use a reference line in the video to set it.

The SECOND re-center click causes the "snap" effect. You can then slightly turn the Actor several times to get the grid more square to the room; done right, it will make it easier to use the IK tools to adjust bones when needed.

(I also do this, though it's not really required: I slide the Actor left or right and hit re-center again to move the floor grid closer to my center point on the floor. Moving the Actor left moves the grid right; the "snap" effect occurs again when you hit the re-center button. Repeat until the grid center is closer to the center floor mark.)
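
If the left-moves-right behavior seems odd, it is just relative coordinates: re-centering makes the Actor the new origin, so everything else appears to shift the opposite way. A toy illustration, nothing to do with iPi Studio's internals:

Code:
def recenter(actor_x, grid_x):
    """Make the Actor the new origin: the Actor's coordinate becomes 0
    and everything else shifts by the opposite of its old position."""
    return 0.0, grid_x - actor_x

actor_x, grid_x = 0.0, 0.0
actor_x -= 0.5                         # slide the Actor 0.5 m to the left
actor_x, grid_x = recenter(actor_x, grid_x)
print(grid_x)                          # 0.5: the grid lands to the right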

Then load your Actor file or set up a new Actor, align the bones in the scene, auto-detect the colors and hit Refit Pose. (Once it's done to your liking, do NOT hit the re-center button any more when/if you move your Actor to refit at another point in the ROI!)

Just thought I would mention this; I am not sure how many users know it can be done.

It is also fine, when using the PS Eyes, to see the very bottom of the performer's feet under the Actor's feet after the refit; this is actually how it should look when the floor-height calibration is correct. (The floor height in the scene should not need to be moved up or down more than 1 cm, but if you do move it, the Actor's height should be adjusted accordingly.)

To me, a correctly set up and scaled Actor is vital to get the best tracking with PS Eyes. If you are getting an extreme number of tracking errors when you feel you shouldn't be, something isn't quite right, possibly with the Actor scale. You will most likely still experience some quirky upper-arm spins during tracking; some may need to be pulled with the IK tools and refit to be corrected, but for the most part jitter removal will straighten them out, or the Refine button can help in those areas.

Good Luck!

...


PostPosted: Wed Mar 23, 2016 6:50 am 

Joined: Wed Mar 02, 2016 6:10 am
Posts: 13
Location: UK
Good info, thanks! I will probably get around to trying the PS Eye setup again in the next couple of weeks so I'll refer back to this and let you know how I get on. I was able to get some surprisingly usable data from the Kinect 2 the other day so I have a fair chunk of cleaning up to do.

Cheers!



PostPosted: Wed Mar 23, 2016 9:15 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

The single Kv2 worked pretty well for basic actions. I even showed a couple of samples of my results in the Videos Index, using just the integral auto-clean tools in iPi on a "Heavy Style" auto-rigged character, even including 360-degree spins, with pretty good results.

Then I tried the dual setup at 180 degrees and the results were even better. I just had to reconfigure an older PC I had lying around (with a new USB 3 motherboard and i3 processor) to handle the second cam; it was $250, but I didn't worry since, having resold my Kv2s, I now have a brand-new Win 10 computer for the family to use.

Then I attempted some very complex, fast actions with it, and they showed their limitations during and after the tracking process. I didn't get a chance to try them with the recent update made for the Kinects, though. I do still have one K360 that I threw back on to test the new update, and I thought I noticed some improvement in feet stabilization during tracking, but I didn't do a full recording to actually use the data.

The biggest problem I see with the Kinect is that the movements aren't really as natural as I get with the PS Eyes, so yes, more actual corrections are needed during post-processing to get there. Most of it shows in the legs, where I call it "Lazy Leg Syndrome": the knees/hips/spine don't really bend/twist naturally in many longer sequences (using the same parameter settings I use with the PS Eyes). There is also the need to use a Move controller on the head; without it the head tracks poorly, and you will have to manually add all the correct head rotations later.

I agree Kinects are easier to set up if you don't have a permanent fixed setup for the PS Eyes, or if you just need some quick captures of basic motions with usable data and faster processing from slower machines. This also makes them appealing to a wider range of users, although the Kv2 needs a compatible machine for each cam, so I don't know how cost-effective that is for many.

iPi has put the Kinects to far better use than any other program has so far, and many love the outcomes of their captures. I just don't need two setups, so I went with the PS Eyes for more versatility and less cleanup, which for me is mostly just in the arms; I hardly have to touch the rest of the body, and when I do, it's usually just a simple global layer adjustment in certain areas. I also don't get as much of the "floaty" appearance in the hip translation as I do from Kinect data, which takes more work to correct.

It would be nice if you could post some shared video upload links here of your attempts with either setup, especially if you can get the PS Eye setup working; it's easier to visualize any issues you may be having and suggest corrections that may help.

Good Luck with either set up!

...


PostPosted: Wed Mar 23, 2016 4:42 pm 

Joined: Wed Mar 02, 2016 6:10 am
Posts: 13
Location: UK
Thanks :) Unfortunately I can't post any work-related material until it is in the public domain, which makes things a bit difficult in terms of problem solving. I will start doing some home experiments with all these setups and post the results; I also have 2 Kinect 360s, 2 Kinect v2s and 6 PS Eyes at home.

I'm finding with the depth sensors that I can get very usable data from the knees up. The feet are all over the place even with 3 Kinects, but it's relatively easy to tame them in MotionBuilder with the solver set up for floor contacts on the feet and toes, damping stabilisation, etc. Then I'll usually just use what's left as reference, delete, and hand-key where necessary. The spine and shoulders usually need reposing, but this is something I've come to expect even when working with very clean data from hired mocap shoots. In terms of the 30fps capture rate, I'm really surprised how well the Kinect 2 performs; I did a very quick test shoot of some martial arts moves at home and it matched them very well indeed. Generally I speed up all my mocap by around 10% anyway and then re-time and repose for dynamic effect.
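
For the 10% speed-up, the retime is just a uniform scale on key times; a toy sketch of the arithmetic (MotionBuilder handles this internally, the key data here is made up):

Code:
def retime(key_times_s, speedup=1.10):
    """Uniformly speed up an animation: divide every key time by the
    factor. A 10% speed-up turns a 5.0 s clip into ~4.55 s."""
    return [t / speedup for t in key_times_s]

keys = [0.0, 1.0, 2.5, 5.0]
print([round(t, 2) for t in retime(keys)])  # [0.0, 0.91, 2.27, 4.55]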

I'm really interested to see how the PS Eyes perform, and I've ordered some colourful plain t-shirts. Even with just the depth sensors this program is a godsend. I no longer have to plan and wait six months to do a mocap shoot; I can get up from my desk and capture on the spur of the moment. Which is actually one of the upsides of the depth sensors... it means I don't have to wear colourful t-shirts to work ;)


