PostPosted: Wed Feb 13, 2013 10:25 am 

Joined: Wed Oct 10, 2012 3:17 am
Posts: 84
I picked up all of my PS Eyes from Game/Gamestation stores here in the UK for £1 each.


PostPosted: Fri Aug 09, 2013 3:34 am 

Joined: Thu Aug 08, 2013 9:58 pm
Posts: 24
Is there any way to use 2 Kinects with PS3 cams? Imagine 8 PS3 cams + 2 Kinects.


PostPosted: Fri Aug 09, 2013 5:13 am 
iPi Soft

Joined: Wed Jan 11, 2012 6:12 am
Posts: 2355
Location: Moscow, Russia
Mixed depth+color configurations are not supported yet.


PostPosted: Fri Dec 29, 2017 5:04 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

This is an old post... So

Now that the Kinect for Xbox One is in use, it is showing great results with iPi, even when using only a single sensor. Of course, dual sensors are better for optimal tracking results, although 2 computers are required when using distributed recording (each additional sensor requires its own computer).

Still, a single- or dual-sensor setup is easier than 6 PS Eyes, especially with room constraints and special requirements, or if you have to break down after each recording session, or need easier mobility. It always comes down to what types of actions you need to record and how much you want to spend, but the need for an extra computer (or computers) to run two or more Kinect v2 sensors kind of kicks the cost advantage in the rear anyway, as not just any low-budget desktop or laptop will run a Kinect v2.

...


PostPosted: Mon Mar 12, 2018 11:34 pm 

Joined: Mon Nov 06, 2017 11:50 pm
Posts: 28
For starters, I'd recommend depth sensors/Kinects. They're easy to set up and calibrate!

For those seeking high-quality motion capture, the PlayStation Eye is the way to go! It lets you track fast motions such as fighting, energetic dancing, and more!


PostPosted: Thu Sep 13, 2018 6:24 pm 

Joined: Wed Jan 24, 2018 7:40 pm
Posts: 9
Location: Texas
Here is a month-old article that I found on TechRadar; hopefully it can help you decide which one to get.

https://www.techradar.com/news/gaming/c ... -1127315/5


PostPosted: Tue Dec 18, 2018 8:31 am 

Joined: Sat Dec 15, 2018 10:32 am
Posts: 4
I looked at this test with 10 PS3 Eye cameras and I see it still jumps a little.

https://youtu.be/1-jmUafH0Gs

Since my intention is to reduce the need for retouching, I'm back to the question...

If I use 6 hi-res cameras like GoPros at 720p 60 fps instead of 6 PS Eyes (only 320 pixels), will the bigger resolution give me a more accurate result? Or does it not matter?

For me, it's not a problem if the computer takes the entire night to process; I need fine motion without jumps.

So, the question is... Does iPi Soft process high-resolution images internally? Or does it have a limit? Do hi-res images really give more precision?

Would a black cloth (gym knitwear) with a white-line skeleton drawn on it help the optical processing?

What should I do to get the maximum quality capture?


PostPosted: Wed Dec 19, 2018 5:39 am 
iPi Soft

Joined: Thu Apr 09, 2009 6:44 am
Posts: 199
Hello,
Higher resolution gives some benefit, but it is rather minor.
This is because the algorithm matches the whole model to the video, so resolution becomes beneficial only in the case of a very big capture area (when the actor is 7+ meters from the camera).
If you do not plan to use such a big room for capture, the Sony PS3 Eye with VGA resolution will give you the same result in terms of quality.
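To put rough numbers on this, here is a back-of-the-envelope sketch (the vertical field of view and actor height below are assumed values, not measured specs of any particular camera):

```python
import math

# Rough vertical pixel coverage of a standing actor; FOV and height are assumptions.
def actor_pixel_height(distance_m, vertical_res_px, vertical_fov_deg=40, actor_height_m=1.75):
    # Scene height visible at this distance (simple pinhole-camera estimate).
    visible_height_m = 2 * distance_m * math.tan(math.radians(vertical_fov_deg) / 2)
    return vertical_res_px * actor_height_m / visible_height_m

for dist in (3, 7):
    vga = actor_pixel_height(dist, 480)   # PS3 Eye at VGA (640x480)
    hd = actor_pixel_height(dist, 720)    # generic 720p camera
    print(f"{dist} m: VGA ~{vga:.0f} px tall, 720p ~{hd:.0f} px tall")
```

At a typical 3-4 m camera distance the actor already spans a few hundred VGA pixels, so the model fit has plenty of silhouette to work with; the gap only starts to matter at the 7+ m distances mentioned above.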
It is more important to have ambient lighting and good color contrast between the actor's clothes and the background. Also, use different colors for pants, shoes, torso, and sleeves.
More details here:
http://docs.ipisoft.com/User_Guide_for_ ... ye_Cameras

Also, when you start actually using our system, pay attention to this section:
http://docs.ipisoft.com/User_Guide_for_ ... ye_Cameras

_________________
iPi Soft


PostPosted: Wed Dec 19, 2018 6:05 am 

Joined: Mon Aug 03, 2009 1:34 pm
Posts: 2423
Location: Los Angeles
STAIRSFILMS wrote:
Would a black cloth ( gym knitwear ) with some white line skeleton drawn on it help the optical processing?

No. The way the software works is that it looks for a whole human figure in the footage to construct a 3D human form it can track with. Details like a 'skeleton' drawn over the clothing would just confuse the shape the program is looking for.

For RGB video capture, it's best to have solid colors. For the upper torso, a red or green long-sleeved shirt with a black t-shirt pulled over it works well. This helps the software separate the arms from the torso, and the black shirt 'hides' shadows cast by the arms. Dark pants for the legs (they don't have to be black, just a dark color) and dark shoes. No need for gloves on your hands. Avoid glossy, reflective surfaces in the clothing or the background.

Bear in mind that you need contrast with the background, so pick colors that don't blend into it. Mocap Studio will attempt to subtract the background, but why make it harder for the software?
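If you're curious why the contrast matters so much, here's a tiny OpenCV sketch of generic background subtraction (not what iPi actually uses internally, just an illustration, and "capture.avi" is a placeholder clip): anything that looks too much like the learned background drops out of the foreground mask, so clothing that matches the wall effectively vanishes as far as the software is concerned.

```python
import cv2

# Generic background subtraction (MOG2), for illustration only.
cap = cv2.VideoCapture("capture.avi")   # placeholder input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels close to the background model go to 0; distinct pixels go to 255.
    # An actor wearing colors similar to the background leaves holes in this mask.
    mask = subtractor.apply(frame)
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```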

BTW, the above is another reason why I currently prefer depth sensor recording.

With a depth sensor, color does not matter, only the shape. In this case, close-fitting clothing is all I need. With Kinect, larger shoes seem to help with feet tracking, so I usually wear boots. That's pretty much it. This is one of those convenience factors with depth sensors that I mentioned in the other thread.

Actually, with depth sensors, certain colors or surfaces can matter. Generally speaking, you want to avoid solid/flat black because it can absorb the IR rays instead of bouncing them back to the sensor. You also want to avoid glossy reflective surfaces, which might reflect too much room light. This seems more of an issue with older sensors than Kinect 2 but, again, why make it more difficult?

The trade-off is that it's not as accurate as PS3 Eye but it still works pretty well and it's easier to use in a small space. On my personal and freelance projects, I'm usually working solo, so I try to keep things simpler where I can.

Hope this helps.

_________________
Greenlaw
Artist/Partner - Little Green Dog | My Demo Reels (2013), (2015), (2017), and (2019)
Watch a one minute excerpt on Vimeo now!


PostPosted: Sun Apr 28, 2019 8:46 am 

Joined: Thu Sep 04, 2014 9:47 am
Posts: 897
Location: Florida USA
...

Just to be clear, the referenced YouTube video is very old: it looks like it uses iPi v2 and is from July 2, 2014. v3 came out in Nov. 2014, and v4 has resolved a lot of the tracking problems from v3 and is much better IMO. If someone uses PS Eyes and doesn't want to follow the strict requirements for recording and tracking, that is the user's fault, not a limit of the program's capabilities.

(See any of my v3 or v4 sample videos in the videos index; some were recorded right from the iPi Studio viewport and some after importing into the Autodesk FBX Review program.)

These were achieved with very little work inside iPi Studio: basically just tracking them, then using the built-in auto-clean tools. The v4 tracking is much more accurate now, especially the feet, which are more solidly planted to the floor. They were done with 6 PS Eye cams, ever since 6 cams were added to the Basic version anyway; some videos in older posts were done "with less quality" using 4 cams, back when I was learning with version 2 of Studio.

With iPi, PS Eye cams can only record at 60 fps max, and your system MUST maintain very close to 60 fps on each cam during recording to avoid dropped-frame skipping issues later in the tracking. So the smoothness you would get from higher-end programs recording at 100 or 120 fps will be diminished a bit, but the resulting iPi animations are still quite good and very usable...
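As a rough illustration of what a small, sustained shortfall costs (my own arithmetic, not an iPi spec):

```python
TARGET_FPS = 60          # PS3 Eye recording rate in iPi
FRAME_MS = 1000 / TARGET_FPS

def dropped_frames(sustained_fps, duration_s=60):
    """Frames lost over a take if the system only sustains `sustained_fps`."""
    return (TARGET_FPS - sustained_fps) * duration_s

for fps in (59, 57, 50):
    lost = dropped_frames(fps)
    print(f"sustained {fps} fps: ~{lost:.0f} frames lost per minute "
          f"(each a {FRAME_MS:.1f} ms gap the tracker must bridge)")
```

Even a couple of fps below target adds up to a lot of gaps for the tracker to bridge over a take.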

There is a specific process that should be followed to achieve better-quality tracking, and the WHOLE power of iPi Studio MUST be used for best results. If you want fast, error-free captures from a single pass, that is not even achievable with the highest-end real-time programs. So compare apples to apples with any other lower-cost mocap solution and you will see how well iPi actually works, "when used correctly".

Although iPi isn't "real time" tracking (real-time tracking causes more headaches in clean-up than simply waiting for iPi to do its thing within itself), simply keep the performance takes short, 1 minute or less, even if the entire recording is much longer, and spend a bit more time using the post-processing capabilities within iPi Studio v4. That means reviewing your tracking several times and fixing areas you feel need improvement prior to exporting, which is very easy once you understand the system and the process. Refinement can be isolated to specific areas of the ROI and should blend back into the prior tracking seamlessly if done correctly and with care; eventually you will understand the benefits of doing so and get more experienced and faster with it. This should be needed only in limited areas or for some extreme motions. Most takes should track fairly well on the first pass; if not, something is not quite right in the camera setup, lighting, clothing colors, or clothing fit. PS Eye usage must conform to the proper specifications; don't expect the program to conform to the user's wants.

As a reference, we can now track a 1-minute recording, or part of a recording take, at 2.5-2.7 fps tracking speed using the low-res speed setting (all we can get from our GTX 970 FTW card), with all options for head, shoulders, and spine set to maximum capability, in under 1 hour, fully refined, jitter-free, and exported, with limited post-cleaning needed. You will always have something to clean up a bit, depending on how accurately the performance was done to match the character it is used on; this is a point that many mocap performers ignore, but it is very important for outcomes later.
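For anyone wanting to sanity-check those numbers, the arithmetic is simple (the pass count below is my own assumption for refinement passes):

```python
CAPTURE_FPS = 60        # PS3 Eye recording rate
TRACKING_FPS = 2.5      # per the figures above (GTX 970, low-res speed setting)

def tracking_minutes(take_seconds, passes=1):
    """Rough wall-clock minutes to track a take, ignoring setup time."""
    frames = take_seconds * CAPTURE_FPS
    return frames * passes / TRACKING_FPS / 60

print(tracking_minutes(60))            # one pass over a 1-minute take: ~24 min
print(tracking_minutes(60, passes=2))  # a second refine pass still fits in ~48 min
```

So a full tracking pass plus one refine pass on a 1-minute take lands under the hour mentioned above.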

If you can mount your cameras solidly in fixed positions, correctly, and never move them (though this is not required), then the program is basically plug-and-play after that, with no real need for tedious pre-session calibration each time (it takes me about 5 minutes from plugging in my 6 cams to actually recording a performance). As long as the cameras were never moved, they will always fall within acceptable calibration for each recording session day, but you can also re-calibrate at any time afterward if something seems off. The calibration per day is calculated from what the cameras see during that day's performances and can be done even after the recording was made, AS LONG as it is done before any cameras were moved. Move the cameras after a performance session and you will lose any possibility of proper calibration for the previously recorded videos, so be highly aware of this. (As a safeguard, it's best to do a calibration before and after a session if you have any concerns.)

There are some advantages for some users with the Kinect v2 sensors, but to me better capture quality isn't one of them, and recording is capped at 30 fps; it's more about ease of setup and use, as has been explained above.

One IMPORTANT note to mention: iPi uses a directional light inside the program, and your real-world studio illumination must match this directional lighting setup, being brighter, but not extremely brighter, from one direction, lighting the performer from that position (usually from the side rather than straight on the front of the performer, we find). The scene light in iPi MUST also be positioned to match: either click the set-to-cam-position button if that is acceptable, or manually move the scene light by click-hold-dragging the blue light text position box so it sits close in space to that real light. Otherwise you can get worse tracking results than you would if this were set up correctly.

Try not to allow too much scene shadowing in the far reaches of the cameras' background view. It is best to have more lights of lower brightness than fewer very bright ones; this comes into play more when trying to use the maximum capture volume of 7 x 7 m (20 x 20 ft) and light the area and performer correctly for better results.

Direct down-shining ceiling lights will not give the best achievable results either, especially if they are directly over the center point of the capture volume; extremely bright lighting isn't required and actually degrades the tracking quality of the system. Another aspect to keep in mind when using PS Eyes is to never allow the lights to shine directly into the camera lenses. Place them higher, pointing downward, and shield them if needed, as PS Eyes are very sensitive to bright light and will wash out the recording color and give poor tracking results.

As to the higher-resolution cameras, they only really deliver extra detail closer to the camera, and iPi tracking doesn't really need that. If you were ever 30 ft away from a higher-resolution camera, the size of the performer in frame and the quality of the recording would diminish, which would still diminish the tracking ability; the iPi trackers have a hard time with a small, distant actor, especially the feet and arms. So it is better to remain in a smaller, confined capture volume for better results. A 15 x 15 ft area using 6 PS Eyes should work well if all other aspects of the setup are followed; it just depends on how many cams are required to cover the user's required recording properly.

Hope this addresses some of your questions.

...

