Posted: Thu Dec 10, 2020 8:00 am
How can I obtain the RGB image from the depth camera (currently an Azure Kinect) with the "stick skeleton" superimposed on it? Pretty much the same as with the Kinect SDK, where you obtain the joints along with their X,Y positions in image coordinates.


Posted: Sun Dec 13, 2020 7:55 am by iPi Soft
If you are only interested in the superimposed image, not the exact joint coordinates, then you can take a screenshot from Mocap Studio or export a viewport video (File > Export Video command). Just set the appropriate view settings: in the View menu, leave only the Video, Bones, and Align Color Video to Depth options on. Then adjust the size of the program window or the width of the side panel to get the desired viewport size.


Posted: Mon Dec 14, 2020 3:52 am
I am interested in generating the image in third-party software (my own), so using the Mocap Studio GUI is not viable.

Using the Kinect SDK I can get the skeleton joint coordinates aligned to the RGB image, so I can paint the skeleton on top of the RGB image in real time (say, in an HTML interface or a Windows GUI), since I am using live preview. A sketch of the kind of overlay I mean is below.
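Here is a minimal sketch of that overlay in Python/OpenCV. It assumes the joints have already been mapped to image (pixel) coordinates; the joint names and the bone list are just an illustration, not the Kinect SDK's actual skeleton layout.

Code:
import cv2          # pip install opencv-python
import numpy as np

# Hypothetical joint positions in pixel coordinates, e.g. obtained
# from the Kinect SDK's coordinate mapper.
joints = {
    "head": (320, 80), "neck": (320, 130), "pelvis": (320, 300),
    "l_shoulder": (280, 140), "l_elbow": (250, 200), "l_hand": (230, 260),
    "r_shoulder": (360, 140), "r_elbow": (390, 200), "r_hand": (410, 260),
}

# Illustrative bone list: pairs of joints to connect with line segments.
bones = [
    ("head", "neck"), ("neck", "pelvis"),
    ("neck", "l_shoulder"), ("l_shoulder", "l_elbow"), ("l_elbow", "l_hand"),
    ("neck", "r_shoulder"), ("r_shoulder", "r_elbow"), ("r_elbow", "r_hand"),
]

def draw_skeleton(frame, joints, bones):
    # Paint the stick skeleton on top of an RGB frame, in place.
    for a, b in bones:
        cv2.line(frame, joints[a], joints[b], (0, 255, 0), 2)
    for x, y in joints.values():
        cv2.circle(frame, (x, y), 4, (0, 0, 255), -1)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an RGB frame
draw_skeleton(frame, joints, bones)
cv2.imwrite("skeleton_overlay.png", frame)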

Another related question: can I access the RGB image while the Kinect is being used by iPi Recorder, or is the device put in "exclusive mode"? If it is exclusive, does that mean I would need to set up a separate RGB camera and calibrate the Kinect(s) against it, so I can "transfer" the 3D coordinates to that camera?


Posted: Mon Dec 14, 2020 5:31 am
I saw in another post that live preview does not stream RGB images, and I guess there is no way to access the RGB image from other software (my own) while the device is in use by iPi Recorder, so the only alternative is a separate RGB camera.

So my question is:
In which coordinate space is the 3D root joint position (the one streamed from iPi Mocap over UDP)?

If it is camera 3D space, I guess I can calibrate another RGB camera against the Kinect (or at least the main one) and then use the calibration matrix to transform positions from Kinect camera 3D coordinates to RGB camera 3D coordinates. Would that work? (The sketch after these questions shows what I mean.)
If it is some other "world" coordinate system, how can I find the transform between that space and camera 3D space?
And what about multiple Kinects? Are the streamed 3D coordinates relative to a single camera? How can I determine which one it is?
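To make the first case concrete, this is the math I have in mind. The extrinsic matrix and the RGB intrinsics are placeholders that would come from my own calibration; all values below are made up:

Code:
import numpy as np

# Placeholder extrinsics from calibration: a 4x4 homogeneous transform
# mapping Kinect camera 3D coordinates to RGB camera 3D coordinates.
T_rgb_from_kinect = np.array([
    [1.0, 0.0, 0.0, 0.05],   # small X translation, no rotation (made up)
    [0.0, 1.0, 0.0, 0.00],
    [0.0, 0.0, 1.0, 0.00],
    [0.0, 0.0, 0.0, 1.0],
])

# Placeholder RGB camera intrinsics (fx, fy, cx, cy).
K = np.array([
    [600.0,   0.0, 320.0],
    [  0.0, 600.0, 240.0],
    [  0.0,   0.0,   1.0],
])

def kinect_to_rgb_pixel(p_kinect):
    # Transform a 3D point from Kinect camera space to RGB camera
    # space, then project it to RGB pixel coordinates.
    p = np.append(np.asarray(p_kinect, dtype=float), 1.0)  # homogeneous
    p_rgb = T_rgb_from_kinect @ p                          # RGB camera space
    uvw = K @ p_rgb[:3]                                    # pinhole projection
    return uvw[:2] / uvw[2]                                # perspective divide

print(kinect_to_rgb_pixel([0.1, 0.2, 2.0]))  # e.g. a root joint 2 m away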

Thanks for your feedback.


Posted: Tue Dec 15, 2020 1:50 am by iPi Soft
If you're satisfied with Azure Kinect DK tracking for your purpose, then you don't need our software at all.
If you want to use the results of Mocap Studio tracking in your software, then getting real-time tracked coordinates is not a supported scenario at the moment (to say nothing of RGB frames).
Yes, an Azure Kinect is used exclusively by one application at a time. A Kinect 2 can be used by multiple applications at the same time.
Getting RGB frames and coordinates from different sources requires time synchronization between the sources, not just calibration of camera coordinates.
We don't use the camera's coordinate system in our tracking. You can determine the position and direction of the camera(s) in our coordinate system from the .iPiScene file (a simple XML-based format), or just by looking at the camera's properties on the Scene tab in Mocap Studio.
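As a rough sketch, a file like that can be read with Python's standard library. The element and attribute names below are illustrative guesses only; the real ones should be taken from an actual .iPiScene file:

Code:
import xml.etree.ElementTree as ET

# Illustrative only: the real .iPiScene tag/attribute names may differ.
# Suppose each camera entry looked like:
#   <Camera Name="Kinect1">
#     <Position X="0.0" Y="1.2" Z="2.5"/>
#     <Rotation Pan="15.0" Tilt="-10.0" Roll="0.0"/>
#   </Camera>
tree = ET.parse("scene.iPiScene")
for cam in tree.getroot().iter("Camera"):
    pos = cam.find("Position")
    rot = cam.find("Rotation")
    print(cam.get("Name"),
          [float(pos.get(k)) for k in ("X", "Y", "Z")],
          [float(rot.get(k)) for k in ("Pan", "Tilt", "Roll")])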


Posted: Tue Dec 15, 2020 8:23 am
I am not satisfied with the Kinect SDK at all, which is why I am using your software; it is far better, and real-time. Congrats on achieving this accuracy.

I managed to calibrate an RGB camera against one of the cameras used in iPi Soft. Right now I can stream the data and update a 3D model with it (including the root position, which I understand places it in the correct world coordinates). The receiving side of my stream looks roughly like the sketch below.
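This shows only the shape of my receiving code; the JSON payload with a "root" field is a placeholder of my own (the actual packet layout is whatever my parser already handles), and the port number is arbitrary:

Code:
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7000))           # placeholder port
while True:
    data, _addr = sock.recvfrom(65535)
    packet = json.loads(data)          # placeholder: assumes a JSON payload
    root = packet["root"]              # e.g. {"x": ..., "y": ..., "z": ...}
    print("root position:", root["x"], root["y"], root["z"])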

So what I am left with now is getting the pose of the Kinect in world coordinates (iPi Mocap). I see that I can get it from the Scene tab when I load an iPiMotion file, but I need this information during a Live session. How can I get it? And what is the order of transforms to go from pan/tilt/roll to a rotation matrix? My current guess is sketched below.
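To be concrete, this is the composition I am assuming for now: pan about the vertical Y axis, tilt about X, roll about Z, applied in that order (R = R_pan · R_tilt · R_roll). Whether this matches Mocap Studio's convention is exactly what I am asking:

Code:
import numpy as np

def rotation_from_pan_tilt_roll(pan_deg, tilt_deg, roll_deg):
    # Assumed convention: pan about Y (vertical), tilt about X, roll
    # about Z, composed as R = R_pan @ R_tilt @ R_roll. This order is
    # my guess, not a documented Mocap Studio fact.
    p, t, r = np.radians([pan_deg, tilt_deg, roll_deg])
    R_pan = np.array([[ np.cos(p), 0.0, np.sin(p)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(p), 0.0, np.cos(p)]])
    R_tilt = np.array([[1.0, 0.0,        0.0       ],
                       [0.0, np.cos(t), -np.sin(t)],
                       [0.0, np.sin(t),  np.cos(t)]])
    R_roll = np.array([[np.cos(r), -np.sin(r), 0.0],
                       [np.sin(r),  np.cos(r), 0.0],
                       [0.0,        0.0,       1.0]])
    return R_pan @ R_tilt @ R_roll

print(rotation_from_pan_tilt_roll(15.0, -10.0, 0.0))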

