Rich wrote a Kinect driver for Player a while back. It basically provides access to much of the relevant data (RGB images, grayscale depth images, etc.). You can use this to collect the data; after that, it's up to third-party software to do the mapping.

I've never used it, so I'm not sure how the images are published from the sensor. That's something you'll have to look into, or Rich Mattes or others can comment on this thread.

I feel like if you have access to PTZ (pan-tilt-zoom) and the accelerometer data, it is not outside of your means to write a program to map the recorded images into 3D space. Others have used the Kinect for 3D mapping before, but I'm not sure if their source is publicly available. That'll take some looking, so:
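For what it's worth, the core of that mapping step is just back-projecting each depth pixel through the camera intrinsics with the pinhole model. Here's a minimal sketch; the intrinsic values are commonly quoted Kinect defaults, not something the Player driver provides, and the function name is mine:

```python
# Hypothetical back-projection of a depth pixel to a 3D point in the
# camera frame using the pinhole model. The intrinsics below are
# commonly cited defaults for the Kinect depth camera (640x480),
# NOT values supplied by the Player driver -- calibrate to be sure.
FX, FY = 525.0, 525.0      # assumed focal lengths, in pixels
CX, CY = 319.5, 239.5      # assumed principal point for 640x480

def depth_to_point(u, v, depth_m):
    """Map pixel (u, v) with depth in metres to camera-frame (x, y, z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# Example: a pixel at the assumed principal point, 2 m away, lands on
# the optical axis: (0.0, 0.0, 2.0).
print(depth_to_point(319.5, 239.5, 2.0))
```

Run that over every pixel of a depth frame and you get a point cloud in the camera frame; the PTZ and accelerometer data would then give you the rotation to register successive clouds into a common world frame.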

Good luck,

On Mon, Jul 25, 2011 at 11:04 AM, Hunter Allen <> wrote:

Hello all,

I was wondering if there is any way of mapping in 3D with Player, preferably
using a camera. I am aware that ROS has a driver that maps with the Kinect
camera. I also know of a 3D laser point cloud driver in Player. Is there any
way to generate a map using either of these? Also, is there a driver similar
to the one in ROS for the Kinect camera?

Many thanks,
-Hunter A.
Playerstage-users mailing list