I've just noticed that Microsoft has released version 1.6 of the Kinect SDK, which extends the depth data beyond 4 meters. I'm wondering how this might affect the depth data that Processing can get. Can Processing already gather data beyond 4 meters from the Kinect, or does it need the Microsoft SDK's depth information?
I would like to use Processing to project images onto dancers on a stage, which means the Kinect will need to be more than 10 feet away from the stage. Can this already be done with Processing and the Kinect sensor?
I am planning on purchasing a projector this week to start testing the sketches I've been experimenting with over the last month. I suppose I'll figure out how far the subject can be from the Kinect with Processing, but if anybody has more info on how depth is determined in Processing, I'd be grateful.
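For context, the only thing I've found so far on how raw depth becomes distance is the tangent approximation from the OpenKinect community (the Magnenat formula), which Kinect v1 wrappers for Processing commonly use in some form. This is a plain-Java sketch of that conversion, not code from any particular Processing library, and the constants are the ones quoted on the OpenKinect wiki; it at least shows why readings degrade so quickly as you approach the 4 m mark:

```java
public class KinectDepth {
    // Convert an 11-bit raw Kinect v1 depth reading to metres using the
    // Magnenat tangent approximation (OpenKinect community formula).
    // A raw value of 2047 means "no reading" for the v1 sensor.
    static double rawToMeters(int raw) {
        if (raw < 2047) {
            return 0.1236 * Math.tan(raw / 2842.5 + 1.1863);
        }
        return 0.0;
    }

    public static void main(String[] args) {
        // The curve flattens out near the sensor and explodes near the
        // top of the raw range, so precision past ~4 m is very poor.
        for (int raw : new int[]{500, 800, 1000, 1040}) {
            System.out.printf("raw %4d -> %.2f m%n", raw, rawToMeters(raw));
        }
    }
}
```

Running this shows that a raw value around 1000 already lands near 3.8 m, and small raw increments above that jump by metres at a time, which matches the 4 m practical ceiling people report for the v1 hardware.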
I've done a forum search on Kinect, distance, and depth data, but the threads I could find were unanswered questions from people having problems.