Can anybody help steer me in the right direction? I'd like to use the greyscale values of the Kinect depth data as an alpha mask for a QuickTime video, so that I can do a live 2D projection map onto a silhouette with a projector.
Once I get that working, I'd also like to be able to easily invert the mask, so I can project onto either the foreground or the background only.
So, in short: I want to mask a video clip using a constantly updating pixel array as the mask — specifically, the Kinect depth data. What would be the best way to do this? Does the Video class extend PImage?
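For what it's worth, here's a rough sketch (in plain Java, with all names hypothetical — this isn't from any library) of the pixel math I'm imagining: copy each depth greyscale value into the alpha channel of the corresponding video pixel, and invert with `255 - value` to flip between foreground and background:

```java
// Hypothetical sketch: depth greyscale (0-255) becomes the alpha channel
// of ARGB video pixels. Not real library code, just the per-pixel logic.
public class DepthMask {

    // videoPixels: ARGB ints from the movie frame.
    // depthGrey:   one greyscale value (0-255) per pixel from the Kinect.
    // invert:      if true, alpha = 255 - depth, so the projection lands
    //              on the background instead of the foreground.
    static int[] applyMask(int[] videoPixels, int[] depthGrey, boolean invert) {
        int[] out = new int[videoPixels.length];
        for (int i = 0; i < videoPixels.length; i++) {
            int a = depthGrey[i] & 0xFF;
            if (invert) a = 255 - a;
            // Replace the alpha byte, keep the RGB bytes of the video pixel.
            out[i] = (a << 24) | (videoPixels[i] & 0x00FFFFFF);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] video = { 0xFF112233, 0xFF445566 };
        int[] depth = { 0, 255 }; // far = transparent, near = opaque

        int[] masked = applyMask(video, depth, false);
        System.out.println(Integer.toHexString(masked[0])); // 112233 (alpha 0)
        System.out.println(Integer.toHexString(masked[1])); // ff445566

        int[] inverted = applyMask(video, depth, true);
        System.out.println(Integer.toHexString(inverted[0])); // ff112233
    }
}
```

In an actual sketch I'm guessing `PImage.mask(PImage)` might do this per-frame without the manual loop, but I haven't confirmed how it behaves with a constantly changing mask image.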
Currently, to test, I'm just using Shiffman's AveragePointTracking example as a basis (it could just as easily be the point cloud example).
Basically, I'm just trying to project video onto a moving foreground object.