Processing Forum
hey all,
so OpenCV has those really nifty functions built into the API, like absDiff, and I'm wondering if there is a Kinect library that does the same?
All the frame subtraction I've come across is for RGB, but really all I need is greyscale. I have a few questions:

is it possible to invoke loadPixels() on just an image, or does it capture the entire window? And once I have my pixels[] array, can I convert that back into a PImage?
And in general, what is the fastest/easiest way to do frame subtraction on a PImage, or with the Kinect?

thanks so much

Replies (1)

hi



basically a Kinect library gives you two arrays: one that holds the color (RGB) values from the video stream, and one that holds the depth data from the depth stream.

The video stream is like a usual webcam image, so there's no way around doing frame subtraction with RGB (or HSB) values there, as you already mentioned.

But the depth image is built from raw values, roughly in the range of 300 to 1200 (the full range is 0 to 2048).
some detailed information about this is here: http://www.ros.org/wiki/kinect_node#Depth_Calibration

so, depending on the library you use, these raw depth values are represented as gray or HSB values.
Some libraries will actually give you direct access to the raw depth values.
Either way, it's possible to do frame subtraction by using just the brightness value.
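To make that concrete, here's a plain-Java sketch of per-pixel frame subtraction on raw depth (or brightness) values. The class and method names, and the threshold, are my own choices for illustration, not part of any Kinect library; in a real sketch the arrays would come from the library's depth callback each frame.

```java
// Frame subtraction on single-channel (raw depth or grayscale) data.
// A pixel counts as "changed" when its value moved more than `threshold`
// between the previous and the current frame.
public class DepthFrameSub {

    // Returns a binary mask: 1 where the value changed, 0 elsewhere.
    static int[] frameSub(int[] prev, int[] curr, int threshold) {
        int[] mask = new int[curr.length];
        for (int i = 0; i < curr.length; i++) {
            int diff = Math.abs(curr[i] - prev[i]);  // absolute difference, like OpenCV's absdiff
            mask[i] = (diff > threshold) ? 1 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        int[] prev = {500, 600, 1100};
        int[] curr = {505, 720, 1100};
        int[] mask = frameSub(prev, curr, 50);
        // only the middle pixel moved by more than 50 raw units
        System.out.println(mask[0] + " " + mask[1] + " " + mask[2]);
    }
}
```

In Processing you'd keep a copy of last frame's depth array, run this over it, and then copy `mask` (scaled to 0/255) into a PImage's pixels[] followed by updatePixels() to display it.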


Since all existing libraries calculate real 3D data (x, y, z coordinates) from these raw depth values, you could also do the subtraction using just the z-coordinate.




here's a (very simple) example from my library: http://forum.processing.org/topic/kinect-library-dlibs-freenect

https://github.com/diwi/dLibs/blob/dLibs/dLibs_freenect/examples/kinect_basic_framesubtraction_simple/kinect_basic_framesubtraction_simple.pde



It's not perfect, but it shows the main point.

It's also possible to use this z-coordinate / raw depth value as a clipping plane.
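A clipping plane in this sense just means discarding every pixel whose depth falls outside a near/far band, so only objects in that slice of space survive. A minimal plain-Java sketch (the names and the 300/1200 band are my own illustrative choices, matching the raw range mentioned above):

```java
// Depth clipping: keep only pixels whose raw depth lies inside [near, far],
// zero out everything else. Useful for isolating a person standing in a
// known distance band in front of the Kinect.
public class DepthClip {

    static int[] clip(int[] depth, int near, int far) {
        int[] out = new int[depth.length];
        for (int i = 0; i < depth.length; i++) {
            out[i] = (depth[i] >= near && depth[i] <= far) ? depth[i] : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] depth = {0, 400, 800, 1500};
        int[] clipped = DepthClip.clip(depth, 300, 1200);
        // pixels outside the 300..1200 band are zeroed
        System.out.println(clipped[0] + " " + clipped[1] + " "
                + clipped[2] + " " + clipped[3]);
    }
}
```

Combining this with the frame subtraction above gives you motion detection restricted to one depth slice, which is often all you need.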




thomas