So, I've been developing an app on a Raspberry Pi that uses the Kinect. I've discovered that the only library I can run on a Raspberry Pi for the Kinect is the "openKinect" library, but it does not have skeleton or body-mask features. I've also discovered that if you run Processing on Windows you have the "kinect4win" library, which does have skeleton and body-mask features.
Is there any way to adapt these features from that library? Or is there a way I can code those features myself?
I've managed to make the mask (depth-to-RGB image), as you can see in this forum thread: https://forum.processing.org/two/discussion/25414/how-align-textures-using-glsl-for-depth-and-rgb-of-kinect#latest
But the mask is very, very noisy, and besides that I would really need skeleton tracking in my app.
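Not from the thread, but since the mask noise came up: one common way to clean up a noisy depth-based mask is a small median filter before thresholding. The sketch below is a hypothetical, self-contained illustration of that idea on a raw depth array (all names are mine, not from any Kinect library):

```java
// Illustrative sketch: denoise a depth frame with a 3x3 median filter,
// then build a binary foreground mask by depth thresholding.
import java.util.Arrays;

public class DepthMaskFilter {
    // 3x3 median filter over a row-major depth image; border pixels are copied.
    static int[] median3x3(int[] depth, int w, int h) {
        int[] out = depth.clone();
        int[] win = new int[9];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        win[k++] = depth[(y + dy) * w + (x + dx)];
                Arrays.sort(win);
                out[y * w + x] = win[4]; // median of the 9 samples
            }
        }
        return out;
    }

    // Binary mask: pixels closer than maxDepth (mm) count as foreground;
    // a reading of 0 means the sensor returned no data for that pixel.
    static boolean[] mask(int[] depth, int maxDepth) {
        boolean[] m = new boolean[depth.length];
        for (int i = 0; i < depth.length; i++)
            m[i] = depth[i] > 0 && depth[i] < maxDepth;
        return m;
    }

    public static void main(String[] args) {
        int w = 5, h = 5;
        // A flat surface at 1500 mm with one speckle outlier in the centre.
        int[] depth = new int[w * h];
        Arrays.fill(depth, 1500);
        depth[2 * w + 2] = 4000; // noisy sample
        int[] smoothed = median3x3(depth, w, h);
        boolean[] m = mask(smoothed, 2000);
        System.out.println("centre depth after filter: " + smoothed[2 * w + 2]);
        System.out.println("centre in mask: " + m[2 * w + 2]);
    }
}
```

In a Processing sketch the same filter would run over the Kinect's depth array each frame before the mask is drawn; temporal averaging across a few frames helps further.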
Answers
@Aurelian24 As far as I understand the situation: for skeletal tracking one always had to use a closed-source "middleware" component, which implemented all the algorithms needed to get from a noisy depth map to high-level semantic features such as gesture detection.
If I remember correctly, a Linux version of that middleware was available at some point too, but none for ARM processors.
If you want to search for open-source reimplementations of this component, useful keywords would be: NITE middleware alternative, open source high-level kinect NUI
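To make the gap concrete that such middleware bridges: the very first low-level step is usually segmenting the nearest object (the user) out of the depth frame. Here is a minimal, hypothetical sketch of that step (the names and the band threshold are my own, not any library's API):

```java
// Illustrative sketch: find the nearest object in a depth frame and
// report the centroid of its pixels, a crude "where is the user" estimate.
public class UserSegmentation {
    // Returns {cx, cy}: the centroid of all pixels within `band` mm of the
    // nearest valid reading, or null if the frame has no valid readings.
    static double[] nearestBlobCentroid(int[] depth, int w, int h, int band) {
        int nearest = Integer.MAX_VALUE;
        for (int d : depth)
            if (d > 0 && d < nearest) nearest = d; // 0 = no sensor reading
        if (nearest == Integer.MAX_VALUE) return null;
        double sx = 0, sy = 0;
        int n = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int d = depth[y * w + x];
                if (d > 0 && d <= nearest + band) { sx += x; sy += y; n++; }
            }
        }
        return new double[] { sx / n, sy / n };
    }

    public static void main(String[] args) {
        int w = 4, h = 4;
        int[] depth = new int[w * h];
        java.util.Arrays.fill(depth, 3000); // background wall at 3000 mm
        depth[1 * w + 1] = 1200;            // two "user" pixels up close
        depth[1 * w + 2] = 1210;
        double[] c = nearestBlobCentroid(depth, w, h, 100);
        System.out.printf("centroid: %.1f, %.1f%n", c[0], c[1]);
    }
}
```

Going from that blob to labelled joints (head, hands, elbows) is the hard part the closed middleware solves, typically with trained per-pixel body-part classifiers.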
Skeleton_Tracking
@totovr76 -- how specifically did you create this skeleton tracking demo -- with what hardware and software setup?
Instructions
I wrote up the instructions in this repository
Hardware
Software
What a wonderful resource, @totovr76 -- your SimpleOpenni repository should be extremely helpful for people trying to put Kinect skeleton tracking into production.