Kinect for Windows V2 Library for Processing

Comments

  • Hey,

    I updated the library.

    I added a face tracking example for up to 6 users, with mood detection. I didn't do a lot of testing, so please send me feedback.

    I also added skeleton depth and skeleton color map examples.

    There is a bug where you cannot use face tracking and skeleton tracking in the same sketch; I'll try to fix this.

    Thomas

  • Hello

    Thanks a lot for your work

    I was very busy and didn't have the time to work with it, but I will work on it next week and I'll tell you if I see something weird.

  • I'm eager to test it too! I'm on holiday for a week, but when I get back, playing with your library is a top priority! Cheers!

  • Congratulations on your work! Thanks so much.

    Your library works very well, but I have a couple of small problems:

    • I can get the RGB, depth and body images, but I can't get the IR image.
    • Face tracking doesn't work.

    I use Processing 2.2.1 on a PC under Windows 8.1, with the Kinect for Windows 2. I installed the Kinect for Windows SDK 2.0 Public Preview, and not the Kinect 2.0 SDK Beta; maybe that is the source of the problem?

  • Hey stravingo, try installing the new SDK (September). As far as I know, there are a few reports of the IR not working with some NVIDIA cards; I don't know if this was solved in the new update. Try the SDK examples for the IR and face detection.

    I also fixed the face detection and skeleton tracking issue, so now you can use them both in the same sketch.

    Thomas

  • I already have the newest 2.0.1409.10000 version, published on 9/16/2014 :-( The official examples from the SDK give the same result: no IR picture and no face detection. I will try to update my graphics card drivers.

  • edited October 2014

    Hello !

    FaceTracking & face-states work for me! :) But it's very slow, maybe 10 frames/second (max). I tried to optimize the code a bit, but it's still very slow.

    Then I tried reducing the size of the Kinect screen capture; the performance is not fantastic, but it runs significantly faster (around 20 FPS). The dimensions of my screen capture are 1920/8 x 1080/8.

    The color1920 demo works perfectly (and fast).

    Is it normal that face tracking is so slow? Do you think it's some kind of bug or something?

    Thank you again for all you did!

    PS: the skeleton-color demo works incredibly well!

  • hey.

    I don't know why the faceTracking is slow; on my laptop it runs as fast as the skeleton tracking, so it could be the SDK, or something else.

    Try updating your video card drivers and the Kinect 2 SDK.

    Thomas

  • You're right! I updated my graphics card driver and it works well.

    Thank you !

  • edited October 2014

    Hello !

    Does someone know the correct way to use the kinect.getRawDepth() function?

    The function works, but instead of getting a black & white picture with white pixels for the nearest objects and dark pixels for the background, we get a weird "depth structure" with a "repetition" of depth levels from the camera to the background.

    It acts like the depth values came from a modulo function ( % ).

    Do you see what I mean?

    I spent maybe 5-6 hours yesterday trying to convert these values into a single depth range, but I didn't find the correct way to do it. It's probably obvious, but I don't get it...

    EDIT: Actually, I can get it easily if I divide the depth value from the pointCloud by "depthVal" (in your pointcloud demo). But what are the rawDepth values for? Does anyone know?

  • Well, that's the new way Microsoft implemented the depth values: an individual cycle from black to white represents a depth difference of 256 mm, and then another cycle repeats. That's what each "strip" is.

    If you need the raw data, you can activate it in the setup with

    activateRawDepth(true);

    and get it with

    int [] getRawDepth();

    The rawDepth is there so you don't have to unpack the data of a PImage with loadPixels(), which is actually really slow. So if you need each individual pixel you can use:

    int [] getRawDepth()

    This is the same for the color image and the other ones.

    The rawData is in grayscale; the library transforms the incoming depth bytes from the Kinect into Processing gray colors. If you need the bytes with no Processing color conversion, I could easily implement that function.

    Here is how each byte is converted to a gray color with colorByte2Int, which converts a byte to Processing's color format.

    BYTE intensity = static_cast<BYTE>((depth >= nDepthMinReliableDistance) && (depth <= nDepthMaxReliableDistance) ? (depth % 256) : 0);
    colorByte2Int((uint32_t)intensity);
    

    The complete code is in this repository:

    https://github.com/ThomasLengeling/KinectPV2_BuildLibs/blob/master/Build/vc2012/KinectLib_V2.0/KinectPV2.cpp

    around line 389
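
    A minimal sketch of how the raw depth could be drawn from Processing, using the function names above (untested here; the exact API may differ between versions, so check the examples bundled with the library):

    import KinectPV2.*;

    KinectPV2 kinect;
    PImage depthImg;

    void setup() {
      size(512, 424);

      kinect = new KinectPV2(this);
      kinect.activateRawDepth(true);  // name taken from the post above
      // depending on the library version you may also need to enable the depth image here
      kinect.init();

      depthImg = createImage(512, 424, RGB);
    }

    void draw() {
      // one packed Processing gray color per depth pixel (512 * 424 values)
      int[] rawDepth = kinect.getRawDepth();

      depthImg.loadPixels();
      if (rawDepth != null && rawDepth.length == depthImg.pixels.length) {
        arrayCopy(rawDepth, depthImg.pixels);
        depthImg.updatePixels();
      }

      image(depthImg, 0, 0);
    }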

    Thomas

  • edited October 2014

    Hello! Thank you - again - for your message!

    I'm not sure I understand this part very well:

    "The rawDepth is there so you don't have to unpack the data of a PImage with loadPixels(), which is actually really slow. So if you need each individual pixel you can use:"

    That's exactly why I would like to use it, but the "256 mm cycle/strip" stuff is not very easy to work with. I wrote some code to isolate one range, but as I said, it's not easy to work with.

    On the contrary, it's very easy to use the pointcloud data. Ideally, it would be good to be able to get a float[] with only the depth values of the pointcloud data divided by the depth threshold (=> normalized values). Pointcloud data works very fast if you don't try to render the pointcloud (and that's not what I want to do; I want to do some computation on the depth values and move only one point from them).

    "If you need the bytes with no Processing color conversion, I could easily implement that function."

    I need either

    • an int[] with 0-255 values (representing a classic grayscale picture without the "cycle")
    • a function that allows the coder to define how long the "cycle" is
    • a float[] with normalized depth values
    • nothing, because the pointcloud data gives me all I want :)
  • ok.

    I can't change the depth visual representation; it's just the way the depth capture is implemented.

    I could implement normalized depth values; it would just be the depth frame divided by 255.

    float [] getRawDepthNormalized();

    What do you mean by the depth threshold? I could also normalize the point cloud.

    Thomas

  • edited October 2014

    "I could implement normalized depth values; it would just be the depth frame divided by 255."

    Actually, I can't figure out how to use the "rawDepth" correctly, so 0-255 values or normalized values would be just as difficult to use, because of the repetition of the 256 mm field instead of a single field that contains everything (like the pointcloud).

    "What do you mean by the depth threshold? I could also normalize the point cloud."

    I think the best feature would be an array of floats that contains only the normalized z positions from the point cloud (x/y are useless because we know that all the data fits in a 512x424 screen).

    Right now, I use the pointcloud data, but I need to target the z position like this:

    "datas[index * 3 + 2]"

    Each value from the pointcloud is >= 0 and <= 2.3.

    Then, to get the normalized z value of each pixel of my 512x424 depth screen, I need to do

    "datas[index * 3 + 2] / 2.3;"

    So it would be great if I could get the z position simply like this: datas[index].

    I'm sorry for my poor English, I hope you can understand me.

    Thank you again for your time

  • edited October 2014

    Hi Thomas,

    Thanks for all the work you've put into this library. We've been using the Kinect 360s a bit for a project at uni, but I decided to get a Kinect One to try out instead, so your library has been great.

    I'm also trying to get depth values similar to what fanthomas was asking about, so that would be something very handy for me too :) I'm using it to find objects for a robot to pick up. With the 360 Kinect I was using BlobScanner on the depth image, and I would like to be able to do something like this with my Kinect One too.

    fanthomas, is there any chance you could explain a bit more about how you got the depth values using the point cloud? I've only been using Processing for a couple of months, so I'm still struggling with a few things and haven't been able to figure that out.

    Thanks to both of you for your help, Josh.

  • Sure! It's very simple! Here is an example:

    import KinectPV2.*;
    import java.nio.FloatBuffer;

    int nbPixel;
    float depthRatio;
    PImage img;

    KinectPV2 kinect;

    void setup() {
      size(512, 424);

      kinect = new KinectPV2(this);
      kinect.enablePointCloud(true);
      kinect.init();

      depthRatio = 2.3;        // maximum z value found in the point cloud data
      nbPixel = 512 * 424;

      img = createImage(512, 424, RGB);
    }

    void draw() {
      clear();

      kinect.setThresholdPointCloud(depthRatio);
      FloatBuffer pointCloudBuffer = kinect.getPointCloudPosFloatBuffer();

      // each point is stored as (x, y, z), so the z value of pixel i sits at index 2 + i * 3
      img.loadPixels();
      for (int i = 0; i < nbPixel; i++) {
        img.pixels[i] = color(parseInt((pointCloudBuffer.get(2 + i * 3) / depthRatio) * 255));
      }
      img.updatePixels();

      image(img, 0, 0, 512, 424);
    }
    
  • I could implement the point cloud as a 512 x 424 PImage where each pixel holds the same depth as the point cloud,

    and a float [] getPointCloudDepth()

    where you get the same results as getRawDepth(), but with the pixels corresponding to the point cloud. I think this is more useful.
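
    A rough usage sketch for a function like that, assuming it returns one value per pixel of the 512x424 depth frame (the value range is a guess here, so adjust the mapping):

    import KinectPV2.*;

    KinectPV2 kinect;

    void setup() {
      size(512, 424);

      kinect = new KinectPV2(this);
      kinect.enablePointCloud(true);
      kinect.init();
    }

    void draw() {
      // proposed function from the post above; the name and value range are assumptions
      float[] depth = kinect.getPointCloudDepth();
      if (depth == null) return;

      loadPixels();
      for (int i = 0; i < pixels.length && i < depth.length; i++) {
        // map the point cloud z range (roughly 0..2.3 in the earlier example) to gray
        pixels[i] = color(constrain(map(depth[i], 0, 2.3, 0, 255), 0, 255));
      }
      updatePixels();
    }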

  • sounds perfect for me :)


    Just out of curiosity, can you tell me the framerate of the facetracking demo on your computer?

    My results are better now that I've updated my graphics driver, but it's still much slower than the other demos. Is it normal to get less than 30 frames per second when using facetracking (even if I don't check the head state)?

    I think it's weird, because when I see facetracking demos with the first Kinect, they run very smoothly. I assumed it would run even better with the KinectV2, but that's not what I experienced.

    http://channel9.msdn.com/posts/Kinect-for-Windows-SDK-15-Face-Tracking-Seated-Skeletal-Tracking-Kinect-Studio--More

  • Awesome, thanks guys! I should be able to get going now with that.

  • edited October 2014

    It would be great to be able to set the size of the video we get from the Kinect instead of always using a 1920x1080 PImage, because it should be faster to resize it in C++ than in Processing.

    It's not a big need, but I think it could be useful.

    A 512x424 color output would be very useful (I think it makes no sense that Microsoft didn't provide it in their API).

  • Ok.

    I added a couple of functions to obtain the depth values corresponding to the point cloud. You can check out the pointCloudDepth example.

    The faceTracking actually runs really fast on my machine; in Processing it's around 60 fps and it looks very smooth. I still haven't implemented fps capture for each process, i.e. getting the exact fps for each color, depth, IR and skeleton process.

    @tlecoz a resize function could be useful in the near future; it could also easily be implemented using the openCV library or the image function.

    Thomas

  • edited October 2014

    "I added a couple of functions to obtain the depth values corresponding to the point cloud. You can check out the pointCloudDepth example."

    Great! I'm going to check that out! Thank you!

    "The faceTracking actually runs really fast on my machine"

    The problem may come from my computer, I don't know... Thank you for testing.

    "a resize function could be useful in the near future; it could also easily be implemented using the openCV library or the image function."

    Of course, it's not that complex to resize the picture, but it takes time to compute because the image is huge, and it would be better if the resizing were done in C++.

    Actually, I thought about another feature that would be really great to have from C++.

    Now that we get the depth values corresponding to the pointcloud, it would be very useful to get an int[] array that contains the indices of the pixels sorted from the Kinect device (in 3D space) to the background.

    For example, the first entry could be 12345. It represents the pixel at position

    x = 12345 % 512;
    y = 12345 / 512;


    But because it's the first entry, it means it's the pixel closest to the Kinect in 3D space.

    In some cases, we don't need to know the depth value of the pixels; we only need a consistent bounding box of the object closest to the camera (typically, the first 500 values could be interpreted as noise, and the next 100 pixels would enclose a small surface stable enough to use in our projects).

    It's not urgent at all, but I'm sure it could be useful.
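
    A rough sketch of what this could look like on the Processing side in the meantime, assuming a getPointCloudDepth()-style function that returns one depth value per pixel of the 512x424 frame (pixels with no reading come back as 0 and sort to the front, so filter them out if needed):

    // at the top of the sketch
    import java.util.Arrays;
    import java.util.Comparator;

    // Build an array of pixel indices sorted from nearest to farthest,
    // given one depth value per pixel of the 512x424 frame.
    int[] sortedDepthIndices(final float[] depth) {
      Integer[] idx = new Integer[depth.length];
      for (int i = 0; i < idx.length; i++) idx[i] = i;

      Arrays.sort(idx, new Comparator<Integer>() {
        public int compare(Integer a, Integer b) {
          return Float.compare(depth[a], depth[b]);
        }
      });

      int[] sorted = new int[idx.length];
      for (int i = 0; i < idx.length; i++) sorted[i] = idx[i];
      return sorted;
    }

    // Usage, e.g. in draw():
    //   int[] order = sortedDepthIndices(kinect.getPointCloudDepth());
    //   int nearest = order[500];     // skip the first ~500 entries as noise
    //   int x = nearest % 512;
    //   int y = nearest / 512;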

  • Sorry to ask this question, but is there something special to know if I want to use your lib inside Eclipse?

    I have added KinectPV2.jar to my build path, but when I try to run the sketch I get this weird error:

    Exception in thread "Animation Thread" java.lang.UnsatisfiedLinkError: no Kinect20.Face in java.library.path
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at KinectPV2.Device.<clinit>(Device.java:46)
        at Kinect_headControler.setup(Kinect_headControler.java:30)
        at processing.core.PApplet.handleDraw(PApplet.java:1579)
        at processing.core.PApplet.run(PApplet.java:1503)
        at java.lang.Thread.run(Unknown Source)
    
  • Hi guys, I'm quite new to Processing, the Kinect 2 and this library. I'm wondering if this library is able to give me the point cloud in x, y, z like the following Kinect 1 example that uses the SimpleOpenNI library. Thanks in advance.

    import processing.opengl.*;
    import SimpleOpenNI.*;

    SimpleOpenNI kinect;

    void setup() {
      size(1024, 768, OPENGL);
      kinect = new SimpleOpenNI(this);
      kinect.enableDepth();
    }

    void draw() {
      background(0);
      kinect.update();
      // prepare to draw centered in x-y
      // pull it 1000 pixels closer on z
      translate(width/2, height/2, -1000);
      rotateX(radians(180)); // flip y-axis from "realWorld"
      stroke(255);
      // get the depth data as 3D points
      PVector[] depthPoints = kinect.depthMapRealWorld();
      for (int i = 0; i < depthPoints.length; i++) {
        // get the current point from the point array
        PVector currentPoint = depthPoints[i];
        // draw the current point
        point(currentPoint.x, currentPoint.y, currentPoint.z);
      }
    }
    
  • edited October 2014

    @tlecoz I haven't tried the library in Eclipse, but you need to create a lib folder in your Eclipse project and add it to the Java Build Path as a native library location. In the folder that you created, you should add the .dll files (Kinect20.Face.dll and KinectPV2.dll) and also the NuiDatabase folder.
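
    If Eclipse still can't find the native libraries after that, a common fallback (the path below is just a placeholder) is to point the JVM at the DLL folder directly, via the VM arguments of the Run Configuration:

    -Djava.library.path=C:\path\to\your\eclipse\project\lib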

    @kiyuen check out the library's two point cloud examples.

    Thomas

  • Thank you Thomas !

  • Hi Thomas! First, thanks for the great job you're doing. A simple question: is there some way to register the depth data to the color image, in order to have correspondences between the color image and the depth image? Thanks for the help. Regards,

    Andrea

  • Hey.

    Thanks for the interest. That function is not implemented yet; I'll keep it in mind for the next update. As far as I know it is implemented in the SDK, so I could pass it through to the lib.

    Thomas

  • Great work on the library. Thanks for sharing.

    +1 for the color image/depth image comparison.

    I see there is a background subtraction example bundled with the API that uses this feature. It would be very useful!

  • Hey, Microsoft introduced THE Kinect for Xbox adapter: http://blogs.microsoft.com/blog/2014/10/22/microsoft-releases-kinect-sdk-2-0-new-adapter-kit/ It should work like the V2 for Windows.

    Great job anyway!

  • Hello Thomas,

    I tried your latest release and the face-tracking demo works well now (30-40 FPS). The cloud-depth demo is a very useful example. Thank you for that!

    Unfortunately, the skeleton-depth demo doesn't work as expected anymore. It works, but not entirely... It's like the skeleton is detected but never tracked; maybe 10 times per second we can see the skeleton, but just for one frame. It's hard to explain...

  • Wow! If I don't draw the video, face-tracking runs at 55 FPS on my laptop! I can use it for my project now (it was not stable enough before, and I was actually developing my own facetracking, but there were some bugs in my code and the official version will help me a lot with what I want to do!)

    Thank you very very very much for all you did!

  • I'm glad that it's working for you!! I don't know why the skeleton-depth is behaving like that... Are you using the new October SDK version? I haven't tested the sketches with that version. I'll be able to check it out and make the next update, with HDFaceTracking and the color image/depth image mapping, in about three weeks; right now I don't have access to a Kinect.

    Thomas

  • edited November 2014

    "Are you using the new October SDK version?" Yes!

  • Any ideas how to map RGB + mask so it looks like this: http://pterneas.com/2014/04/11/kinect-background-removal/

    Cool lib, by the way!

  • edited November 2014

    Hello !

    The "depth-view" is a 512x424 PImage. The "rgb-view" is a 1920x1080 PImage.

    Obviously, the aspect ratio is not the same (that would be too easy...).

    But you can crop and scale your "rgb view" to make it match your "depth view". Something like this:

    import KinectPV2.*;

    KinectPV2 kinect;
    int px;
    PImage video1920x1080;
    PImage video512x424;

    void setup(){
       size(512, 424);

       kinect = new KinectPV2(this);
       kinect.enableColorImg(true);
       kinect.init();

       float rgbW = 1920.0;
       float rgbH = 1080.0;
       float depthW = 512.0;
       float depthH = 424.0;

       // shrink the color frame so its height matches the depth frame,
       // then work out how much to crop horizontally
       float heightRatio = depthH / rgbH;
       rgbW *= heightRatio;
       float scaleRatio = 1920.0 / rgbW;

       float posx = -(depthW - rgbW) / 2;
       posx *= scaleRatio;
       px = floor(posx);

       video512x424 = createImage(512, 424, RGB);
    }

    void draw(){
       video1920x1080 = kinect.getColorImage();
       // crop the center of the 1920x1080 frame and scale it into 512x424
       video512x424.copy(video1920x1080, px, 0, 1920 - px*2, 1080, 0, 0, 512, 424);
       image(video512x424, 0, 0);
    }
    

    Once you get a 512x424 RGB image, you can easily grab the mask/depth data (based on a 512x424 picture too) and only draw the masked pixels, for example like this:
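
    (Rough sketch; it assumes the sketch window is 512x424 and that the body-track image marks body pixels as bright, so flip the test if yours is inverted.)

    // requires kinect.enableBodyTrackImg(true) in setup()
    PImage mask = kinect.getBodyTrackImage();
    mask.loadPixels();
    video512x424.loadPixels();

    loadPixels();
    for (int i = 0; i < 512 * 424; i++) {
      // keep the color pixel only where the mask says "body", otherwise black
      pixels[i] = (brightness(mask.pixels[i]) > 128) ? video512x424.pixels[i] : color(0);
    }
    updatePixels();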

    Good luck!

    PS: Can you please post the complete code if you get it working? I think a lot of people will ask for it again and again over the next few years :)

  • It's not working like this :( http://snag.gy/EeC5S.jpg There is a coordinate mapping algorithm in the default C# KinectV2 examples (http://snag.gy/BKeZx.jpg), but its source is not visible, so I have no idea how to port it to Processing.

  • Here's a very inaccurate brute-force version ;) It only works at a fixed distance from the Kinect. Use mouseWheel and mouseX/mouseY to line up the mask (press the mouse to print the values).

    import KinectPV2.*;
    KinectPV2 kinect;
    float ratio;

    void setup()
    {
      size(1920, 1080, OPENGL);
      kinect = new KinectPV2(this);
      kinect.enableColorImg(true);
      kinect.enableBodyTrackImg(true);
      kinect.init();
      // cast to float, otherwise the integer division truncates the ratio
      ratio = kinect.getColorImage().height / (float) kinect.getBodyTrackImage().height;
      //ratio = 2.8799992; // comment this and uncomment the previous line
    }

    void draw()
    {
      background(0);
      tint(255);
      image(kinect.getColorImage(), 0, 0);
      pushMatrix();
      tint(255, 77);
      scale(ratio);
      // move the mask around with the mouse until it lines up with the color image
      image(kinect.getBodyTrackImage(), mouseX - width/2, mouseY - height/2);
      //image(kinect.getBodyTrackImage(), 1049 - width/2, 518 - height/2); // comment this and uncomment the previous line

      popMatrix();
      fill(255, 0, 0);
      text(frameRate + "\n press mouse to print values", 50, 50);
    }

    void mousePressed()
    {
      println("x: " + mouseX + " y: " + mouseY + " ratio: " + ratio); // print values to the console
    }

    void mouseWheel(MouseEvent event)
    {
      float e = event.getCount() * .01;
      ratio += e;
    }
    
  • Hey, in the next update I will implement the functions for the RGB color frame + depth map.

    Thomas

  • Hello Thomas,

    any plans for an rgb+depth mapping update?

  • Hey, I updated the library with rgb+depth, but it still needs more functions. I also added an HDFaceVertex example.


    Thomas

  • Thomas, this is awesome :) I was wondering if it's possible to optimize the "greenscreen" effect by using an image mask shader. Is it possible to get the mapped rgb separately and use a shader to mask it with kinect.getBodyTrackImage()? This way I could use a blur shader on the mask so the edges are smoother.

  • edited November 2014

    For sure. I could break the whole process down into more functions.

    I could do something like this:

    PImage colorImg      = kinect.getColorImage();
    PImage maskImg       = kinect.getBodyTrackImage();
    PImage backgroundImg = loadImage( ...);

    PImage colordepth = kinect.getCoordinateMapper(colorImg, maskImg, backgroundImg);


    But it could be slow. I'm going to try it out. Do you have any suggestions?

  • I mean something like this:

    //setup
    PShader maskShader = loadShader("mask.glsl");
    maskShader.set("mask", kinect.getBodyTrackImage());

    //draw
    shader(maskShader);
    image(kinect.getColorImage(), 0, 0, 1920, 1080);


    But first, I want to understand the process of mapping the mask and rgb together. I'll study your code on GitHub and let you know.

    Thank you!
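
    For reference, a CPU-side version of the same masking idea (no shader) can be done with PImage.mask(). A rough sketch, assuming the color frame has already been cropped/scaled to 512x424 (called rgb512 here, a placeholder name) and that the body-track image is bright where a body is detected (invert it otherwise):

    // inside draw(), after building the 512x424 color image "rgb512"
    PImage mask = kinect.getBodyTrackImage().get();   // get() returns a copy we can modify
    mask.filter(BLUR, 2);                             // soften the silhouette edges

    PImage cutout = rgb512.get();
    cutout.mask(mask);                                // the blurred silhouette becomes the alpha channel

    background(0);
    image(cutout, 0, 0);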

  • Hello Thomas! Great job. I would like to try some of my old code, written for the Kinect 1, with the new Kinect. I searched the internet a lot for heartbeat and muscular pressure detection, but there doesn't seem to be any open source code.

    Can your library read these two things? It would be very nice!

    Congratulations in the meantime!!

  • The heartbeat detection is not implemented in the SDK, so I'm planning to add it to the library myself, but I don't know when that will happen.

  • Yes, I read about it. Why didn't Microsoft implement heartbeat detection? What about muscular pressure?

    I found the only person on the internet who tried to read heartbeats, but he did not share the code: https://k4wv2heartrate.codeplex.com/

    Maybe it can help you.

    Do you have any alternative ideas?

    Thanks!!

  • "Do you have any alternative ideas?"

    You only have to analyse the HD video from the Kinect. I don't think the heart-rate stuff is based on depth data; even if the depth data is accurate, it's not enough to compute a heart rate from.

    The video is so big that any area will represent hundreds of pixels, and that's enough to work with.

    Good luck!

  • Yeah, I would like to use heartbeats in real time, but I saw that all these techniques are not real-time and are tough to do. What about muscular pressure?

  • "I saw that all these techniques are not real-time" I'm surprised, it sounds really possible to me.
