
  • PSurface On Start Intermittent Error

    I created a runnable jar file that starts up just fine most days but occasionally (1-2 times a week) opens to an error. The sketch uses controlP5, oscP5, OpenCV, and the KinectV2 library to create a blob-tracking solution that sends the coordinates of a centroid over OSC. As the trace below shows, all of these libraries load successfully, and then the sketch hangs on this error when its thread starts. From the way I read it, there is an issue with OpenGL in the sketch, though I have been unable to recreate the error from within Eclipse. It only occurs when using an exported jar, even though I am running into no unsatisfied link errors. Happy to share parts of the code or my file structure if that helps. Thanks.

    java.lang.RuntimeException: Waited 5000ms for: <c389a00, 2ff75f7>[count 2, qsz 0, owner <main-FPSAWTAnimator#00-Timer0>] - <main-FPSAWTAnimator#00-Timer0-FPSAWTAnimator#00-Timer1>
        at processing.opengl.PSurfaceJOGL$2.run(PSurfaceJOGL.java:482)
        at java.lang.Thread.run(Unknown Source)

  • Kinect for Windows V2 Library for Processing

    I'm trying to get this library working in Eclipse, but I get the error below. Here's what I've done:

    • Added the KinectPV2.jar to libraries
    • Referenced the lib folder as a native library location
    • Added the lib folder and everything within it to my project folder

    Any ideas?

    64 windows 10
    Loading KinectV2
    java.lang.NoClassDefFoundError: com/jogamp/common/nio/Buffers
        at KinectPV2.Device.<init>(Device.java:130)
        at KinectPV2.KinectPV2.<init>(KinectPV2.java:38)
        at UsingProcessing.setup(UsingProcessing.java:18)
        at processing.core.PApplet.handleDraw(PApplet.java:2393)
        at processing.awt.PSurfaceAWT$12.callDraw(PSurfaceAWT.java:1540)
        at processing.core.PSurfaceNone$AnimationThread.run(PSurfaceNone.java:316)
    Caused by: java.lang.ClassNotFoundException: com.jogamp.common.nio.Buffers
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 6 more
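
    The missing class com.jogamp.common.nio.Buffers is part of GlueGen, which ships with Processing's core library as gluegen-rt.jar (alongside jogl-all.jar); if only KinectPV2.jar is on the Eclipse build path, that would explain the NoClassDefFoundError. A minimal check you could drop into the sketch (a hedged diagnosis, not a confirmed fix):

        try {
          // Buffers lives in gluegen-rt.jar; this succeeds only if GlueGen is on the classpath
          Class.forName("com.jogamp.common.nio.Buffers");
          println("GlueGen found on the classpath");
        } catch (ClassNotFoundException e) {
          println("gluegen-rt.jar appears to be missing from the classpath");
        }

    If the catch branch fires, adding gluegen-rt.jar and jogl-all.jar from Processing's core library folder to the build path would be the thing to try.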
    
  • OpenNI2 Devices and Processing

    I think what you need is a device that is supported in Java; Processing will just be the sugar coating. Check the following posts, which explore the relationship between Processing and Java:

    https://forum.processing.org/two/discussion/21599/processing-or-java#latest
    https://forum.processing.org/two/discussion/20622/processing-is#latest

    I'll let other people comment on Kinect technologies. In the meantime, you can explore previous posts in the forum:

    https://forum.processing.org/two/search?Search=kinect
    https://forum.processing.org/two/search?Search=kinectV2
    https://forum.processing.org/two/search?Search=openNI

    You can check Shiffman's channel, as he has a set of five videos related to Kinect and depth mapping:
    http://shiffman.net/p5/kinect/

    Kf

  • Get Depth Value for Each Pixel

    One can see that the aspect ratio of the TVs is not the same. Are you making sure you are dealing with the same aspect ratio in your processes?

    What are the values of KinectPV2.WIDTHDepth and KinectPV2.HEIGHTDepth? Do they match the image dimensions?
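
    A quick way to check (depthToColorImg is the poster's image variable, discussed below):

        println(KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth);    // dimensions the library reports
        println(depthToColorImg.width, depthToColorImg.height);  // dimensions of the actual image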

    Lines 27 to 31 can be changed to:

    for (int i = 0; i < KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth; i++) {
      depthZero[i] = 0;
    }
    

    Notice there are no hard-coded values there.
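
    Equivalently, the whole buffer can be cleared in one standard-library call (a minor alternative, assuming depthZero is an int array):

        java.util.Arrays.fill(depthZero, 0);  // zero every element without an explicit loop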

    Now, I am not an expert in Kinect and I am not familiar with the different values you are outputting there. However, I see the following discrepancy:

    Your depthToColorImg is 512 x 424, i.e., a size of 217088.

    What you get from the Kinect is 434176, which is twice the value above. It must be because the values come in x/y pairs. But then in line 67 you normalize it to your screen resolution. Why? From what I see, the object mapDCT is made of 2*(512x424) values (so are depthRaw and colorRaw, but that is not relevant here).
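
    If that reading is right, the buffer is indexed in interleaved pairs, one (x, y) pair per depth pixel. A minimal sketch of that indexing (an assumption about mapDCT's layout, not something confirmed by the library docs):

        // assumed layout: mapDCT = [x0, y0, x1, y1, ...], one pair per depth pixel
        for (int i = 0; i < KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth; i++) {
          float cx = mapDCT[2 * i];      // x coordinate in color space
          float cy = mapDCT[2 * i + 1];  // y coordinate in color space
          // ... use (cx, cy) to look up the matching color pixel ...
        }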

    Do you have a link to the documentation of your module? You should post it. Also cross-link any previous references: https://forum.processing.org/two/discussion/21624/combine-rgb-data-with-body-track-to-only-show-rgb-of-bodies-kinectv2-kinectpv2-library#latest

    Kf

  • Combine RGB data with Body Track to only show RGB of Bodies (KINECTV2/KINECTPV2 LIBRARY)

    I am using Thomas Lengeling's KinectPV2 library to try to get Processing to save/display only the RGB data of the bodies it detects.

    Right now, when it runs, it displays the color image unfiltered by the body data, like this:

    [screenshot: sketch_170326b 26-Mar-17 21_21_27]

    This happens regardless of whether or not a body is being tracked in the space. I want the finished projected product to look more like this:

    [image: goals]

    Here's roughly what I've tried (referencing the pixels in the body-track image, based on Thomas Lengeling's depth-to-color and body-track-users examples):

    //source code from Thomas Lengeling
    import KinectPV2.*;

    KinectPV2 kinect;

    PImage body;
    PImage bodyRGB;

    int loc;

    void setup() {
      size(512, 424, P3D);

      //bodyRGB = createImage(512, 424, PImage.RGB); //create empty image to hold color body pixels

      kinect = new KinectPV2(this);

      kinect.enableBodyTrackImg(true);
      kinect.enableColorImg(true);
      kinect.enableDepthMaskImg(true);

      kinect.init();
    }

    void draw() {
      background(255);

      body = kinect.getBodyTrackImage(); //put body data in variable
      bodyRGB = kinect.getColorImage();  //load rgb data into PImage

      //println(bodyRGB.width, bodyRGB.height); // 1920x1080

      PImage cpy = bodyRGB.get();
      cpy.resize(width, height); //scale the 1920x1080 color image down to the 512x424 sketch

      //int [] colorRaw = kinect.getRawColor(); //get the raw data from depth and color

      //image(body,0,0); //display body

      loadPixels();      //load sketch pixels
      cpy.loadPixels();  //load pixels of the resized color copy
      body.loadPixels(); //load body image pixels

      //nested x, y loop over every pixel location
      for (int x = 0; x < body.width; x++) {
        for (int y = 0; y < body.height; y++) {
          loc = x + y * body.width; //pixel location
          if (color(body.pixels[loc]) != 255) {
            color temp = color(cpy.pixels[loc]);
            pixels[loc] = temp;
          }
        }
      }

      //cpy.updatePixels(); //body.updatePixels();

      updatePixels();

      //image(cpy, 0, 0);
    }
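
    One hedged observation about the sketch above: body.pixels[loc] is a packed ARGB color int, so the test color(body.pixels[loc]) != 255 compares white (0xFFFFFFFF) against the bare integer 255 and is true for virtually every pixel, which would copy the whole color image and match the symptom described. Comparing brightness instead may behave as intended, assuming the body-track image marks non-body pixels as white:

        // copy color only where the body-track image is not white (i.e., a body was detected)
        if (brightness(body.pixels[loc]) < 255) {
          pixels[loc] = cpy.pixels[loc];
        }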
    
  • Kinect Physics Example updated for Processing 3 and openKinect library

    Thanks for sharing. I recently paid for a CA membership just to follow that tutorial and have been pulling my hair out with no results to show for it. I'll take a look at what you've done and hopefully it works.

    Please let me know if you ever update this for KinectV2. I'll try to hack away at it, but I fear my nascent Processing skills aren't up to the task.

    I get a "No Kinect devices found." error when I run the code and it's fine when I run code specifically for KinectV2. I'll try my best, but...

  • Interacting and play with different shapes using Kinect and processing ?

    Hey @kfrajer, this is a good read for the Kinect with Processing if you are on Mac: http://shiffman.net/p5/kinect/

    It doesn't have as much functionality as, for example, the KinectV2 on Windows, though.

    You might be able to use the depth-sensing colour example and a 2D collision library to get the effect you are after.

  • Kinect Logo Tracking

    Hello all,

    I have a project to live-track a guitarist with a Kinect and emit particles from the neck of the guitar. I do not know whether this is possible with KinectV2 and Processing. I was thinking of using a logo or QR code as a tracker on the end of the guitar, so the Kinect would track only that image and emit particles from that point.

    Is there any library for this idea? Any suggestions?

    Thank you

  • Error running OpenKinect example sketch Processing 3.2.1: Java.Lang.Unsatisfied Link Error

    So I installed Processing 3.2.1 on my Raspberry Pi 3 Model B and installed the OpenKinect library via Add Library. I run the RGBDepthTest2 example for both KinectV1 and KinectV2 (I have the 1414 Kinect) and I get the same error:

    "The sketch has been automatically resized to fit the screen resolution not compatible with the current OS or is a 32 bit system
    java.lang.UnsatisfiedLinkError: org.openkinect.freenect2.Device.jni()J at
    processing.opengl.PSurfaceJOGLS2.run(PSurfaceJOGL.java:457) 
    at java.lang.Thread.run(Thread.java.745)
    Unsatisfied Link Error: org.openkinect.freenect2.Device.jniInit(0J
    A library relies on native code that's not available.
    Or only works properly when the sketch is run as a 64-bit application."
    
  • OpenCV with OpenKinect and Xbox Kinect V2

    Hi everyone,

    I am trying to build face detection with the OpenCV and OpenKinect libraries. For the image input I want to use the Xbox Kinect v2. I am basing my code on the face detection example.

    This is my code so far:

    import gab.opencv.*;
    import java.awt.Rectangle;
    
    /* KINECT */
    import org.openkinect.freenect.*;
    import org.openkinect.freenect2.*;
    import org.openkinect.processing.*;
    
    OpenCV opencv;
    Kinect2 kinect2;
    
    Rectangle[] faces;
    
    void setup() {
      opencv = new OpenCV(this, 640/2, 480/2);
      size(640, 480);
      // Kinectv2
      kinect2 = new Kinect2(this);
      kinect2.initVideo();
      kinect2.initDevice();
      
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
      faces = opencv.detect();
    }
    
    void draw() {
      opencv.loadImage(kinect2.getVideoImage());
      image(kinect2.getVideoImage(), 0, 0, 640, 480);
    
      noFill();
      stroke(0, 255, 0);
      strokeWeight(3);
      for (int i = 0; i < faces.length; i++) {
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }
    

    The problem seems to be in the line "opencv.loadImage(kinect2.getVideoImage());", since the detection does not work. When working with the iSight camera (using the built-in Capture class from the Video library) instead of the Kinect, everything works perfectly fine.

    Can anyone help?
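
    One hedged guess: faces = opencv.detect() runs only once, in setup(), before any Kinect frame has been loaded, so the array never reflects live video; note also that the Kinect2 video image (1920x1080) does not match the 320x240 buffer the OpenCV object was created with. A sketch of draw() that detects on every frame (assuming the rest of the sketch stays as posted):

        void draw() {
          PImage frame = kinect2.getVideoImage();
          opencv.loadImage(frame);  // load the current frame...
          faces = opencv.detect();  // ...and detect on it each frame, not just once in setup()

          image(frame, 0, 0, 640, 480);
          noFill();
          stroke(0, 255, 0);
          strokeWeight(3);
          for (int i = 0; i < faces.length; i++) {
            rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
          }
        }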

  • getting started w/kinect

    You are welcome.

    If you are on Windows: I never managed to get the Xbox360 model (KinectV1) working there. The best option now is the KinectV2, which has the official Microsoft API, good support across platforms (Processing as well), and is way better than the first model (for 3D scanning, skeleton tracking, and point clouds alike). The bad thing is that it works only on Windows (support for Linux and Mac is not so good for now).

    Unfortunately I never managed the KinectV1 (1414) model on Windows either, but if you are on Linux you can get the drivers easily (on Ubuntu there is a repo that lets you install all the drivers with aptitude):

    https://launchpad.net/~eighthave/+archive/ubuntu/openni

  • getting started w/kinect

    Take a look here; it seems the author saved the Mac libraries on his Mega account (Apple closed off the sources for the open-source drivers): https://creativevreality.wordpress.com/2016/01/26/setting-up-the-kinect-on-osx-el-capitan/

    After this you will be able to use openKinect by Shiffman.

    As for the version of the Kinect, be sure to buy the 1414 model of the Xbox360 Kinect; the other model doesn't work, and the KinectV2 drivers aren't working well at the moment (maybe in the future).

  • serial input slow

    Hello,

    I haven't posted before, but I have been lurking for a while. I am a microcontroller hobbyist... no formal engineering education.

    First, I would like to thank the wonderful people at Processing.org for this wonderful platform. Truly wonderful. I am using Processing 3 on a Win 10 Kangaroo ($100), but I have a real Win 8 computer and it shows the same thing :)

    I am using Processing (Daniel Shiffman's libraries) to take data from a KinectV2 into the Kangaroo and pass it along to Parallax's FPGA Propeller 2 test platform (P123 Cyclone V A9). I am getting 3 MBaud going out to the microcontroller, but I have to put wait states in the TX code of the Propeller... same bit rate, but waiting between bytes. The wait is so extreme that I am effectively getting around 64 kBaud coming back into the computer. As long as I do this, everything is dandy, but if I try to limit the wait between bytes, bytes get missed. I can handshake around this, so if that is what it is, I am fine with it.

    I had some logic in the serial event routine, but I moved it out and the result was nearly the same... though in the process I created other issues... so I put it back. I am confident that my overhead in the serial event routine isn't the problem, but I am wondering whether the size and complexity of the rest of my program could cause problems in the event handler.

    My gut feeling is that there should be no relationship... but on technical issues, my gut isn't as good as it used to be :)
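
    For what it's worth, one common way to rule out handler overhead is to keep serialEvent() down to a bare byte-drain and do all per-byte logic in draw(). A minimal sketch of that pattern (the port index and baud rate are placeholders, not taken from the post):

        import processing.serial.*;

        Serial port;
        java.util.concurrent.ConcurrentLinkedQueue<Integer> rx =
          new java.util.concurrent.ConcurrentLinkedQueue<Integer>();

        void setup() {
          port = new Serial(this, Serial.list()[0], 3000000); // 3 MBaud, as in the post
        }

        void serialEvent(Serial p) {
          // keep the handler minimal: just drain incoming bytes into a queue
          while (p.available() > 0) {
            rx.add(p.read());
          }
        }

        void draw() {
          // do the heavy per-byte logic here, outside the serial thread
          Integer b;
          while ((b = rx.poll()) != null) {
            // ... process byte b ...
          }
        }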

    Thanks,

    Rich

  • Mapping KinectV2 depth to rgb DSLR

    Hi,

    I am trying to map the depth from the KinectV2 to the RGB space of a DSLR camera, and I am stuck with a weird pixel mapping.

    I am working in Processing, using OpenCV and Nicolas Burrus' method, where:

    P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
    P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
    P3D.z = depth(x_d,y_d)
    
    P3D' = R.P3D + T
    P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
    P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
    

    Unfortunately, I have a problem when I reproject the 3D points to RGB world space. To check whether the problem came from my OpenCV calibration, I used MRPT's Kinect & Stereo Calibration to get the intrinsics and distortion coefficients of the cameras and the relative rototranslation between the two cameras.

    Here are my data:

    depth c_x = 262.573912;
    depth c_y = 216.804166;
    depth f_y = 462.676558;
    depth f_x = 384.377033;
    depthDistCoeff = {
        1.975280e-001, -6.939150e-002, 0.000000e+000, -5.830770e-002, 0.000000e+000
      };
    
    
    DSLR c_x_R = 538.134412;
    DSLR c_y_R = 359.760525;
    DSLR f_y_R = 968.431461;
    DSLR f_x_R = 648.480385;
    rgbDistCoeff = {
        2.785566e-001, -1.540991e+000, 0.000000e+000, -9.482198e-002, 0.000000e+000
      };
    
    R = {
        8.4263457190597e-001, -8.9789363922252e-002, 5.3094712387890e-001,
        4.4166517232817e-002, 9.9420220953803e-001, 9.8037162878270e-002,
        -5.3667149820385e-001, -5.9159417476295e-002, 8.4171483671105e-001 
      };
    
    T = {-4.740111e-001, 3.618596e-002, -4.443195e-002};
    

    Then I use the data in Processing to compute the mapping:

        PVector pixelDepthCoord = new PVector(i * offset_, j * offset_);
        int index = (int) pixelDepthCoord.x + (int) pixelDepthCoord.y * depthWidth;
        int depth = 0;

        if (rawData[index] != 255)
        {
          //2D depth coord
          depth = rawDataDepth[index];
        }

        //3D depth coord - back-project the pixel depth coord to a 3D depth coord
        float bppx = (pixelDepthCoord.x - c_x) * depth / f_x;
        float bppy = (pixelDepthCoord.y - c_y) * depth / f_y;
        float bppz = -depth;

        //transform the 3D depth coord to a 3D color coord
        float x_ = (bppx * R[0] + bppy * R[1] + bppz * R[2]) + T[0];
        float y_ = (bppx * R[3] + bppy * R[4] + bppz * R[5]) + T[1];
        float z_ = (bppx * R[6] + bppy * R[7] + bppz * R[8]) + T[2];

        //project the 3D color coord to a 2D color coord
        float pcx = (x_ * f_x_R / z_) + c_x_R;
        float pcy = (y_ * f_y_R / z_) + c_y_R;
    

    Then I get the following transformations:

    [images of the resulting mappings]

    I think I may have a problem in my method. Does anyone have any ideas or clues? I have been racking my brain over this for days ;)
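
    One hedged observation on the back-projection step: the Burrus formulas above take P3D.z = depth(x_d, y_d), while the code uses bppz = -depth, so the rotation and the final perspective divide see a flipped z. If the calibration assumes a positive-z convention, the consistent version would be (an assumption about the convention, not a confirmed fix):

        float bppx = (pixelDepthCoord.x - c_x) * depth / f_x;
        float bppy = (pixelDepthCoord.y - c_y) * depth / f_y;
        float bppz = depth; // matches P3D.z = depth(x_d, y_d); the posted code used -depth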

    Thanks

  • How do i combine interactive PCA with KinectV2 using OpenCV for Processing?

    Hi,

    I'm currently trying to get Greg Borenstein's interactive PCA analysis example to work with a KinectV2 stream using Greg's newer OpenCV for Processing library (I had too many issues with the old library). I'm getting output from the Kinect and processing it with OpenCV successfully, but I have two issues:

    1. The old opencv.image() function doesn't work with the new library, so I'm using opencv.getOutput(). Unfortunately this slows performance considerably, and I imagine there is a better solution.
    2. I'm expecting the axes to be drawn on the OpenCV output image, but this isn't happening.

    Here's my code including the commented code I'm trying to replace:

    //Copyright (C) 2014  Thomas Sanchez Lengeling.
    //KinectPV2, Kinect one library for processing
    import KinectPV2.*;
    //Import OpenCV for Processing Library by Greg Borenstein
    import gab.opencv.*;
    //Import Processing-PCA Library from Greg Borenstein
    import pca_transform.*;
    //Import JAMA Java Matrix package
    import Jama.Matrix;
    
    //Declare kinect object
    KinectPV2 kinect;
    //Declare opencv object
    OpenCV opencv;
    //Declare PCA object
    PCA pca;
    
    //Declare image variables (colour image)
    PImage img;
    int imgWidth = 640;
    int imgHeight = 360;
    
    //Declare variable for Threshold
    int threshold = 100;
    
    //Declare PVector variables
    PVector axis1;
    PVector axis2;
    PVector mean;
    PVector centroid;
    
    void setup() {
      size(1280, 720);
    
      //Initialise the Kinect including startup methods to enable colour, depth and infrared images
      kinect = new KinectPV2(this);
      kinect.enableColorImg(true);
      kinect.enableDepthImg(true);
      kinect.enableInfraredImg(true);
      kinect.init();
    
      //Initialise openCV for colour image
      opencv = new OpenCV(this, 1920, 1080);
    
      //Initialise centroid
      centroid = new PVector();
    }
    
    //Use Java Matrix for PCA
    Matrix toMatrix(PImage img) {
      ArrayList<PVector> points = new ArrayList<PVector>();
      for (int x = 0; x < img.width; x++) {
        for (int y = 0; y < img.height; y++) {
          int i = y*img.width + x;
          if (brightness(img.pixels[i]) > 0) {
            points.add(new PVector(x, y));
          }
        }
      }
    
      Matrix result = new Matrix(points.size(), 2);
    
      float centerX = 0;
      float centerY = 0;
    
      for (int i = 0; i < points.size(); i++) {
        result.set(i, 0, points.get(i).x);
        result.set(i, 1, points.get(i).y);
    
        centerX += points.get(i).x;
        centerY += points.get(i).y;
      }
      centerX /= points.size();
      centerY /= points.size();
      centroid.x = centerX;
      centroid.y = centerY;
    
      return result;
    }
    
    //**This code is not required**
    //void imageInGrid(PImage img, String message, int row, int col) {
      //int currX = col*img.width;
      //int currY = row*img.height;
      //image(img, currX, currY);
      //fill(255, 0, 0);
      //text(message, currX + 5, currY + imgHeight - 5);
    //}
    
    void draw() {
      background(0);
    
      //Draw the Kinect images
      image(kinect.getColorImage(), 0, 0, imgWidth, imgHeight);
      image(kinect.getDepthImage(), 855, 0, 425, 320);
      image(kinect.getInfraredImage(), 855, 321, 425, 320);
    
      fill(255, 0, 0);
    
      //**FUNCTIONS DO NOT EXIST - This code is rewritten below**
    //  opencv.read();
    //  opencv.convert(GRAY);
    //  imageInGrid(opencv.image(), "GRAY", 0, 0);
    //
    //  opencv.absDiff();
    //  imageInGrid(opencv.image(), "DIFF", 0, 1);
    //
    //  opencv.brightness(60);
    //  imageInGrid(opencv.image(), "BRIGHTNESS: 60", 0, 2);
    //
    //  opencv.threshold(40);
    //  imageInGrid(opencv.image(), "THRESHOLD: 40", 0, 3);
    //
    //  opencv.contrast(120);
    //  imageInGrid(opencv.image(), "CONTRAST: 120", 1, 3);
    
      //Load the colour image from the kinect to OpenCV 
      opencv.loadImage(kinect.getColorImage());
      //Apply grey filter
      opencv.gray();
      //Apply Threshold
      opencv.threshold(threshold);
      //Display OpenCV output
      PImage img = opencv.getOutput();
      image(img, 0, imgHeight, imgWidth, imgHeight);
    
      //**This code is rewritten below**
      //Matrix m = toMatrix(opencv.image());

      //Add OpenCV output to Matrix **This slows down the sketch considerably**
      Matrix m = toMatrix(opencv.getOutput());
    
      if (m.getRowDimension() > 0) {
        pca = new PCA(m);
        Matrix eigenVectors = pca.getEigenvectorsMatrix();
    
        axis1 = new PVector();
        axis2 = new PVector();
        if (eigenVectors.getColumnDimension() > 1) {
    
          axis1.x = (float)eigenVectors.get(0, 0);
          axis1.y = (float)eigenVectors.get(1, 0);
    
          axis2.x = (float)eigenVectors.get(0, 1);
          axis2.y = (float)eigenVectors.get(1, 1);  
    
          axis1.mult((float)pca.getEigenvalue(0));
          axis2.mult((float)pca.getEigenvalue(1));
        }
    
        //**This code is rewritten below** 
        //image(opencv.image(), 0, opencv.image().height, opencv.image().width*3, opencv.image().height*3);
    
        image(opencv.getOutput(), 0, opencv.getOutput().height, opencv.getOutput().width*3, opencv.getOutput().height*3);
    
        stroke(200);
        pushMatrix();
        translate(0, imgHeight);
        scale(3, 3);
    
        translate(centroid.x, centroid.y);
    
        stroke(0, 255, 0);
        line(0, 0, axis1.x, axis1.y);
        stroke(255, 0, 0);
        line(0, 0, axis2.x, axis2.y);
    
        popMatrix();
        fill(0, 255, 0);
        text("PCA Object Axes:\nFirst two principle components centered at blob centroid", 10, height - 20);
      }
    }
    
    //Adjust threshold variable with 'A' and 'S' keys
    void keyPressed() {
      if (key == 'a') {
        threshold+=1;
      }
      if (key == 's') {
        threshold-=1;
      }
    }
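
    On the performance point (issue 1): opencv.getOutput() appears to copy the OpenCV buffer into a fresh PImage on every call, and the sketch above calls it several times per frame (for display, for toMatrix(), and for the enlarged view). Caching one copy per frame may recover most of the cost; a minimal sketch of the idea (behavior otherwise assumed unchanged):

        // inside draw(): fetch the OpenCV output once per frame and reuse it
        PImage out = opencv.getOutput();
        image(out, 0, imgHeight, imgWidth, imgHeight);

        Matrix m = toMatrix(out);
        // ... PCA exactly as before ...
        image(out, 0, out.height, out.width * 3, out.height * 3);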
    
  • Kinect for Windows V2 Library for Processing

    I get this error on my Windows 8.1 64-bit machine for simple face-tracking code:

    64 windows 8
    Loading KinectV2
    Creating Kinect object ...
    ENABLE COLOR FRAME
    ENABLE INFRARED FRAME
    ENABLE SKELETON
    SETTING FACE TRACKING
    Done init Kinect v2
    Version: 0.7.2
    EXIT
    Clossing kinect V2

    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ffa7cd86009, pid=30700, tid=28272
    #
    # JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode windows-amd64 compressed oops)
    # Problematic frame:
    # C  [KinectPV2.dll+0x6009]
    #
    # Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
    #
    # An error report file with more information is saved as:
    # D:\Design\Assignments\Sem 6\TUI\India HCI\processing-2.2.1-windows64\processing-2.2.1\hs_err_pid30700.log
    #
    # If you would like to submit a bug report, please visit:
    #   http://bugreport.sun.com/bugreport/crash.jsp
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.

    Could not run the sketch (Target VM failed to initialize). For more information, read revisions.txt and Help → Troubleshooting.

  • Need ideas to represent our project (Face expressions)

    Hello !

    Face tracking is very CPU-intensive work, and I think (but I may be wrong) that the face tracking from OpenCV is not fast enough for what you want to do (it also depends on your computer...).

    Instead, you should use a KinectV2 ($200): it allows tracking multiple faces in real time and automatically detects changes in face expression. A working face-tracking example for the KinectV2 ships with this library; there is nothing left to do, it's all done.

    https://github.com/ThomasLengeling/KinectPV2

    Good luck

  • Kinect for Windows V2 Library for Processing

    It's not working like this :( http://snag.gy/EeC5S.jpg There is a coordinate-mapping algorithm in the default C# KinectV2 examples (http://snag.gy/BKeZx.jpg), but its source is not visible, so I have no idea how to port it to Processing.

  • Kinect for Windows V2 Library for Processing

    Sounds perfect to me :)

    Just out of curiosity, can you tell me the framerate of the face-tracking demo on your computer?

    My results are better now that I have updated my graphics driver, but it is still much slower than the other demos. Is it normal to get less than 30 frames per second with face tracking (even if I don't check head state)?

    I think it's weird, because the face-tracking demos I have seen with the first Kinect run very smoothly. I supposed it would run even better with the KinectV2, but that's not what I experienced.

    http://channel9.msdn.com/posts/Kinect-for-Windows-SDK-15-Face-Tracking-Seated-Skeletal-Tracking-Kinect-Studio--More

  • Kinect for Windows V2 Library for Processing

    Actually, I have an issue right now... I would like to know whether the mouth of the tracked body is open or not (I'm trying to build a tool for a tetraplegic person). I know the KinectV2 supports this feature, but when I process KJoint.getState() on the head joint, I always get 2.

    Any ideas?

    Thanks in advance!