  • Kinect, Processing, OpenNI

    Hi, please help! Can someone offer any tips on how to install the OpenNI software on the newest OS version? We want to make something beautiful with Processing and Kinect. Love

  • OpenNI2 Devices and Processing

    I think what you need is a device that is supported in Java; Processing will just be the sugar coating. Check the following posts that explore the relationship between Processing and Java:

    https://forum.processing.org/two/discussion/21599/processing-or-java#latest
    https://forum.processing.org/two/discussion/20622/processing-is#latest

    I'll let other people comment on Kinect technologies. In the meantime, you can explore previous posts in the forum:

    https://forum.processing.org/two/search?Search=kinect
    https://forum.processing.org/two/search?Search=kinectV2
    https://forum.processing.org/two/search?Search=openNI

    You can check Shiffman's channel, as he has a set of five videos related to Kinect and depth mapping:
    http://shiffman.net/p5/kinect/

    Kf

  • Kinect Depth Threshold

    Thanks for the advice, Kf!

    Basically, what I did was use the existing DepthMap3D example from OpenNI. The example gives me a visualization of a 3D point cloud. I then modified it with the method I learnt from Shiffman's tutorial on how to create a depth threshold.

    Shiffman's tutorial uses a different library (OpenKinect), which didn't work with my Kinect. So what I tried to do was copy Shiffman's code into the OpenNI example, then change some of the code to match the OpenNI library. I'm guessing the changes I made are what created the errors.

    Below is my code:

    
    import SimpleOpenNI.*;
    
    SimpleOpenNI context;
    
    PImage img;
    
    // SIZES
    int canvasWidth  = 512;
    int canvasHeight = 424;
    
    int kinectWidth  = 512;
    int kinectHeight = 424;
    
    
    
    void setup()
    {
      size(512,424);
    
      context = new SimpleOpenNI(this);
      if(context.isInit() == false)
      {
         println("Can't init SimpleOpenNI, maybe the camera is not connected!"); 
         exit();
         return;  
      }
      
      // disable mirror
      context.setMirror(false);
    
      // enable depthMap generation 
      context.enableDepth();
    
      img = createImage(kinectWidth, kinectHeight, RGB);
      
    }
    
    void draw()
    {
      // update the camera data; without this the depth map never refreshes
      context.update();
    
      background(0);
    
      img.loadPixels();
    
      int[] depthMap = context.depthMap();
    
      // threshold: pixels between 300 mm and 1500 mm are pink, the rest black
      for (int y = 0; y < context.depthHeight(); y++)
      {
        for (int x = 0; x < context.depthWidth(); x++)
        {
          int offset = x + y * context.depthWidth();
          int d = depthMap[offset];
    
          if (d > 300 && d < 1500) {
            img.pixels[offset] = color(255, 0, 150);
          } else {
            img.pixels[offset] = color(0);
          }
        }
      }
    
      img.updatePixels();
      image(img, 0, 0);
    }
    
    
  • Kinect Depth Threshold

    You could start by describing what errors you get with the code you tried. What changes did you make, and what did you try in order to debug it? I will suggest exploring more previous posts related to your device:

    https://forum.processing.org/two/search?Search=openNI
    https://forum.processing.org/two/search?Search=Kinect

    Kf

  • Kinect Depth Threshold

    Hi all, I am working on a project using Processing, Kinect and OpenNI.

    I want to create a Kinect depth image with a distance threshold, i.e. pixels within 1 m are white, anything beyond that is black. I am new to Processing and I am struggling a bit to make the code work.

    I found a previous post that asked a similar question (https://forum.processing.org/one/topic/kinect-depth-thresholdi.html), but the code didn't work when I ran or modified it.

    Does anyone have experience with this and can give me a hand? Thanks a lot!
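
    A minimal sketch of the 1 m threshold idea, assuming SimpleOpenNI is installed and that depthMap() reports millimeters per pixel (as in the library's standard examples); the 640x480 size and the exact cutoff are illustrative assumptions:

    import SimpleOpenNI.*;

    SimpleOpenNI context;
    PImage img;

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();
      img = createImage(640, 480, RGB);
    }

    void draw() {
      context.update();                     // grab a fresh frame
      int[] depthMap = context.depthMap();  // depth per pixel, in mm
      img.loadPixels();
      int n = min(depthMap.length, img.pixels.length);
      for (int i = 0; i < n; i++) {
        // within 1 m -> white; beyond 1 m or no reading -> black
        img.pixels[i] = (depthMap[i] > 0 && depthMap[i] < 1000) ? color(255) : color(0);
      }
      img.updatePixels();
      image(img, 0, 0);
    }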

  • SimpleOpenNI Library error occurs after update processing

    I'm having the same problem. Could you please say where to download this new version of SimpleOpenNI? I've downloaded it from this page (https://code.google.com/archive/p/simple-openni/wikis/Installation.wiki) with no luck; I still get errors when the library is called. The link on this page (https://simple-openni.googlecode.com/svn/trunk/SimpleOpenNI-2.0/dist/all/SimpleOpenNI.zip) simply goes to an error page. And everywhere I look, no one's really giving straight answers and links.

  • How to implement Spout 2.05 in 'Processing' on windowsX?

    Forgive the very basic nature of this query, but I am very new to Processing (and indeed programming). I am trying to use a Kinect 1414 with Processing 2.2.1 and Isadora according to this tutorial: http://troikatronix.com/support/kb/kinect-tutorial-part2/. Since the tutorial was written, Spout has been upgraded, and I am trying without success to change the code according to the recommendations for 2.05: https://github.com/leadedge/SpoutProcessing/releases. I have imported the Spout library. The original code for the Processing sketch is below.

    /* --------------------------------------------------------------------------
     * SimpleOpenNI User Test
     * --------------------------------------------------------------------------
     * Processing Wrapper for the OpenNI/Kinect 2 library
     * http://code.google.com/p/simple-openni
     * --------------------------------------------------------------------------
     * prog: Max Rheiner / Interaction Design / Zhdk / http://iad.zhdk.ch/
     * date: 12/12/2012 (m/d/y)
     * ----------------------------------------------------------------------------
     */

    import SimpleOpenNI.*;

    PGraphics canvas;

    color[] userClr = new color[] {
      color(255, 0, 0),
      color(0, 255, 0),
      color(0, 0, 255),
      color(255, 255, 0),
      color(255, 0, 255),
      color(0, 255, 255)
    };

    PVector com = new PVector();
    PVector com2d = new PVector();

    // --------------------------------------------------------------------------------
    // CAMERA IMAGE SENT VIA SPOUT
    // --------------------------------------------------------------------------------
    int kCameraImage_RGB   = 1; // rgb camera image
    int kCameraImage_IR    = 2; // infra red camera image
    int kCameraImage_Depth = 3; // depth without colored bodies of tracked bodies
    int kCameraImage_User  = 4; // depth image with colored bodies of tracked bodies

    int kCameraImageMode = kCameraImage_User; // << set this value to one of the kCameraImage constants above

    // --------------------------------------------------------------------------------
    // SKELETON DRAWING
    // --------------------------------------------------------------------------------
    boolean kDrawSkeleton = true; // << set to true to draw the skeleton, false to not draw it

    // --------------------------------------------------------------------------------
    // OPENNI (KINECT) SUPPORT
    // --------------------------------------------------------------------------------

    import SimpleOpenNI.*; // import SimpleOpenNI library

    SimpleOpenNI context;

    private void setupOpenNI() {
    context = new SimpleOpenNI(this);
    if (context.isInit() == false) {
        println("Can't init SimpleOpenNI, maybe the camera is not connected!");
        exit();
        return;
    }

    // enable depthMap generation 
    context.enableDepth();
    context.enableUser();
    
    // disable mirror
    context.setMirror(false);
    

    }

    private void setupOpenNI_CameraImageMode() {
    println("kCameraImageMode " + kCameraImageMode);

    switch (kCameraImageMode) {
    case 1: // kCameraImage_RGB:
        context.enableRGB();
        println("enable RGB");
        break;
    case 2: // kCameraImage_IR:
        context.enableIR();
        println("enable IR");
        break;
    case 3: // kCameraImage_Depth:
        context.enableDepth();
        println("enable Depth");
        break;
    case 4: // kCameraImage_User:
        context.enableUser();
        println("enable User");
        break;
    }
    

    }

    private void OpenNI_DrawCameraImage() {
    switch (kCameraImageMode) {
    case 1: // kCameraImage_RGB:
        canvas.image(context.rgbImage(), 0, 0);
        // println("draw RGB");
        break;
    case 2: // kCameraImage_IR:
        canvas.image(context.irImage(), 0, 0);
        // println("draw IR");
        break;
    case 3: // kCameraImage_Depth:
        canvas.image(context.depthImage(), 0, 0);
        // println("draw DEPTH");
        break;
    case 4: // kCameraImage_User:
        canvas.image(context.userImage(), 0, 0);
        // println("draw USER");
        break;
    }
    }

    // --------------------------------------------------------------------------------
    // OSC SUPPORT
    // --------------------------------------------------------------------------------

    import oscP5.*; // import OSC library
    import netP5.*; // import net library for OSC

    OscP5 oscP5;                      // OSC input/output object
    NetAddress oscDestinationAddress; // the destination IP address - 127.0.0.1 to send locally
    int oscTransmitPort = 1234;       // OSC send target port; 1234 is default for Isadora
    int oscListenPort = 9000;         // OSC receive port number

    private void setupOSC() {
    // init OSC support, listening on port oscListenPort
    oscP5 = new OscP5(this, oscListenPort);
    oscDestinationAddress = new NetAddress("127.0.0.1", oscTransmitPort);
    }

    private void sendOSCSkeletonPosition(String inAddress, int inUserID, int inJointType) {
    // create the OSC message with target address
    OscMessage msg = new OscMessage(inAddress);

    PVector p = new PVector();
    float confidence = context.getJointPositionSkeleton(inUserID, inJointType, p);
    
    // add the three vector coordinates to the message
    msg.add(p.x);
    msg.add(p.y);
    msg.add(p.z);
    
    // send the message
    oscP5.send(msg, oscDestinationAddress);
    

    }

    private void sendOSCSkeleton(int inUserID) {
    sendOSCSkeletonPosition("/head", inUserID, SimpleOpenNI.SKEL_HEAD);
    sendOSCSkeletonPosition("/neck", inUserID, SimpleOpenNI.SKEL_NECK);
    sendOSCSkeletonPosition("/torso", inUserID, SimpleOpenNI.SKEL_TORSO);

    sendOSCSkeletonPosition("/left_shoulder", inUserID, SimpleOpenNI.SKEL_LEFT_SHOULDER);
    sendOSCSkeletonPosition("/left_elbow", inUserID, SimpleOpenNI.SKEL_LEFT_ELBOW);
    sendOSCSkeletonPosition("/left_hand", inUserID, SimpleOpenNI.SKEL_LEFT_HAND);
    
    sendOSCSkeletonPosition("/right_shoulder", inUserID, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
    sendOSCSkeletonPosition("/right_elbow", inUserID, SimpleOpenNI.SKEL_RIGHT_ELBOW);
    sendOSCSkeletonPosition("/right_hand", inUserID, SimpleOpenNI.SKEL_RIGHT_HAND);
    
    sendOSCSkeletonPosition("/left_hip", inUserID, SimpleOpenNI.SKEL_LEFT_HIP);
    sendOSCSkeletonPosition("/left_knee", inUserID, SimpleOpenNI.SKEL_LEFT_KNEE);
    sendOSCSkeletonPosition("/left_foot", inUserID, SimpleOpenNI.SKEL_LEFT_FOOT);
    
    sendOSCSkeletonPosition("/right_hip", inUserID, SimpleOpenNI.SKEL_RIGHT_HIP);
    sendOSCSkeletonPosition("/right_knee", inUserID, SimpleOpenNI.SKEL_RIGHT_KNEE);
    sendOSCSkeletonPosition("/right_foot", inUserID, SimpleOpenNI.SKEL_RIGHT_FOOT);
    

    }

    // --------------------------------------------------------------------------------
    // SPOUT SUPPORT
    // --------------------------------------------------------------------------------

    Spout server;

    private void setupSpoutServer(String inServerName, int inWidth, int inHeight) {
    // create Spout server to send frames out
    server = new Spout();
    server.initSender(inServerName, inWidth, inHeight);
    }

    // --------------------------------------------------------------------------------
    // EXIT HANDLER
    // --------------------------------------------------------------------------------
    // called on exit to gracefully shut down the Spout server
    private void prepareExitHandler() {
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
        public void run() {
            try {
                // if (server.hasClients()) {
                server.closeSender();
                // }
            } catch (Exception ex) {
                ex.printStackTrace(); // not much else to do at this point
            }
        }
    }));
    }

    // --------------------------------------------------------------------------------
    // MAIN PROGRAM
    // --------------------------------------------------------------------------------
    void setup() {
    int canvasWidth = 640;
    int canvasHeight = 480;

    size(canvasWidth, canvasHeight, P3D);
    canvas = createGraphics(canvasWidth, canvasHeight, P3D);
    

    textureMode(NORMAL);

    println("Setup Canvas");
    
    // canvas.background(200, 0, 0);
    canvas.stroke(0, 0, 255);
    canvas.strokeWeight(3);
    canvas.smooth();
    println("-- Canvas Setup Complete");
    
    // setup Spout server
    println("Setup Spout");
    setupSpoutServer("Depth", canvasWidth, canvasHeight);
    
    // setup Kinect tracking
    println("Setup OpenNI");
    setupOpenNI();
    setupOpenNI_CameraImageMode();
    
    // setup OSC
    println("Setup OSC");
    setupOSC();
    
    // setup the exit handler
    println("Setup Exit Handerl");
    prepareExitHandler();
    

    }

    void draw() {
    // update the cam
    context.update();

    canvas.beginDraw();
    
    // draw image
    OpenNI_DrawCameraImage();
    
    // draw the skeleton if it's available
    if (kDrawSkeleton) {
    
        int[] userList = context.getUsers();
        for (int i=0; i<userList.length; i++)
        {
            if (context.isTrackingSkeleton(userList[i]))
            {
                canvas.stroke(userClr[ (userList[i] - 1) % userClr.length ] );
    
                drawSkeleton(userList[i]);
    
                if (userList.length == 1) {
                    sendOSCSkeleton(userList[i]);
                }
            }      
    
            // draw the center of mass
            if (context.getCoM(userList[i], com))
            {
                context.convertRealWorldToProjective(com, com2d);
    
                canvas.stroke(100, 255, 0);
                canvas.strokeWeight(1);
                canvas.beginShape(LINES);
                canvas.vertex(com2d.x, com2d.y - 5);
                canvas.vertex(com2d.x, com2d.y + 5);
                canvas.vertex(com2d.x - 5, com2d.y);
                canvas.vertex(com2d.x + 5, com2d.y);
                canvas.endShape();
    
                canvas.fill(0, 255, 100);
                canvas.text(Integer.toString(userList[i]), com2d.x, com2d.y);
            }
        }
    }
    
    canvas.endDraw();
    
    image(canvas, 0, 0);
    
    // send image to spout
    server.sendTexture();
    

    }

    // draw the skeleton with the selected joints
    void drawLimb(int userId, int inJoint1) {
    }

    // draw the skeleton with the selected joints
    void drawSkeleton(int userId) {
    canvas.stroke(255, 255, 255, 255);
    canvas.strokeWeight(3);

    drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
    
    drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
    drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
    drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
    
    drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
    drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
    drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
    
    drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
    drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
    
    drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
    drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
    drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
    
    drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
    drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
    drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
    

    }

    void drawLimb(int userId, int jointType1, int jointType2) {
    float confidence;

    // draw the joint position
    PVector a_3d = new PVector();
    confidence = context.getJointPositionSkeleton(userId, jointType1, a_3d);
    PVector b_3d = new PVector();
    confidence = context.getJointPositionSkeleton(userId, jointType2, b_3d);
    
    PVector a_2d = new PVector();
    context.convertRealWorldToProjective(a_3d, a_2d);
    PVector b_2d = new PVector();
    context.convertRealWorldToProjective(b_3d, b_2d);
    
    canvas.line(a_2d.x, a_2d.y, b_2d.x, b_2d.y);
    

    }

    // -----------------------------------------------------------------
    // SimpleOpenNI events

    void onNewUser(SimpleOpenNI curContext, int userId) {
    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");

    curContext.startTrackingSkeleton(userId);
    

    }

    void onLostUser(SimpleOpenNI curContext, int userId) {
    println("onLostUser - userId: " + userId);
    }

    void onVisibleUser(SimpleOpenNI curContext, int userId) {
    //println("onVisibleUser - userId: " + userId);
    }

    void keyPressed() {
    switch(key) {
    case ' ':
        context.setMirror(!context.mirror());
        println("Switch Mirroring");
        break;
    }
    }

  • Kinect with Processing on a Mac

    Hi, I have an interactive project to which I now need to add Kinect support for the client - it's to allow for gesture / finger-painting type interaction.

    It was built in Processing 3 on an iMac El Capitan.

    Some questions:

    1. Which library do I need? OpenNI? Is it compatible with Processing 3?
    2. If it's not possible to develop on a Mac and I develop on a PC instead, will it still run on a Mac if exported as a standalone app?
    3. What is the best Kinect model to get?
    4. Do I need any extra cables?

    Thanks for any help, Glenn.

  • SimpleOpenNI Library error occurs after update processing

    I am trying to make a project with a Kinect v1 and Processing 3.2.1. Before I updated Processing, everything was working (I just tried the "Hello" examples from Daniel Shiffman). But now I use Processing 3.2.1 and I get an error saying the SimpleOpenNI library cannot be found. I deleted all the libraries and downloaded them again, from this link: https://code.google.com/archive/p/simple-openni/downloads. Even though I downloaded the library into ..sketchfolder/libraries, I can't run the example codes. I am using Windows 10. How can I fix this problem? I just want to run the example codes.

  • [Resolved]Hand tracking with Kinect/Processing

    Hi,

    I am trying to make a project with Processing and the Kinect. I already installed the right libraries (I use OpenNI and FingerTracker) and everything seems to work. I followed a tutorial which showed how to make the Kinect detect our hands, especially our fingers. It's this one:

    import fingertracker.*;
    import SimpleOpenNI.*;
    
    FingerTracker fingers;
    SimpleOpenNI kinect;
    int threshold = 625;
    
    void setup() {
      size(640, 480);
    
    
      kinect = new SimpleOpenNI(this);
      kinect.enableDepth();
      kinect.setMirror(true);
    
      fingers = new FingerTracker(this, 640, 480);
      fingers.setMeltFactor(100);
    }
    
    void draw() {
    
      kinect.update();
      PImage depthImage = kinect.depthImage();
      image(depthImage, 0, 0);
    
    
      fingers.setThreshold(threshold);
    
    
      int[] depthMap = kinect.depthMap();
      fingers.update(depthMap);
    
    
      stroke(0,255,0);
      for (int k = 0; k < fingers.getNumContours(); k++) {
        fingers.drawContour(k);
      }
    
      // iterate over all the fingers found
      // and draw them as a red circle
      noStroke();
      fill(255,0,0);
      for (int i = 0; i < fingers.getNumFingers(); i++) {
        PVector position = fingers.getFinger(i);
        ellipse(position.x - 5, position.y -5, 10, 10);
      }
    
    
      fill(255,0,0);
      text(threshold, 10, 20);
    }
    
    
    void keyPressed(){
      if(key == '-'){
        threshold -= 10;
      }
    
      if(key == '='){
        threshold += 10;
      }
    }
    

    Everything works great, but now I'm trying to make it detect when my fingers are in a certain location of the window. I am creating a picture in Photoshop which will be displayed on the screen in Processing, and I want the JPG to have locations where several things happen when my fingers touch them (for example, objects appearing suddenly, other windows opening...). Is it possible? How can I do it?
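
    One way to do it, sketched under assumptions (the inRegion() helper and the region coordinates are hypothetical, not part of FingerTracker): hit-test each finger position against a rectangle in the same screen coordinates as the depth image, and trigger your reaction when a finger falls inside.

    // hypothetical helper: is point p inside the rectangle (rx, ry, rw, rh)?
    boolean inRegion(PVector p, float rx, float ry, float rw, float rh) {
      return p.x >= rx && p.x <= rx + rw && p.y >= ry && p.y <= ry + rh;
    }

    // inside draw(), after the existing finger loop:
    for (int i = 0; i < fingers.getNumFingers(); i++) {
      PVector position = fingers.getFinger(i);
      if (inRegion(position, 100, 100, 80, 80)) { // example hot-spot coordinates
        // react here: show an object, start an animation, etc.
      }
    }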

    Thank you for your future answers.

  • Gesture recognition and interactive animation using Kinect

    Hi, Sofia,

    If you feel comfortable on Mac, perfect, work on Mac. In my previous message I sent you a link for the installation of all the OpenNI requirements on Mac, so please take your time to complete the steps. You need to install Xcode, which surely you have already.

    As Jeremy said, it's easier to begin with Processing. For the silhouette-as-a-mask part you really don't need SimpleOpenNI; with OpenKinect you can get it perfectly. As you said, use the pixels inside the silhouette to show the content you want. But when we are talking about hand recognition, that is when you need something more potent, like SimpleOpenNI. About interaction with content, hmm... maybe you can establish a movement routine. For example: if the left hand is up, the right hand is down and the hips are moving, do the following... It's just an idea. Or check movement speed by calculating the distance between the same joint vector in two consecutive frames, etc. (see the sketch below). Has this helped you?
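
    A minimal sketch of that speed idea, assuming SimpleOpenNI skeleton tracking is already running; the prevHand variable and the joint choice are illustrative:

    PVector prevHand = new PVector(); // hand position from the previous frame

    // returns how far (in mm) the left hand moved since the last call
    float leftHandSpeed(SimpleOpenNI context, int userId) {
      PVector hand = new PVector();
      context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, hand);
      float moved = PVector.dist(hand, prevHand); // distance between consecutive frames
      prevHand.set(hand);
      return moved;
    }

    Call it once per tracked user in draw(); a large value means fast movement, so you can threshold it just like the pose conditions above.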

  • SimpleOpenNI Library error

    /* --------------------------------------------------------------------------
     * SimpleOpenNI UserCoordsys Test
     * --------------------------------------------------------------------------
     * Processing Wrapper for the OpenNI/Kinect library
     * http://code.google.com/p/simple-openni
     * --------------------------------------------------------------------------
     * prog: Max Rheiner / Interaction Design / zhdk / http://iad.zhdk.ch/
     * date: 05/06/2012 (m/d/y)
     * ----------------------------------------------------------------------------
     * This example shows how to set up a user-defined coordinate system.
     * You have to define the new null point + the x/z axis.
     * This can also be useful if you work with two independent cameras.
     * ----------------------------------------------------------------------------
     */

    import SimpleOpenNI.*;

    final static int CALIB_START     = 0;
    final static int CALIB_NULLPOINT = 1;
    final static int CALIB_X_POINT   = 2;
    final static int CALIB_Z_POINT   = 3;
    final static int CALIB_DONE      = 4;

    SimpleOpenNI context;
    boolean screenFlag = true;
    int calibMode = CALIB_START;

    PVector nullPoint3d = new PVector();
    PVector xDirPoint3d = new PVector();
    PVector zDirPoint3d = new PVector();
    PVector tempVec1 = new PVector();
    PVector tempVec2 = new PVector();
    PVector tempVec3 = new PVector();

    PMatrix3D userCoordsysMat = new PMatrix3D();

    void setup() {
    size(640, 480);
    smooth();

    context = new SimpleOpenNI(this);

    context.setMirror(false);

    // enable depthMap generation
    if (context.enableDepth() == false) {
        println("Can't open the depthMap, maybe the camera is not connected!");
        exit();
        return;
    }

    if (context.enableRGB() == false) {
        println("Can't open the rgbMap, maybe the camera is not connected or there is no rgbSensor!");
        exit();
        return;
    }

    // align depth data to image data
    context.alternativeViewPointDepthToImage();

    // create the font
    textFont(createFont("Georgia", 16));
    }

    void draw() {
    // update the cam
    context.update();

    if (screenFlag)
        image(context.rgbImage(), 0, 0);
    else
        image(context.depthImage(), 0, 0);

    // draw text background
    pushStyle();
    noStroke();
    fill(0, 200, 0, 100);
    rect(0, 0, width, 40);
    popStyle();

    switch(calibMode) {
    case CALIB_START:
      text("To start the calibration press SPACE!", 5, 30);
      break;
    case CALIB_NULLPOINT:
      text("Set the nullpoint with the left mousebutton", 5, 30);
      break;
    case CALIB_X_POINT:
      text("Set the x-axis with the left mousebutton", 5, 30);
      break;
    case CALIB_Z_POINT:
      text("Set the z-axis with the left mousebutton", 5, 30);
      break;
    case CALIB_DONE:
      text("New nullpoint is defined!", 5, 30);
      break;
    }

    // draw
    drawCalibPoint();

    // draw the user-defined coordinate system, with a size of 500 mm
    if (context.hasUserCoordsys()) {
    PVector temp = new PVector();
    PVector nullPoint = new PVector();

    pushStyle();
    
    strokeWeight(3);
    noFill();        
    
    context.convertRealWorldToProjective(new PVector(0, 0, 0), tempVec1);  
    stroke(255, 255, 255, 150);
    ellipse(tempVec1.x, tempVec1.y, 10, 10); 
    
    context.convertRealWorldToProjective(new PVector(500, 0, 0), tempVec2);        
    stroke(255, 0, 0, 150);
    line(tempVec1.x, tempVec1.y, 
    tempVec2.x, tempVec2.y); 
    
    context.convertRealWorldToProjective(new PVector(0, 500, 0), tempVec2);        
    stroke(0, 255, 0, 150);
    line(tempVec1.x, tempVec1.y, 
    tempVec2.x, tempVec2.y); 
    
    context.convertRealWorldToProjective(new PVector(0, 0, 500), tempVec2);        
    stroke(0, 0, 255, 150);
    line(tempVec1.x, tempVec1.y, 
    tempVec2.x, tempVec2.y); 
    
    popStyle();
    

    }
    }

    void drawCalibPoint() {
    pushStyle();

    strokeWeight(3);
    noFill();

    switch(calibMode) {
    case CALIB_START:
    break;

    case CALIB_NULLPOINT:
    context.convertRealWorldToProjective(nullPoint3d, tempVec1);

    stroke(255, 255, 255, 150);
    ellipse(tempVec1.x, tempVec1.y, 10, 10);  
    break;
    

    case CALIB_X_POINT:
    // draw the null point
    context.convertRealWorldToProjective(nullPoint3d, tempVec1);
    context.convertRealWorldToProjective(xDirPoint3d, tempVec2);

    stroke(255, 255, 255, 150);
    ellipse(tempVec1.x, tempVec1.y, 10, 10);  
    
    stroke(255, 0, 0, 150);
    ellipse(tempVec2.x, tempVec2.y, 10, 10);  
    line(tempVec1.x, tempVec1.y, tempVec2.x, tempVec2.y);
    
    break;
    

    case CALIB_Z_POINT:

    context.convertRealWorldToProjective(nullPoint3d, tempVec1);
    context.convertRealWorldToProjective(xDirPoint3d, tempVec2);
    context.convertRealWorldToProjective(zDirPoint3d, tempVec3);
    
    stroke(255, 255, 255, 150);
    ellipse(tempVec1.x, tempVec1.y, 10, 10);  
    
    stroke(255, 0, 0, 150);
    ellipse(tempVec2.x, tempVec2.y, 10, 10);  
    line(tempVec1.x, tempVec1.y, tempVec2.x, tempVec2.y);
    
    stroke(0, 0, 255, 150);
    ellipse(tempVec3.x, tempVec3.y, 10, 10);  
    line(tempVec1.x, tempVec1.y, tempVec3.x, tempVec3.y);
    
    break;
    

    case CALIB_DONE:

    break;
    

    }

    popStyle();
    }

    void keyPressed() {
    switch(key) {
    case '1':
    screenFlag = !screenFlag;
    break;
    case ' ':
    calibMode++;
    if (calibMode > CALIB_DONE) {
      calibMode = CALIB_START;
      context.resetUserCoordsys();
    }
    else if (calibMode == CALIB_DONE) {
      // set the calibration
      context.setUserCoordsys(nullPoint3d.x, nullPoint3d.y, nullPoint3d.z,
                              xDirPoint3d.x, xDirPoint3d.y, xDirPoint3d.z,
                              zDirPoint3d.x, zDirPoint3d.y, zDirPoint3d.z);

      println("Set the user define coordinatesystem");
      println("nullPoint3d: " + nullPoint3d);
      println("xDirPoint3d: " + xDirPoint3d);
      println("zDirPoint3d: " + zDirPoint3d);
    
      /*
      // test
      context.getUserCoordsysTransMat(userCoordsysMat);
      PVector temp = new PVector();
    
      userCoordsysMat.mult(new PVector(0, 0, 0), temp);         
      println("PVector(0,0,0): " + temp);
    
      userCoordsysMat.mult(new PVector(500, 0, 0), temp);        
      println("PVector(500,0,0): " + temp);
    
      userCoordsysMat.mult(new PVector(0, 500, 0), temp);        
      println("PVector(0,500,0): " + temp);
    
      userCoordsysMat.mult(new PVector(0, 0, 500), temp);
      println("PVector(0,0,500): " + temp);
      */
    }
    
    break;
    

    }
    }

    void mousePressed() {
    if (mouseButton == LEFT) {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

    switch(calibMode)
    {
    case CALIB_NULLPOINT:
      nullPoint3d.set(realWorldMap[index]);
      break;
    case CALIB_X_POINT:
      xDirPoint3d.set(realWorldMap[index]);
      break;
    case CALIB_Z_POINT:
      zDirPoint3d.set(realWorldMap[index]);
      break;
    }
    

    } else {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

    println("Point3d: " + realWorldMap[index].x + "," + realWorldMap[index].y + "," + realWorldMap[index].z);
    

    }
    }

    void mouseDragged() {
    if (mouseButton == LEFT) {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

    switch(calibMode)
    {
    case CALIB_NULLPOINT:
      nullPoint3d.set(realWorldMap[index]);
      break;
    case CALIB_X_POINT:
      xDirPoint3d.set(realWorldMap[index]);
      break;
    case CALIB_Z_POINT:
      zDirPoint3d.set(realWorldMap[index]);
      break;
    }
    

    }

    }

    Each time I try to run this code I get this error:

        Can't load SimpleOpenNI library (SimpleOpenNI64) : java.lang.UnsatisfiedLinkError: C:\Users\maryl\OneDrive\Documents\Processing\libraries\SimpleOpenNI\library\SimpleOpenNI64.dll: Can't find dependent libraries
        Verify if you installed SimpleOpenNI correctly. http://code.google.com/p/simple-openni/wiki/Installation
        A library relies on native code that's not available. Or only works properly when the sketch is run as a 32-bit application.

  • Gesture recognition and interactive animation using Kinect

    Hi, Sofia,

    I don't really know your programming level, so maybe working in Processing would be useful. First of all, it's important to know which platform you are going to work on. Here I send you a link to the installation of SimpleOpenNI on Mac El Capitan. Due to the fact that OpenNI is not open source anymore (I think Apple has bought it), you may have some problems installing it, but the link I passed to you worked for me. If you want to work on Windows, it can be useful to install the Kinect SDK and test it. Which kinds of interactions with the animations are you expecting from people?

  • simpleOpenNi body tracking doesn't work on mavericks!

    Sorry for having been disconnected from this thread for so long. Recently I needed to set up SimpleOpenNI again, this time for El Capitan. This link worked like a charm.

    Hope it's useful for you.

  • Help with SimpleOpenNI

    So I am getting the error:

    "SimpleOpenNI Error: Can't open device: DeviceOpen using default: no devices found Can't init SimpleOpenNI, maybe the camera is not connected!"

    when loading a sketch with SimpleOpenNI. I'm on windows 8.1 with Kinect v1 1414, processing 2.2.1, kinect sdk 1.8, SimpleOpenNI 1.96, OpenNI/NITE Win64 0.27 (from https://code.google.com/archive/p/simple-openni/downloads).

    It seems that the library is not communicating with the Kinect. The Kinect itself works fine, as I am able to get it running with kinect4winSDK.

    I found this

    https://forum.processing.org/two/discussion/comment/2677#Comment_2677

    And it suggests I should switch out the libfreenect.0.1.2.dylib and libusb-1.0.0.dylib from

    https://github.com/kronihias/head-pose-estimation/tree/master/mac-libs

    The directory mentioned is for Mac, so I haven't tried this on my Windows 8.1 yet; I'm not sure where the equivalent directory is. And the link supplies the Mac libfreenect.0.1.2.dylib and libusb-1.0.0.dylib.

    Anyway, can anyone with experience of this help guide me? Thanks

  • Anyone been able to install Processing 2.1 and import SimpleOpenNi library on RPi3?

    But do any newer versions support OpenNI? I know the Structure Sensor folks opened up OpenNI 2.2; I just don't know if the new Processing version for the Raspberry Pi will support it.

  • Anyone been able to install Processing 2.1 and import SimpleOpenNi library on RPi3?

    So I have been trying to install Processing 2.2.1 on the Raspberry Pi 3 Model B, since it was the last version to support the most powerful (formerly open-source) libraries for the Kinect. I currently have Processing 2.1 installed on my Mac along with the SimpleOpenNI libraries, and I now want to see if the RPi3 can run them as well. I've seen that some people on this forum have Processing 2 installed on their RPi2, so I was wondering if someone could give me detailed steps on how to build it, as I receive errors when trying to run it:

        ...java/bin/java: "weird ASCII characters here" not found
        /home/pi/Desktop/processing-2.2.1/java/bin/java: 2: /home/pi/Desktop/processing-2.2.1/java/bin/java: Syntax error: "(" unexpected
    

    As for the SimpleOpenNi, any direction on building it would be greatly appreciated as I get errors:

        /Installing OpenNI
        *********************************
    
        copying shared libraries...OK
        copying executables...OK
        copying include files...OK
        creating database directory...OK
        registering module 'libnimMockNodes.so' .../usr/bin/niReg: 1: /usr/bin/niReg: Syntax error: word unexpected (expecting ")")
    
  • Kinect Physics Example updated for Processing 3 and openKinect library

    I updated Amnon Owed's fantastic example of using Kinect with blob detection and Box2D physics from his Creative Applications tutorial. The OpenNI library he used doesn't work in Processing 3, so I used Shiffman's openKinect library instead, also building off the work of some people in this Processing thread. Only tested on a Kinect for Xbox 360 v1 model 1473 so far. Here it is..
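
    For reference, a minimal sketch of the openKinect (Kinect v1) depth setup the port relies on, based on Shiffman's openkinect-processing examples; method names can vary between library versions, so check them against the examples bundled with the library:

    import org.openkinect.processing.*;

    Kinect kinect;

    void setup() {
      size(640, 480);
      kinect = new Kinect(this);
      kinect.initDepth(); // start the depth stream
    }

    void draw() {
      background(0);
      image(kinect.getDepthImage(), 0, 0); // grayscale depth preview
      int[] depth = kinect.getRawDepth();  // raw depth values for blob detection
      // feed "depth" into the blob detection / Box2D steps from the tutorial
    }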