Help with Kinect

edited July 2015 in Kinect

Hello guys,

I'm trying to adjust a detail of an animation that uses the Kinect, but I'm not getting the expected result. I'll try to explain; apologies in advance, my native language is Portuguese...

Below is a screenshot of the animation (based on examples obtained from OpenProcessing; I won't post the links), where I am making some adjustments.

It works like this:

The mouse moves the bird using the captured coordinates; when the Kinect is enabled, the mouse movement is replaced by the Kinect coordinates through the Robot class...

When the Kinect captures the image at a distance, everything is fine: the Kinect captures the coordinates, which are rescaled to move across the whole screen.
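
Roughly, the mouse substitution works like this (a minimal sketch; moveCursorFromKinect is just an illustrative helper name, not the real code):

        import java.awt.Robot;
        import java.awt.AWTException;

        Robot robot;

        void setup() {
          size(640, 480);
          try {
            robot = new Robot();  // java.awt.Robot can move the system cursor
          } catch (AWTException e) {
            e.printStackTrace();
          }
        }

        // illustrative helper: move the cursor to a Kinect point rescaled to the screen
        void moveCursorFromKinect(float kx, float ky) {
          int sx = int(map(kx, 0, 640, 0, displayWidth));   // Kinect depth image is 640x480
          int sy = int(map(ky, 0, 480, 0, displayHeight));
          robot.mouseMove(sx, sy);
        }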

My problem is that I need to track the hand while the user is sitting, because the objective of the project is to benefit people who use a wheelchair, for example, and have little mobility.

The Kinect seems to give us a good chance of doing this, but I ran into this detail: the model I was using only captures the coordinates while standing.

But I found another approach in other code (if necessary, I can post the links).

The question then is: when I increase the screen size (larger than the resolution of the Kinect), an error occurs; from what I saw, the loop cannot cover an area larger than the Kinect's resolution.

Below is the sample code ...

I don't know if I got the idea across; anyway, thank you for your attention.

Thank you,

Code excerpt:

        void atualizaKinect(){
          kinect.update(); // update the camera

          if (rastreamentoIMG){
            // let's be honest, this is kind of janky: we just seed closestValue
            // with a huge(ish) initial value. It works, but it's not very elegant IMO.
            closestValue = 8000;

            // the Kinect depth image as a single-dimensional array of depth values
            int[] depthValues = kinect.depthMap();

            /* a failed attempt at remapping the depth values themselves:
            for (int j = 0; j < depthValues.length; j++){
              depthValues[j] = int(map(depthValues[j], 0, w, 0, 640));
            }*/

            // this breaks our array down into rows
            for(int y = 0; y < 480; y++){
              // this breaks our array down into specific pixels in each row
              for(int x = 0; x < 640; x++){
                // this pulls out the specific array position
                int i = x + y * 640;
                int current = depthValues[i];

                // now we're on to comparing them!
                if (current > 0 && current < closestValue){
                  closestValue = current;
                  closestX = x;
                  closestY = y;
                }
              }
            }

            // draw the depth image on the screen (once per frame, after the scan)
            image(kinect.depthImage(), 0, 0);

            // draw that swanky red circle identifying the closest point
            fill(255, 0, 0); // this sets the colour to red
            ellipse(closestX, closestY, 25, 25);

            // rescale pX and pY from the point captured by the Kinect to the screen size
            pX = int(map(closestX, 0, 640, 0, w));
            pY = int(map(closestY, 0, 480, 0, h));
          } else {
            kinectDepth = kinect.depthImage(); // get Kinect data
            userID      = kinect.getUsers();   // get all user IDs of tracked users

            // loop through each user to see if tracking
            for(int i = 0; i < userID.length; i++){
              // if Kinect is tracking a certain user, then get the joint vectors
              if(kinect.isTrackingSkeleton(userID[i])){
                // get confidence level that Kinect is tracking the head
                confidence = kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_HEAD, confidenceVector);

                // if tracking confidence is beyond the threshold, then track the user
                // -- compares the captured value ("calibration")
                if(confidence > confidenceLevel){
                  // get 3D position of head/hands/feet
                  kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_HEAD, headPosition);

                  kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_RIGHT_HAND, handPositionRight);
                  kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_LEFT_HAND, handPositionLeft);

                  kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_RIGHT_FOOT, footPositionRight);
                  kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_LEFT_FOOT, footPositionLeft);

                  // convert real-world points to projective space
                  kinect.convertRealWorldToProjective(headPosition, headPosition);

                  kinect.convertRealWorldToProjective(handPositionRight, handPositionRight);
                  kinect.convertRealWorldToProjective(handPositionLeft, handPositionLeft);

                  kinect.convertRealWorldToProjective(footPositionRight, footPositionRight);
                  kinect.convertRealWorldToProjective(footPositionLeft, footPositionLeft);

                  // create a distance scalar related to the depth in the z dimension
                  // (525 is approximately the Kinect depth camera's focal length in pixels)
                  distanceScalarHead      = 525/headPosition.z;
                  distanceScalarHandRight = 525/handPositionRight.z;
                  distanceScalarHandLeft  = 525/handPositionLeft.z;
                  distanceScalarFootRight = 525/footPositionRight.z;
                  distanceScalarFootLeft  = 525/footPositionLeft.z;

                  // update the global variables pX and pY, replacing the mouse movement with the Kinect
                  //pX = int(handPositionRight.x);
                  //pY = int(handPositionRight.y);

                  // rescale pX and pY from the point captured by the Kinect to the screen size
                  pX = int(map(int(handPositionRight.x), 0, 640, 0, w));
                  pY = int(map(int(handPositionRight.y), 0, 480, 0, h));

                  distanciaHandRight = dist(posX, posY, handPositionRight.x, handPositionRight.y);
                  distanciaHead      = dist(posX, posY, headPosition.x, headPosition.y);

                  stroke(userColor[i]); // change draw colour based on user id
                  fill(userColor[i]);   // fill the ellipse with the same colour

                  //println("pX/pY - 1 : " + pX + " - " + pY);

                  if (desenhaEsqueleto) drawSkeleton(userID[i]); // draw the rest of the body
                }
              }
            }
          }
        }
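
The excerpt uses globals declared elsewhere in the sketch; for context, they are roughly these (a reconstruction on my part, the values are just examples):

        SimpleOpenNI kinect;
        boolean rastreamentoIMG  = true;   // true: raw depth tracking; false: skeleton tracking
        boolean desenhaEsqueleto = false;  // draw the skeleton overlay

        int closestValue, closestX, closestY;
        int pX, pY;                        // rescaled cursor position fed to the Robot
        int w = 800, h = 600;              // target screen size (example values)
        float posX, posY;                  // reference point for the distance checks

        float confidence, confidenceLevel = 0.5;  // tracking confidence threshold (example)
        PVector confidenceVector  = new PVector();
        PVector headPosition      = new PVector();
        PVector handPositionRight = new PVector(), handPositionLeft = new PVector();
        PVector footPositionRight = new PVector(), footPositionLeft = new PVector();
        float distanceScalarHead, distanceScalarHandRight, distanceScalarHandLeft;
        float distanceScalarFootRight, distanceScalarFootLeft;
        float distanciaHandRight, distanciaHead;

        int[] userID;
        PImage kinectDepth;
        color[] userColor = { color(255,0,0), color(0,255,0), color(0,0,255),
                              color(255,255,0), color(255,0,255), color(0,255,255) };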


Answers

  • Hello again,

    I don't think the previous post made clear which problem needs solving, so I will try to explain better.

    This code works:

        import SimpleOpenNI.*;
        SimpleOpenNI Kinect;

        int closestValue;
        int closestX;
        int closestY;

        void setup() {
          size(640, 480);
          //size(800, 600);
          Kinect = new SimpleOpenNI(this);
          Kinect.enableDepth();
        }

        void draw(){
          closestValue = 8000;

          Kinect.update();

          int[] depthValues = Kinect.depthMap();

          // this breaks our array down into rows
          for(int y = 0; y < 480; y++){
          //for(int y = 0; y < 600; y++){

            // this breaks our array down into specific pixels in each row
            for(int x = 0; x < 640; x++){
            //for(int x = 0; x < 800; x++){
              // this pulls out the specific array position
              int i = x + y * 640;
              //int i = x + y * 800;
              int current = depthValues[i];

              // now we're on to comparing them!
              if (current > 0 && current < closestValue){
                closestValue = current;
                closestX = x;
                closestY = y;
              }
            }
          }

          // draw the depth image on the screen
          image(Kinect.depthImage(), 0, 0);

          // draw that swanky red circle identifying it
          fill(255, 0, 0); // this sets the colour to red
          ellipse(closestX, closestY, 25, 25);

          // rescale pX and pY from the point captured by the Kinect to the screen size
          int pX = int(map(closestX, 0, 640, 0, width));
          int pY = int(map(closestY, 0, 480, 0, height));

          fill(0, 0, 255);
          ellipse(pX, pY, 25, 25);
        }
    

    But this other code does not work; the only things I changed were the parameters related to the screen size.

    Changed code:

        import SimpleOpenNI.*;
        SimpleOpenNI Kinect;

        int closestValue;
        int closestX;
        int closestY;

        void setup() {
          //size(640, 480);
          size(800, 600);
          Kinect = new SimpleOpenNI(this);
          Kinect.enableDepth();
        }

        void draw(){
          closestValue = 8000;

          Kinect.update();

          int[] depthValues = Kinect.depthMap();

          // this breaks our array down into rows
          //for(int y = 0; y < 480; y++){
          for(int y = 0; y < 600; y++){

            // this breaks our array down into specific pixels in each row
            //for(int x = 0; x < 640; x++){
            for(int x = 0; x < 800; x++){
              // this pulls out the specific array position
              //int i = x + y * 640;
              int i = x + y * 800;
              int current = depthValues[i]; // fails here: i reaches 307200, past the end of the array

              // now we're on to comparing them!
              if (current > 0 && current < closestValue){
                closestValue = current;
                closestX = x;
                closestY = y;
              }
            }
          }

          // draw the depth image on the screen
          image(Kinect.depthImage(), 0, 0);

          // draw that swanky red circle identifying it
          fill(255, 0, 0); // this sets the colour to red
          ellipse(closestX, closestY, 25, 25);

          // rescale pX and pY from the point captured by the Kinect to the screen size
          int pX = int(map(closestX, 0, 640, 0, width));
          int pY = int(map(closestY, 0, 480, 0, height));

          fill(0, 0, 255);
          ellipse(pX, pY, 25, 25);
        }
    

    As I understand it, the Kinect returns a 640 x 480 resolution. Before, I got around this by using the map() function to rescale the values to the screen size, but in this case I don't know how to do it.

    The error that occurs is: ArrayIndexOutOfBoundsException: 307200

    The question then is:

    Would it be possible to work with a screen larger than the Kinect's resolution in this case? How could I do that?
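
    For reference, a minimal sketch of the direction I imagine (my assumption: depthMap() always returns 640 * 480 = 307200 entries, no matter the window size), keeping the scan loops at the sensor resolution and rescaling only when drawing; I don't know if this is the right approach:

        import SimpleOpenNI.*;
        SimpleOpenNI Kinect;

        int closestValue, closestX, closestY;

        void setup() {
          size(800, 600);                  // window larger than the Kinect resolution
          Kinect = new SimpleOpenNI(this);
          Kinect.enableDepth();
        }

        void draw() {
          closestValue = 8000;
          Kinect.update();

          int[] depthValues = Kinect.depthMap();   // always 640 * 480 entries

          for (int y = 0; y < 480; y++) {          // scan at the sensor resolution...
            for (int x = 0; x < 640; x++) {
              int i = x + y * 640;                 // ...so i stays below 307200
              int current = depthValues[i];
              if (current > 0 && current < closestValue) {
                closestValue = current;
                closestX = x;
                closestY = y;
              }
            }
          }

          image(Kinect.depthImage(), 0, 0);        // depth image stays 640x480

          // rescale only the output coordinates to the larger window
          int pX = int(map(closestX, 0, 640, 0, width));
          int pY = int(map(closestY, 0, 480, 0, height));

          fill(0, 0, 255);
          ellipse(pX, pY, 25, 25);
        }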

    Again many thanks to all.

  • José, I'm from Brazil too! I don't know how to help you, but add me on Facebook, **Paulo Purcino**, and we can exchange experiences!

  • Hello Paulo,

    Thank you, I will add your contact...

    I am putting up a post about the subject I am mentioning, to see if we can reach some solution.

    Thanks,

  • Hello again,

    I think I still haven't been able to explain very well what needs to be done; these links give the most accurate idea of what I need, i.e. tracking the "skeleton" while sitting.

    YouTube links:

    Those links are connected to this article: http://www.codeproject.com/Articles/260741/Sitting-posture-recognition-with-Kinnect-sensor

    In it, they explain some principles of how to track a sitting user; the problem is that the development follows another line, in C#.

    Has anyone seen something similar in Processing? That is, as mentioned before, being able to do the tracking while sitting.

    There is also a similar discussion at this link: http://forum.processing.org/two/discussion/3726/skeleton-alive

    Well, I'll keep researching the subject; any information would be helpful.

    Thank you all

  • I made a mess of my post in Portuguese, sorry Paulo, a few words were missing... haha...

    thanks

  • Hello again,

    Another thing I don't think I made very clear is the reason for wanting to do the tracking while sitting: the project is intended to be used mostly by people who use a wheelchair and have little mobility, and I still believe that with the Kinect they will have good possibilities of interaction.

    Thank you all for now.

  • Hello,

    I will leave a post here, since it may be useful to others, about the solution I found for tracking while sitting. It is actually what was already in this link:

    http://forum.processing.org/two/discussion/3726/skeleton-alive

    It is not a perfect solution yet, because it needs a certain movement to start tracking, but a crucial point was the tip about the Kinect's position: it should be at chest height so that it can track a sitting user. The information is at this link (in Portuguese):

    https://support.xbox.com/pt-BR/xbox-360/kinect/accessibility-kinect#831aff2be3aa4736a8db1197d8c01c8e

    Now we can do some testing and see what we achieve as we start using it...

    Thanks,

    import SimpleOpenNI.*;
    SimpleOpenNI Kinect;
    
    PrintWriter output;
    
    //... vector declarations ...
    PVector rtKnee, leftKnee, rtShoulder, leftShoulder, rtElbow, leftElbow, rtHand, leftHand, rtFoot, leftFoot, rtHip, leftHip;
    
    // Other variables
    boolean desenhaEsqueleto = true;
    boolean desenhaMaoDir    = false;
    boolean desenhaMaoEsq    = false;
    
    void setup(){
      Kinect = new SimpleOpenNI(this);
    
      // enable depthMap generation
      if(Kinect.enableDepth() == false){
        println("Can't open the depthMap, maybe the camera is not connected!");
        exit();
        return;
      }
    
      // enable skeleton generation for all joints
      //myKinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
      Kinect.enableUser();
      Kinect.setMirror(false);
    
      smooth();
      size(Kinect.depthWidth(), Kinect.depthHeight());
      background(200,0,0);
    }
    
    void draw(){
      background(0);
    
      //Get new data from the Kinect
      Kinect.update();
    
      //Draw camera image on the screen
      //image(Kinect.rgbImage(),0,0);
      //image(Kinect.depthImage(),0,0);
    
      //Each new person is identified with a number, so draw up to 5 people
      for(int userId=1; userId<=5; userId++){
      //for (int userId=1; userId<=1;userId++){
        //Check to see if tracking
        if(Kinect.isTrackingSkeleton(userId)){
          stroke(255,0,0);
          strokeWeight(3);
    
          getBodyPoints(userId);
    
          if (desenhaEsqueleto){ 
            drawSkeleton(userId);
            //There are 24 possible joints that openNI tracks.  If we can get the point, draw it.    
            for(int bodyPart=1; bodyPart<=24; bodyPart++){
              //get the point as a vector
              PVector bodyPoint = getBodyPoint(userId, bodyPart);
              //System.out.println("Body point: " + bodyPoint);
    
              fill(0,255,0,128);
              ellipse(bodyPoint.x, bodyPoint.y, 15, 15);
            }
    
            //draw the head bigger -- Demonstrates use of Constant SKEL_HEAD
            PVector headPoint = getBodyPoint(userId, SimpleOpenNI.SKEL_HEAD);
            ellipse(headPoint.x, headPoint.y, 50,50);
          }
    
          if (desenhaMaoDir){
            fill(255, 0, 0);
            ellipse(rtHand.x, rtHand.y, 20, 20);  
          }
    
          if (desenhaMaoEsq){
            fill(255, 0, 0);
            ellipse(leftHand.x, leftHand.y, 20, 20);  
          }
    
      /*PVector rtHand2D     = new PVector(rtHand.x, rtHand.y);
      PVector rtElbow2D    = new PVector(rtElbow.x, rtElbow.y);
      PVector rtShoulder2D = new PVector(rtShoulder.x, rtShoulder.y);
      PVector rtHip2D      = new PVector(rtHip.x, rtHip.y);

      // compute the axes against which we want to measure the angles
      PVector torsoOrientation    = PVector.sub(rtShoulder2D, rtHip2D);
      PVector upperArmOrientation = PVector.sub(rtElbow2D, rtShoulder2D);

      // measure the right-side joint angles against those axes
      float shoulderAngle = angleOf(rtElbow2D, rtShoulder2D, torsoOrientation);
      float elbowAngle    = angleOf(rtHand2D, rtElbow2D, upperArmOrientation);

      // ... and the same for the left side ...
      PVector ltHand2D     = new PVector(leftHand.x, leftHand.y);
      PVector ltElbow2D    = new PVector(leftElbow.x, leftElbow.y);
      PVector ltShoulder2D = new PVector(leftShoulder.x, leftShoulder.y);
      PVector ltHip2D      = new PVector(leftHip.x, leftHip.y);

      // compute the left-side axes and angles
      PVector lttorsoOrientation    = PVector.sub(ltShoulder2D, ltHip2D);
      PVector ltupperArmOrientation = PVector.sub(ltElbow2D, ltShoulder2D);
      float ltshoulderAngle = angleOf(ltElbow2D, ltShoulder2D, lttorsoOrientation);
      float ltelbowAngle    = angleOf(ltHand2D, ltElbow2D, ltupperArmOrientation);

      rtShoulder.normalize();
      rtElbow.normalize();
      rtHand.normalize();

      leftShoulder.normalize();
      leftElbow.normalize();
      leftHand.normalize();

      println("right shoulder pos: " + rtShoulder.x + "," + rtShoulder.y + "," + rtShoulder.z +
              " right shoulder angle: " + int(shoulderAngle) +
              " right elbow pos: " + rtElbow.x + "," + rtElbow.y + "," + rtElbow.z +
              " right elbow angle: " + int(elbowAngle) +
              " right hand pos: " + rtHand.x + "," + rtHand.y + "," + rtHand.z +
              " left shoulder pos: " + leftShoulder.x + "," + leftShoulder.y + "," + leftShoulder.z +
              " left shoulder angle: " + int(ltshoulderAngle) +
              " left elbow pos: " + leftElbow.x + "," + leftElbow.y + "," + leftElbow.z +
              " left elbow angle: " + int(ltelbowAngle) +
              " left hand pos: " + leftHand.x + "," + leftHand.y + "," + leftHand.z);*/
        }else{
          //print("Skeleton is not being tracked\n");
        }
      }
    }
    
    void keyPressed(){
      if (key == 'E' || key == 'e') desenhaEsqueleto = !desenhaEsqueleto;
      if (key == 'R' || key == 'r') desenhaMaoDir    = !desenhaMaoDir;
      if (key == 'L' || key == 'l') desenhaMaoEsq    = !desenhaMaoEsq; 
    }
    
    void getBodyPoints(int userId) {
      rtKnee       = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_KNEE);
      leftKnee     = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_KNEE);
      rtShoulder   = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
      leftShoulder = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER);
      rtElbow      = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      leftElbow    = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_ELBOW);
      rtHand       = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_HAND);
      leftHand     = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_HAND);
      rtFoot       = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_FOOT);
      leftFoot     = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_FOOT);
      rtHip        = getBodyPoint(userId, SimpleOpenNI.SKEL_RIGHT_HIP);
      leftHip      = getBodyPoint(userId, SimpleOpenNI.SKEL_LEFT_HIP);
    }
    
    /** Translate a skeleton point from real-world 3D space into
        2D projective space for drawing. **/
    PVector getBodyPoint(int user, int bodyPart) {
      PVector jointPos = new PVector(), jointPos_Proj = new PVector();
      Kinect.getJointPositionSkeleton(user, bodyPart, jointPos);
      Kinect.convertRealWorldToProjective(jointPos, jointPos_Proj);
    
      //System.out.println("Body part: " + bodyPart + " position: " + jointPos);
      return jointPos_Proj;
    }
    
    float angleOf(PVector one, PVector two, PVector axis) {
      PVector limb = PVector.sub(two, one);
      return degrees(PVector.angleBetween(limb, axis));
    }
    
    /** Draw the skeleton with the selected joints. SimpleOpenNI has a method called drawLimb,
        which simply draws a line between two body points. */
    void drawSkeleton(int userId){
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
    
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
    
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
    
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
    
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
    
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
      Kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT); 
    }
    
    // SimpleOpenNI has a number of event handlers that are triggered when "user events" occur.
    void onNewUser(SimpleOpenNI curContext, int userId){
      println("onNewUser - userId: " + userId);
      println("  start skeleton tracking");

      Kinect.startTrackingSkeleton(userId);
    }
    
    void onLostUser(int userId){
      println("onLostUser - userId: " + userId);
    }
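
    (In this sketch, pressing 'e' toggles the skeleton overlay, 'r' the right-hand marker, and 'l' the left-hand marker; skeleton tracking starts automatically when a new user is detected, via onNewUser.)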
    