In a room of 5x5 meters I have to find a person's position (x,y).

Hi,

In a room of 5x5 meters, I have to determine the position (x,y) of people. I tried HC-SR05 ultrasonic sensors, but they are not accurate: they return erroneous, off-scale values. Could I use a Kinect? Would one be enough? Is there existing code that addresses this problem?

Thanks for your help.

Answers

  • I believe that you cannot use multiple Kinects, because of the way the Kinect's IR projector works -- unless you mount them on the ceiling and make sure their projected patterns form a non-overlapping checkerboard on the ground. To understand why, watch the end of this video:

    For a single Kinect the depth image can only give depth for silhouettes within its line of sight.

  • OK, thanks. At this point I have a problem: what can I use as a sensor?

  • Is the ceiling high enough that you could use one Kinect mounted on the ceiling?

    Are these performers, or crowds? Will people be in front of / behind each other?

    Not enough information in your description.

  • Unfortunately, I do not know in advance what the room will look like. I thought I would put the Kinect on an audio monitor stand. There will be no more than two people in the room; the person closest to the Kinect will determine the position. People will be looking at a picture below the Kinect.

  • Oh -- I thought you needed an x,y for every person in a crowd!

    If you only need the x,y for the nearest person in view of the camera then finding it with a Kinect on a stand should be easy.

    https://github.com/jagracar/kinectSketches/tree/master/src/jagracar/kinect/util
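
    The "nearest person" idea can be sketched without any Kinect library at all: scan the depth frame and keep the closest valid pixel. This is a minimal, illustrative sketch in plain Java; the 640x480 frame size and the raw value 2047 meaning "no reading" match the Kinect v1, but the class and method names are assumptions, not from a specific library:

    ```java
    // Hypothetical sketch: find the pixel of the closest valid point
    // in a Kinect v1 depth frame (640x480, raw 11-bit values 0..2047).
    public class NearestPoint {
      static final int W = 640, H = 480;

      // Returns {x, y, rawDepth} of the closest valid pixel, or null if none.
      static int[] findNearest(int[] depth) {
        int bestX = -1, bestY = -1, best = 2047; // 2047 = "no reading" on Kinect v1
        for (int y = 0; y < H; y++) {
          for (int x = 0; x < W; x++) {
            int d = depth[x + y * W];
            if (d > 0 && d < best) {
              best = d;
              bestX = x;
              bestY = y;
            }
          }
        }
        return bestX < 0 ? null : new int[] { bestX, bestY, best };
      }

      public static void main(String[] args) {
        // Synthetic frame: mostly out of range, two "people" at known pixels.
        int[] frame = new int[W * H];
        java.util.Arrays.fill(frame, 2047);
        frame[100 + 200 * W] = 900; // farther point
        frame[320 + 240 * W] = 600; // nearest point
        int[] p = findNearest(frame);
        System.out.println(p[0] + "," + p[1] + "," + p[2]); // 320,240,600
      }
    }
    ```

    In a real sketch you would run this per frame on the library's depth array and then convert the winning pixel to meters.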

  • I'll try to port the localization logic from floor.java into Processing. Thanks.

  • edited June 2017

    I can't understand how to convert the Kinect position (pixels) into a room position (meters). From shiffman.net I used this code:

    /**************/

    float rawDepthToMeters(int depthValue) {
      if (depthValue < 2047) {
        return (float) (1.0 / ((double) depthValue * -0.0030711016 + 3.3309495161));
      }
      return 0.0f; // 2047 means "no reading"
    }

    // Converts a depth-image pixel (x, y) plus its raw depth value
    // into camera-space coordinates in meters.
    PVector depthToWorld(int x, int y, int depthValue) {
      final double fx_d = 1.0 / 5.9421434211923247e+02;
      final double fy_d = 1.0 / 5.9104053696870778e+02;
      final double cx_d = 3.3930780975300314e+02;
      final double cy_d = 2.4273913761751615e+02;

      // Back-project the pixel into 3D camera space.
      // depthLookUp is a table precomputed from rawDepthToMeters();
      // calling the function directly is equivalent.
      PVector result = new PVector();
      double depth = depthLookUp[depthValue]; // rawDepthToMeters(depthValue);
      result.x = (float) ((x - cx_d) * depth * fx_d);
      result.y = (float) ((y - cy_d) * depth * fy_d);
      result.z = (float) depth;
      return result;
    }
    

    /***********/

    but the values returned do not seem to be in meters, at least as far as I can tell.

    Thanks

  • Can someone help me?

  • Edit your post (gear icon in the top-right corner of your post), select your code, and hit Ctrl+O to format it. Make sure there is an empty line above and below your code.

    Kf
