How to delay the feed of the Kinect 2 - Point Cloud


Hello Processing people!

I am running into some difficulties with a Processing sketch I've been working on: I'm trying to make it so that when I move in front of my Kinect 2, I see myself move in the sketch a few seconds (or frames) later.

I managed to do it with the depth threshold image, but I've had no luck with the point cloud, since my existing code only works with a PImage. I'm guessing the best way to do this would be to store the array of data from the depth map produced by the Kinect and recall it (draw it) later (after 3 seconds, for instance), creating the required delay effect, but I'm having trouble figuring out which part of the data needs to be stored and how. Any help or advice would be really appreciated!
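
Roughly, this is the kind of thing I have in mind (untested sketch, assuming kinect2 is already set up as in the code below; I'm not sure whether getRawDepth() returns a fresh array each frame, so I copy it to be safe):

// Untested idea: buffer the raw depth arrays and build the point cloud from the oldest one.
ArrayList<int[]> depthFrames = new ArrayList<int[]>();
int delayFrames = 90; // roughly 3 seconds at 30 fps

void bufferDepthFrame() {
  // store a copy of the current raw depth frame
  depthFrames.add(kinect2.getRawDepth().clone());

  // drop the oldest frames once we have more than we need
  while (depthFrames.size() > delayFrames + 1) {
    depthFrames.remove(0);
  }
}

int[] delayedDepthFrame() {
  // the first entry is the oldest frame we kept; it only reaches the full
  // delay once the buffer has filled up
  return depthFrames.get(0);
}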

Thanks a lot.

(I've posted part of my code below, if that helps).

// Draw
void draw() {
  background(0);

  // Translate and rotate

  pushMatrix();
  translate(width/2, height/2, 50);
  rotateY(a);

    // We're just going to calculate and draw every 2nd pixel
    int skip = 2;

    // Get the raw depth as array of integers
    int[] depth = kinect2.getRawDepth();

    // Creating the point cloud from the raw depth data of the kinect 2
    stroke(255);
    beginShape(POINTS);
    for (int x = 0; x < kinect2.depthWidth; x+=skip) {
      for (int y = 0; y < kinect2.depthHeight; y+=skip) {
        int offset = x + y * kinect2.depthWidth;
        int d = depth[offset];
        // calculate the x, y, z camera position based on the depth information
        PVector point = depthToPointCloudPos(x, y, d);

        // only draw the part between minThresh & maxThresh 
        if (d > minThresh && d < maxThresh) {
          // Draw a point
          vertex(point.x, point.y, point.z);
        }
      }
    }
    endShape();

  popMatrix();
}

// calculate the xyz camera position based on the depth data
PVector depthToPointCloudPos(int x, int y, float depthValue) {
  PVector point = new PVector();
  point.z = (depthValue);// / (1.0f); // Convert from mm to meters
  point.x = (x - CameraParams.cx) * point.z / CameraParams.fx;
  point.y = (y - CameraParams.cy) * point.z / CameraParams.fy;
  return point;
}

Answers

  • Answer ✓

    I would save the smallest set of data, the data AFTER you've filtered out the bits you're not interested in. And I'd save it after any transformations have been done, so that the only thing you need to do with it after the delay is iterate through it and display it. In this case, just save the PVector points you pass to vertex().

    But 3 seconds is a lot of data at 60 fps.
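
    Rough numbers, assuming the Kinect 2's 512x424 depth image and the skip of 2 from your sketch:

    // points per frame (upper bound, before thresholding):
    //   (512 / 2) * (424 / 2) = 256 * 212 = 54,272 PVectors
    // 3 seconds at 60 fps = 180 frames
    // 180 * 54,272 ≈ 9.8 million PVectors held in memory at once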

  • Hi koogs, thanks a lot for the quick reply! It was really helpful. However, I'm still struggling to figure out how to store the vertex points (x, y and z) for each "frame" in an appropriate array; would you have any advice for this? So far I've been looking at saving the "frames" in a PGraphics object and recalling them after a short delay (more like 1 second in the end), but that approach also seems to be deprecated with the Kinect 2 and Processing 3. Thanks again for the help!

  • Thanks again for your help, koogs! I managed to get it working in the end by following your advice. I'll leave the "delay" part of the code here so that anyone looking to tackle a similar problem can hopefully get inspired:

    // defining the arrays
    ArrayList<ArrayList> pointClouds;
    int count = 30; // number of frames to store before drawing

    void setup() {
      // size(), Kinect 2 setup etc. stay the same as in the sketch above
      pointClouds = new ArrayList<ArrayList>();
    }

    void draw() {
      background(0);

      // the point cloud for the current frame
      ArrayList pointCloud = new ArrayList();
      if (pointClouds.size() > count) {
        pointClouds.remove(0); // removing "frame" 0 after it has been drawn
      }
      pointClouds.add(pointCloud);

      // We're just going to calculate and draw every 2nd pixel
      int skip = 2;

      // Get the raw depth as array of integers
      int[] depth = kinect2.getRawDepth();

      // Creating the point cloud from the raw depth data of the Kinect 2
      for (int x = 0; x < kinect2.depthWidth; x += skip) {
        for (int y = 0; y < kinect2.depthHeight; y += skip) {
          int offset = x + y * kinect2.depthWidth;
          int d = depth[offset];
          // calculate the x, y, z camera position based on the depth information
          PVector point = depthToPointCloudPos(x, y, d);
          pointCloud.add(point); // adding the point to the current pointCloud
        }
      }

      // Drawing each "frame" with a delay of n frames.
      pushMatrix();
      stroke(255);
      beginShape(POINTS);
      if (pointClouds.size() > count) { // once the array is full, start drawing
        ArrayList currentPointCloud = pointClouds.get(0); // drawing "frame" 0
        for (int i = 0; i < currentPointCloud.size(); i++) {
          PVector vector = (PVector) currentPointCloud.get(i);
          vertex(-vector.x, vector.y, vector.z);
        }
      }
      endShape();
      popMatrix();
    }

    // calculate the xyz camera position based on the depth data
    PVector depthToPointCloudPos(int x, int y, float depthValue) {
      PVector point = new PVector();
      point.z = depthValue; // raw depth value (in mm)
      point.x = (x - CameraParams.cx) * point.z / CameraParams.fx;
      point.y = (y - CameraParams.cy) * point.z / CameraParams.fy;
      return point;
    }

  • Cool. I have a draft reply here that I never got around to sending because I wasn't sure about it 8) It's very similar to what you ended up with:

    An array of ArrayLists of PVectors: you know how many frames' worth of data you want to save (the array), but you don't know how many pixels you'll be storing in each frame (the ArrayList).
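
    Something like this, roughly (untested, just to show the shape of the structure):

    int count = 30;                                      // how many frames of delay
    ArrayList<PVector>[] frames = new ArrayList[count];  // one slot per frame
    int writeIndex = 0;

    void storeFrame(ArrayList<PVector> points) {
      frames[writeIndex] = points;               // overwrite the oldest slot
      writeIndex = (writeIndex + 1) % count;     // wrap around, ring-buffer style
    }

    ArrayList<PVector> delayedFrame() {
      // the slot we're about to overwrite holds the oldest frame;
      // it's null until 'count' frames have been stored
      return frames[writeIndex];
    }

    (The ArrayList-of-ArrayLists version you ended up with does the same job; the fixed-size array just saves the remove(0) housekeeping.)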

  • Hi Aurel! I'm currently trying to add a delay to a Kinect depth threshold image, but without success. I see you wrote that you managed it; could you maybe help me?
