Get Depth Value for Each Pixel

edited April 2017 in Kinect

Hi All,

I'd really appreciate some help on this, I'm using the KinectPV2 library which is great but very poorly documented.

With reference to the example "MapDepthToColor" copied below, I'm trying to retrieve the depth for each RGB pixel. I'm having a hard time deciphering what's going on; can someone help me out?

Thanks in advance, Charles

/*
Thomas Sanchez Lengeling.
http://codigogenerativo.com/

KinectPV2, Kinect for Windows v2 library for processing

Color to depth Example,
Color Frame is aligned to the depth frame
*/

import KinectPV2.*;

KinectPV2 kinect;

int [] depthZero;

//BUFFER ARRAY TO CLEAR THE PIXELS
PImage depthToColorImg;

void setup() {
  size(1024, 848, P3D);

  depthToColorImg = createImage(512, 424, PImage.RGB);
  depthZero    = new int[ KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth];

  //SET THE ARRAY TO 0s
  for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
    for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
      depthZero[424*i + j] = 0;
    }
  }

  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableColorImg(true);
  kinect.enablePointCloud(true);

  kinect.init();
}

void draw() {
  background(0);

  float [] mapDCT = kinect.getMapDepthToColor(); // Length: 434,176

  //get the raw data from depth and color
  int [] colorRaw = kinect.getRawColor(); // Length: 2,073,600

  //clear the pixels
  PApplet.arrayCopy(depthZero, depthToColorImg.pixels);

  int count = 0;
  depthToColorImg.loadPixels();
  for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
    for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {

      //incoming pixels 512 x 424 with position in 1920 x 1080
      float valX = mapDCT[count * 2 + 0];
      float valY = mapDCT[count * 2 + 1];

      //maps the pixels to 512 x 424, not necessary but looks better
      int valXDepth = (int)((valX/1920.0) * 512.0);
      int valYDepth = (int)((valY/1080.0) * 424.0);

      int  valXColor = (int)(valX);
      int  valYColor = (int)(valY);

      if ( valXDepth >= 0 && valXDepth < 512 && valYDepth >= 0 && valYDepth < 424 &&
        valXColor >= 0 && valXColor < 1920 && valYColor >= 0 && valYColor < 1080) {
        color colorPixel = colorRaw[valYColor * 1920 + valXColor];
        //color colorPixel = depthRaw[valYDepth*512 + valXDepth];
        depthToColorImg.pixels[valYDepth * 512 + valXDepth] = colorPixel;
      } 
      count++;
    }
  }
  depthToColorImg.updatePixels();

  image(depthToColorImg, 0, 424);
  image(kinect.getColorImage(), 0, 0, 512, 424);
  image(kinect.getDepthImage(), 512, 0);

  text("fps: "+frameRate, 50, 50);
}
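For anyone trying to follow the indexing in the loop above: mapDCT stores one interleaved (x, y) pair per depth pixel, and the raw colour buffer is row-major, so a colour pixel at (x, y) lives at index y * 1920 + x. This can be checked with a tiny standalone Java sketch (no Kinect needed; the 2x2 depth grid and 4-pixel-wide colour buffer below are made-up stand-ins for 512x424 and 1920x1080):

```java
// Toy illustration of the indexing in the loop above: mapDCT holds one
// interleaved (x, y) pair per depth pixel, and the colour buffer is
// row-major, so (x, y) maps to index y * colorWidth + x.
public class MapIndexDemo {
    public static void main(String[] args) {
        int depthW = 2, depthH = 2;           // stand-in for 512 x 424
        int colorW = 4;                       // stand-in for 1920
        // one interleaved (x, y) pair per depth pixel
        float[] mapDCT = {0f, 0f,  3f, 0f,  1f, 1f,  2f, 2f};
        int[] colorRaw = new int[colorW * 3]; // made-up 4 x 3 colour buffer
        for (int i = 0; i < colorRaw.length; i++) colorRaw[i] = i;

        for (int count = 0; count < depthW * depthH; count++) {
            int x = (int) mapDCT[count * 2];      // i + 0 -> x in colour space
            int y = (int) mapDCT[count * 2 + 1];  // i + 1 -> y in colour space
            int colorIndex = y * colorW + x;      // row-major lookup
            System.out.println("depth pixel " + count + " -> colour index " + colorIndex);
        }
    }
}
```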

Answers

  • By slightly changing the example I get the depth mapped to RGB space. Unfortunately, there is a strange duplication appearing: notice how the TV appears twice in an overlap (bottom-left image).

    Any ideas how to fix this?

    /*
    Thomas Sanchez Lengeling.
    http://codigogenerativo.com/
    
    KinectPV2, Kinect for Windows v2 library for processing
    
    Color to depth Example,
    Color Frame is aligned to the depth frame
    */
    
    import KinectPV2.*;
    
    KinectPV2 kinect;
    
    int [] depthZero;
    
    //BUFFER ARRAY TO CLEAR THE PIXELS
    PImage depthToColorImg;
    
    void setup() {
      size(1024, 848, P3D);
    
      depthToColorImg = createImage(512, 424, PImage.RGB);
      depthZero    = new int[ KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth];
    
      //SET THE ARRAY TO 0s
      for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
        for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
          depthZero[424*i + j] = 0;
        }
      }
    
      kinect = new KinectPV2(this);
      kinect.enableDepthImg(true);
      kinect.enableColorImg(true);
      kinect.enablePointCloud(true);
    
      kinect.init();
    }
    
    void draw() {
      background(0);
    
      float [] mapDCT = kinect.getMapDepthToColor(); // 434176
      print(mapDCT.length, TAB);
    
      //get the raw data from depth and color
      int [] colorRaw = kinect.getRawColor(); // 2,073,600
      println(mapDCT.length, KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth);
    
      int [] depthRaw = kinect.getRawDepthData(); // 217,088
    
      //clear the pixels
      PApplet.arrayCopy(depthZero, depthToColorImg.pixels);
    
      int count = 0;
      depthToColorImg.loadPixels();
      for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
        for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
    
          //incoming pixels 512 x 424 with position in 1920 x 1080
          float valX = mapDCT[count * 2 + 0];
          float valY = mapDCT[count * 2 + 1];
    
    
          //maps the pixels to 512 x 424, not necessary but looks better
          int valXDepth = (int)((valX/1920.0) * 512.0);
          int valYDepth = (int)((valY/1080.0) * 424.0);
    
          int  valXColor = (int)(valX);
          int  valYColor = (int)(valY);
    
          if ( valXDepth >= 0 && valXDepth < 512 && valYDepth >= 0 && valYDepth < 424 &&
            valXColor >= 0 && valXColor < 1920 && valYColor >= 0 && valYColor < 1080) {
            //color colorPixel = colorRaw[valYColor * 1920 + valXColor];
            float col = map(depthRaw[valYDepth * 512 + valXDepth], 0, 4500, 0, 255);
            color colorPixel = color(col);
            depthToColorImg.pixels[valYDepth * 512 + valXDepth] = colorPixel;
          } 
          count++;
        }
      }
      depthToColorImg.updatePixels();
    
      image(depthToColorImg, 0, 424);
      image(kinect.getColorImage(), 0, 0, 512, 424);
      image(kinect.getDepthImage(), 512, 0);
    
      text("fps: "+frameRate, 50, 50);
    }
    
  • One can see the aspect ratio of the TVs is not the same. Are you making sure you are dealing with the same aspect ratio throughout your processing?

    What are the values of KinectPV2.WIDTHDepth and KinectPV2.HEIGHTDepth? Do they match the image dimension?

    Lines 27 to 31 can be changed to

     for (int i = 0; i < KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth; i++)
       depthZero[i] = 0;
    

    Notice there are no hard-coded values there.
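    For what it's worth, in plain Java the same zero-fill can also be done with java.util.Arrays.fill, which sidesteps the index arithmetic entirely (and note a freshly allocated int[] is already zeroed in Java, so the explicit fill only matters when a buffer is being reused):

    ```java
    import java.util.Arrays;

    public class FillDemo {
        public static void main(String[] args) {
            int w = 512, h = 424;            // KinectPV2 depth dimensions
            int[] depthZero = new int[w * h];
            Arrays.fill(depthZero, 0);       // clear the whole buffer in one call
            System.out.println(depthZero[w * h - 1]); // prints 0
        }
    }
    ```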

    Now, I am not an expert in Kinect and not familiar with the different values you are outputting there. However, I see the following discrepancy:

    Your depthToColorImg is 512 x 424, or a size of 217,088.

    What you get from the Kinect is 434,176, which is twice the value above. It must be because of the x/y data pair values. But then in line 67 you normalize it to your screen resolution. Why? From what I see, the object mapDCT is made of 2*(512x424) values (so are depthRaw and colorRaw, but that is not relevant here).

    Do you have a link to the documentation of your library? You should post it. Also cross-link to any previous references: https://forum.processing.org/two/discussion/21624/combine-rgb-data-with-body-track-to-only-show-rgb-of-bodies-kinectv2-kinectpv2-library#latest

    Kf

  • edited April 2017

    Thanks for your reply.

    Here's another picture to further illustrate the issue. When the color of the pixel is set from rawColor[] the result is as expected (see the bottom-right picture), but when I assign the color from rawDepth[] the duplication effect appears (bottom-left image).

    Now the only difference between these two is their dimensions: rawDepth[] is 512 x 424, or a size of 217,088; rawColor[] is 1920 x 1080, or a size of 2,073,600.

    I can't seem to understand where in the code this difference is causing an issue!

    You did guess right: kinect.getMapDepthToColor() returns a length of 434,176, or 2*(512x424), where I assume "i+0" is the "x" and "i+1" is the "y" coordinate of the corresponding depth pixel.

    To further cross link the issue was also posted on the library git: https://github.com/ThomasLengeling/KinectPV2/issues/61

    The code from the image above:

    /*
    Thomas Sanchez Lengeling.
    http://codigogenerativo.com/
    
    KinectPV2, Kinect for Windows v2 library for processing
    
    Color to depth Example,
    Color Frame is aligned to the depth frame
    */
    
    import KinectPV2.*;
    
    KinectPV2 kinect;
    
    int [] depthZero;
    
    //BUFFER ARRAY TO CLEAR THE PIXELS
    PImage depthToColorImg;
    PImage depthToRGB;
    
    void setup() {
      size(1024, 848, P3D);
    
      depthToColorImg = createImage(512, 424, PImage.RGB);
      depthToRGB = createImage(512, 424, PImage.RGB);
      depthZero    = new int[ KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth];
    
      //SET THE ARRAY TO 0s
      for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
        for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
          depthZero[424*i + j] = 0;
        }
      }
    
      kinect = new KinectPV2(this);
      kinect.enableDepthImg(true);
      kinect.enableColorImg(true);
      kinect.enablePointCloud(true);
    
      kinect.init();
    }
    
    void draw() {
      background(0);
    
      float [] mapDCT = kinect.getMapDepthToColor(); // 434,176
      print(mapDCT.length, TAB);
    
      //get the raw data from depth and color
      int [] colorRaw = kinect.getRawColor(); // 2,073,600
      println(mapDCT.length, KinectPV2.WIDTHDepth, KinectPV2.HEIGHTDepth);
    
      int [] depthRaw = kinect.getRawDepthData(); // 217,088
    
      //clear the pixels of both output buffers
      PApplet.arrayCopy(depthZero, depthToColorImg.pixels);
      PApplet.arrayCopy(depthZero, depthToRGB.pixels);
    
      int count = 0;
      depthToColorImg.loadPixels();
      depthToRGB.loadPixels();
    
      for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
        for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
    
          //incoming pixels 512 x 424 with position in 1920 x 1080
          float valX = mapDCT[count * 2 + 0];
          float valY = mapDCT[count * 2 + 1];
    
    
          //maps the pixels to 512 x 424, not necessary but looks better
          int valXDepth = (int)((valX/1920.0) * 512.0);
          int valYDepth = (int)((valY/1080.0) * 424.0);
    
          int  valXColor = (int)(valX);
          int  valYColor = (int)(valY);
    
          if ( valXDepth >= 0 && valXDepth < 512 && valYDepth >= 0 && valYDepth < 424 &&
            valXColor >= 0 && valXColor < 1920 && valYColor >= 0 && valYColor < 1080) {
            color colorPixel = colorRaw[valYColor * 1920 + valXColor];
            depthToRGB.pixels[valYDepth * 512 + valXDepth] = colorPixel;
            float col = map(depthRaw[valYDepth * 512 + valXDepth], 0, 4500, 0, 255);
            colorPixel = color(col);
            depthToColorImg.pixels[valYDepth * 512 + valXDepth] = colorPixel;
    
          } 
          count++;
        }
      }
      depthToColorImg.updatePixels();
      depthToRGB.updatePixels();
    
      image(depthToColorImg, 0, 424);
      image(depthToRGB, 512, 424);
      image(kinect.getColorImage(), 0, 0, 512, 424);
      image(kinect.getDepthImage(), 512, 0);
    
      text("fps: "+frameRate, 50, 50);
    }
    
  • edited April 2017

    Ah, rawColor[] and rawDepth[] are, as you pointed out, at different scales. But then how can I retrieve the depth for each RGB pixel? The result seems very close.

  • Can you say more about how rawColor[], rawDepth[] and mapDCT[] are related to each other?

    Kf

  • edited April 2017

    You did guess right: kinect.getMapDepthToColor() returns a length of 434,176 or 2*(512x424) which I assume "i+0" is the "x" and "i+1" is the "y" coordinate of the corresponding depth pixel.

    I believe the statement above is wrong: kinect.getMapDepthToColor() actually returns the position of the color pixel mapped to the space and scale of the depth image.

    Therefore, what I'm trying to do is actually the opposite: to get the depth value mapped to the space and scale of the color image.

    Can you think of a way to reverse the logic of kinect.getMapDepthToColor() to achieve this?

  • edited April 2017 Answer ✓

    Here's the solution I've come up with; if someone comes up with something more elegant then please share.

    import KinectPV2.*;
    
    KinectPV2 kinect;
    
    ArrayList<Pair> sorted = new ArrayList<Pair>();
    
    void setup() {
      size(1920, 1080);
    
      frameRate(60);
    
      kinect = new KinectPV2(this);
      kinect.enableDepthImg(true);
      kinect.enableColorImg(true);
      kinect.enablePointCloud(true); // getMapDepthToColor() doesn't work without enabling point cloud
      kinect.init();
    }
    
    void draw() {
    
      PImage imgDepth = kinect.getDepthImage(); 
      PImage imgColor = kinect.getColorImage();
    
      sorted.clear(); // We clear the array ready for a new frame
    
      float [] mapDCT = kinect.getMapDepthToColor(); // 434,176
      int [] rawDepth = kinect.getRawDepthData();
    
      for (int i = 0; i < mapDCT.length; i+=2) { // We iterate in increments of 2 as i+0 is X and i+1 is Y
    
        if (mapDCT[i] > 0 && mapDCT[i] < width &&
          mapDCT[i+1] > 0 && mapDCT[i+1] < height && 
          rawDepth[i/2] != 0) {
          sorted.add( new Pair(i/2, mapDCT[i], mapDCT[i+1]));
        }
      }
    
      PImage img = createImage(width, height, RGB);
    
      img.loadPixels();
    
      for (int i = 0; i < sorted.size(); i++) {
        img.set((int)sorted.get(i).x, (int)sorted.get(i).y, color((int)map(rawDepth[sorted.get(i).index], 0, 4500, 0, 255)));
      }
    
      img.updatePixels();
      //tint(255, 127);
      image(img, 0, 0);
    
      fill(255);
      text(frameRate, 50, 50);
    }
    
    public class Pair{
      public final int index;
      public final float x;
      public final float y;
      public final float value;
    
      public Pair(int index, float x, float y) {
        this.index = index;
        this.value = y * width + x;
        this.x = x;
        this.y = y;
      }
    }
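    The heart of this solution, walking the interleaved map and scattering each depth sample to its colour-space coordinate, can be exercised in plain Java with synthetic data (the 4x2 "colour" grid, map values and depth readings below are invented purely for illustration):

    ```java
    // Scatter depth samples into a colour-sized buffer via an interleaved
    // (x, y) map, mirroring the loop in the sketch above on toy data.
    public class ScatterDemo {
        public static void main(String[] args) {
            int colorW = 4, colorH = 2;          // stand-in for 1920 x 1080
            // two depth samples with interleaved (x, y) colour-space positions
            float[] mapDCT = {1f, 0f,  2f, 1f};
            int[] rawDepth = {1500, 3000};       // depth in mm, 0 = no reading
            int[] depthInColorSpace = new int[colorW * colorH];

            for (int i = 0; i < mapDCT.length; i += 2) {
                int x = (int) mapDCT[i];
                int y = (int) mapDCT[i + 1];
                // clip out-of-range positions and skip empty depth readings
                if (x >= 0 && x < colorW && y >= 0 && y < colorH
                        && rawDepth[i / 2] != 0) {
                    depthInColorSpace[y * colorW + x] = rawDepth[i / 2];
                }
            }
            // depthInColorSpace[1] == 1500, depthInColorSpace[6] == 3000
            System.out.println(depthInColorSpace[1] + " " + depthInColorSpace[6]);
        }
    }
    ```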
    
  • @CharlesDesign -- thank you for sharing your solution with the forum!

  • Two possible ways to slightly improve readability:

    for (Pair pr : sorted) {
      img.set((int)pr.x, (int)pr.y, color((int)map(rawDepth[pr.index], 0, 4500, 0, 255)));
    }
    

    Or, since you are using loadPixels, then this next way is more efficient:

      for (Pair pr : sorted) {
        int loc = (int)pr.y * width + (int)pr.x;
        img.pixels[loc] = color((int)map(rawDepth[pr.index], 0, 4500, 0, 255));
      }
    

    Both snippets are untested, as I don't have a unit myself to try this on. Also, I suggest moving line 38 to setup() and making sure img gets global scope.

    In the above code I am assuming sorted contains a representation of every pixel in the image. If this is not true, then do not move line 38 to setup(), as you need to re-init all the pixels in the img object this way.

    Could you comment on the mapDCT object? Why do you get two values per pair, referring to line 34? I am keen to know... Also, from your code it seems that the depth data and the mapDCT data are aligned at the 0,0 corner. Is the reason for your conditional in line 31 to clip the mapDCT image, constraining it to the width and height of your sketch?

    So, based on your previous post, does your code work as you wanted?

    Kf

  • edited April 2017

    @jeremydouglass You're welcome!

    @kfrajer Thanks for the suggestion; I'll try it out tomorrow, but at first glance it looks much better.

    • Line 34: As discussed before, mapDCT() returns 434,176 values, or 2*(512x424), i+0 being the x and i+1 being the y position of the color pixel in the 1920x1080 space.

    • Line 31: I don't understand why, but mapDCT sometimes returns values outside of its expected range (0-1920 & 0-1080); I've just decided to ignore those. The width & height should really be hard-coded as 1920 and 1080!

    • It does what I initially set out to do: get the depth for each of the 2,073,600 colour pixels by storing the matching depth index with the XY position of the colour pixel. This way it is also possible to get the infrared and user-mask pixel for every colour pixel.
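    As a side note, the grey value used throughout this thread comes from linearly remapping the raw depth (millimetres, with roughly 4500 mm as the Kinect v2's practical maximum) into 0-255, i.e. Processing's map(d, 0, 4500, 0, 255). The same arithmetic in plain Java:

    ```java
    public class DepthToGrey {
        // Same linear remap as Processing's map(value, 0, 4500, 0, 255)
        static int depthToGrey(int depthMm) {
            return (int) (depthMm / 4500.0f * 255.0f);
        }

        public static void main(String[] args) {
            System.out.println(depthToGrey(0));     // 0   (closest)
            System.out.println(depthToGrey(2250));  // 127 (mid-range)
            System.out.println(depthToGrey(4500));  // 255 (farthest)
        }
    }
    ```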

  • Thanks for your comments. It is too bad there is not enough documentation for this library. Does the library come with any examples at all? Examples are my second reference source when documentation is scarce or not available.

    It also seems to be a nice piece of hardware to learn to program for. It has the potential to create high-fidelity interactive sessions.

    Kf

  • The library is by far the best for Processing; speed is much better than OpenKinect. There are around 10 examples, without which the library would be unusable. There is a javadoc, but it only covers face detection.
