Changing opacity of silhouettes

edited January 2018 in Kinect

Hi! Using Daniel Shiffman's MinMaxThreshold tutorial, I was able to change the colour from red to blue to green based on a person's distance to the Kinect. I would like to make a wall where, when 2 people walk past each other, their silhouette colours mix. I tried to play with opacity over a background image, but it wouldn't mix 2 different silhouettes detected by the Kinect. Should I use blob detection to get the Kinect to detect multiple people, and how would I do this? I am using a Kinect2 with Processing3, and it seems like SimpleOpenNI doesn't work with the Kinect2? Thanks!

Here's the code:

import org.openkinect.processing.*;

// Kinect Library object
Kinect2 kinect2;

//float minThresh = 480;
//float maxThresh = 830;
PImage kin;
PImage bg;

void setup() {
  size(512, 424, P3D);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  kin = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
  bg = loadImage("1219690.jpg");
}


void draw() {
  background(0);

  //loadPixels(); 
  tint(255,254);
  image(bg,0,0);
  kin.loadPixels();

  //minThresh = map(mouseX, 0, width, 0, 4500);
  //maxThresh = map(mouseY, 0, height, 0, 4500);


  // Get the raw depth as array of integers
  int[] depth = kinect2.getRawDepth();

  //float sumX = 0;
  //float sumY = 0;
  //float totalPixels = 0;

  for (int x = 0; x < kinect2.depthWidth; x++) {
    for (int y = 0; y < kinect2.depthHeight; y++) {
      int offset = x + y * kinect2.depthWidth;
      int d = depth[offset];
      //println(d);
      //delay(10);

      if (d < 500) {
        kin.pixels[offset] = color(255, 0, 0);

        //sumX += x;
        //sumY += y;
        //totalPixels++;

      } else if (d > 500 && d<1000){
        kin.pixels[offset] = color(0,255,0);
      }  else if (d >1000 && d<1500){
        kin.pixels[offset] = color(0,0,255);
      } else {
        kin.pixels[offset] = color(0);
      }
    }
  }

  kin.updatePixels();
  tint(255, 127); // draw the depth image semi-transparent so the background shows through
  image(kin, 0, 0);

  //float avgX = sumX / totalPixels;
  //float avgY = sumY / totalPixels;
  //fill(150,0,255);
  //ellipse(avgX, avgY, 64, 64);

  //fill(255);
  //textSize(32);
  //text(minThresh + " " + maxThresh, 10, 64);
}

[Screenshot: Screen Shot 2018-01-14 at 11.56.45 AM]

Answers

  • edited February 2018 Answer ✓

    @Betzilla_ -- Nice question.

    Frame-for-frame, this:

    [Image: PAFF_071916_diversityscienceCPS_newsfeature -- two overlapping silhouettes]

    ... is impossible for the Kinect, because it doesn't have x-ray vision. It can't see through the first person, so it has no idea where the second silhouette is, and thus can't show their overlap when person A is completely behind person B.

    If you want each person to act as a paintbrush, however, and mix the paints, then that is easier -- especially if they walk across at different times.

    • Draw silhouettes from person A onto PGraphics A
    • Draw silhouettes from person B onto PGraphics B
    • Draw PGraphics A and B onto the screen with e.g. tint() or blendMode(), mixing them as two colors.

    If they walk across the screen at the same time and you are attempting to assign each a color, you either need:

    1. good motion tracking with smart continuity for two different objects
    2. skeletons and smart continuity
    3. a rule that they have to walk at different depths on the stage, so that anything in one depth band or the other is colored accordingly, drawn onto PGraphics A or B (depending on depth), then mixed.

    Notice that last method doesn't really care if it is one person, or 3, or a herd of horses. Everything seen closer than x is red, everything further is blue, and they are added up and mixed (with two PGraphics, and tint or blendMode) to make some areas (actually, most areas) that are purple.
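
    To see the mixing arithmetic concretely: drawing two translucent buffers on top of each other is just repeated "source over" compositing per pixel. Here is a minimal plain-Java sketch of that arithmetic (hypothetical helper names, not Processing API) showing why a half-transparent red layer plus a half-transparent blue layer over black comes out purple:

```java
// Hypothetical plain-Java sketch (not Processing API) of "source over"
// compositing: drawing a translucent source color onto a destination pixel.
public class MixDemo {
    // Pack 8-bit channels into 0xRRGGBB, like Processing's color().
    static int rgb(int r, int g, int b) { return (r << 16) | (g << 8) | b; }
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    // Composite src over dst with source alpha a (0..255).
    static int over(int src, int a, int dst) {
        int r = (red(src)   * a + red(dst)   * (255 - a)) / 255;
        int g = (green(src) * a + green(dst) * (255 - a)) / 255;
        int b = (blue(src)  * a + blue(dst)  * (255 - a)) / 255;
        return rgb(r, g, b);
    }

    public static void main(String[] args) {
        int canvas = rgb(0, 0, 0);                   // black background
        canvas = over(rgb(255, 0, 0), 128, canvas);  // buffer A: translucent red
        canvas = over(rgb(0, 0, 255), 128, canvas);  // buffer B: translucent blue
        // Where both buffers cover the pixel, the result is purple:
        System.out.println(red(canvas) + "," + green(canvas) + "," + blue(canvas)); // 63,0,128
    }
}
```

    With a tint like tint(255, 0, 0, 128) on one buffer and tint(0, 0, 255, 128) on the other, Processing performs essentially this math per pixel when each buffer is drawn.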

  • edited January 2018

    Ugh, so sad: the picture at the beginning is exactly what I'm trying to get, haha. I kind of had a feeling it was impossible.

    Since they would bump into each other if they walked at the same depth, at the moment it works with them at different depths, as in my question: people who are closer are red, further is green, and furthest is blue. How would I add them up and mix them with two PGraphics and tint or blendMode if, say, the mixed color is purple?

    Alternatively, the paintbrush method sounds like it could work too. Could you show me how to use PGraphics with my code above? I'm completely new.

    Thank you Jeremy.

  • edited January 2018

    Here is a quick example of using PGraphics and tint. It draws white pixels onto two separate image buffers -- onto pgA when the mouse is down, and onto pgB when it is up -- and then uses tint before rendering each buffer onto the main sketch canvas. Tinting them transparent red and blue creates purple areas where they overlap.

    // 2018-01-15 Processing 3.3.6
    // forum.processing.org/two/discussion/25959/changing-opacity-of-silhouettes
    
    PImage bg;
    PGraphics pgA;
    PGraphics pgB;
    int BRUSH = 50;
    
    void setup() {
      size(400, 400);
      ellipseMode(CENTER);
      bg = loadImage("https://processing.org/img/processing3-logo.png");
      bg.resize(400,400);
      pgA = createGraphics(width, height);
      pgB = createGraphics(width, height);
    
    }
    
    void draw() {
      background(0);
      image(bg, 0, 0);
      if (mousePressed) {
        pgA.beginDraw();
        pgA.noStroke();
        pgA.ellipse(mouseX, mouseY, BRUSH, BRUSH);
        pgA.endDraw();
      } else {
        pgB.beginDraw();
        pgB.noStroke();
        pgB.ellipse(mouseX, mouseY, BRUSH, BRUSH);
        pgB.endDraw();
      }
      pushStyle();
      tint(255, 64, 64, 128); // tint red transparent
      image(pgA, 0, 0);
      tint(64, 64, 255, 128); // tint blue transparent
      image(pgB, 0, 0);
      popStyle();
    }
    

    [Screenshot: OverlappingBrushes--screenshot]

  • edited January 2018

    So I was able to change the tints. The logic is that if I can make the background of each PImage transparent, then we should be able to see the layer below, but it isn't working; refer to the code below.

  • edited January 2018

    So like here, I'm showing the Kinect image twice, as img and img2. But it fails to display the 2 images over each other like the PGraphics example does.

    import org.openkinect.processing.*;
    Kinect2 kinect2;
    
    PImage img;
    PImage img2;
    
    void setup() {
      size(512, 424, P3D);
      kinect2 = new Kinect2(this);
      kinect2.initDepth();
      kinect2.initDevice();
      img = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
      img2 = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
    }
    
    
    void draw() {
      background(0);
    
      img.loadPixels();
      img2.loadPixels();
    
    
    
      int[] depth = kinect2.getRawDepth();
    
    
      for (int x = 0; x < kinect2.depthWidth; x++) {
        for (int y = 0; y < kinect2.depthHeight; y++) {
          int offset = x + y * kinect2.depthWidth; //offset is translating grid location to array
          int d = depth[offset];
    
          if (d < 500) {
    
            img.pixels[offset] = color(255, 0, 0);
            img2.pixels[offset] = color(255, 0, 0);
          } else if (d > 500 && d<1000) {
            img.pixels[offset] = color(0, 255, 0);
            img2.pixels[offset] = color(0, 255, 0);
          } else if (d >1000 && d<1500) {
            img.pixels[offset] = color(0, 0, 255);
            img2.pixels[offset] = color(0, 0, 255);
          } else {
            img.pixels[offset] = color(0);
            img2.pixels[offset] = color(0);
          }
        }
      }
    
      img.updatePixels();
      img2.updatePixels();
    
      image(img, 0, 0);
      image(img2,0,0);
    }
    
  • Can you label what lines in your code, specifically, are supposed to involve transparency?

  • This demo shows overlapping figures... hmm... ok... I don't have a Kinect, so I used target 1 as my mouse pointer and target 2 as a static one at the bottom right corner. You see target 1 always. To see target 2 overlapped with target 1, keep the mouse button down. Notice that blending is an expensive operation, as I can feel my PC fans ramping up when blending is being used. Notice also that it performs much better in P2D; it lags in JAVA2D and is worse in FX2D.

    Kf

    PImage img;
    PImage img2;
    int max_dist;
    PVector target2;
    boolean blendNow=false;
    
    void setup() {
      size(512, 424,P2D);
      img = createImage(width, height, ARGB);
      img2 = createImage(width, height, ARGB);
    
      max_dist=int(sqrt(width*width+height*height));
      target2=new PVector(width*0.75, height*0.90);
    
      //noLoop();
    }
    
    
    void draw() {
      background(0);
    
      img.loadPixels();
      img2.loadPixels();
    
    
      for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
          int offset = x + y * width; //offset is translating grid location to array
    
          //TARGET #1
          int cdist = int(dist(mouseX, mouseY, x, y)); 
          int d=int(map(cdist, 0, max_dist, 0, 2000));
    
          img.pixels[offset]=paintImage(d, color(255, 0, 0, 100), color(0, 255, 0, 100), color(0, 0, 255, 100), color(0, 100));
    
          //TARGET #2
          cdist = int(dist(target2.x, target2.y, x, y)); 
          d=int(map(cdist, 0, max_dist, 0, 2000));
    
          img2.pixels[offset]=paintImage(d, color(0, 0, 255, 100), color(255, 150, 0, 100), color(255, 0, 0, 100), color(90, 100));
        }
      }

      img.updatePixels();
      img2.updatePixels();

      // https://processing.org/reference/PImage_blend_.html
      if (blendNow)
        img.blend(img2, 0, 0, width, height, 0, 0, width, height, LIGHTEST);
      image(img, 0, 0);

      //println("DONE", frameCount);
    }
    
    color paintImage(int d, color c1, color c2, color c3, color cDefault) {
      color retCol=cDefault;
    
      if (d < 500) {
        retCol = c1;
      } else if (d > 500 && d<1000) {
        retCol = c2;
      } else if (d >1000 && d<1500) {
        retCol = c3;
      } 
    
      return retCol;
    }
    
    
    void mousePressed() {
    
      blendNow=true;
    }
    
    void mouseReleased() {
    
      blendNow=false;
    }
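
    For reference, the LIGHTEST mode used in img.blend() above keeps, for each channel, the brighter of the two input values; that is why a red region overlapping a blue region reads as magenta. A plain-Java sketch of that per-channel rule (hypothetical helper, not the Processing implementation):

```java
// Hypothetical illustration of the per-channel rule behind Processing's
// LIGHTEST blend mode: each result channel is the max of the two inputs.
public class LightestDemo {
    static int channel(int c, int shift) { return (c >> shift) & 0xFF; }

    // Per-channel maximum of two packed 0xRRGGBB colors.
    static int lightest(int a, int b) {
        int r  = Math.max(channel(a, 16), channel(b, 16));
        int g  = Math.max(channel(a, 8),  channel(b, 8));
        int bl = Math.max(channel(a, 0),  channel(b, 0));
        return (r << 16) | (g << 8) | bl;
    }

    public static void main(String[] args) {
        // Red over blue: both bright channels survive, giving magenta.
        System.out.printf("%06X%n", lightest(0xFF0000, 0x0000FF)); // FF00FF
    }
}
```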
    
  • @jeremydouglass I added the tint and blend and tried changing the alpha attribute, and none of it worked:

    import org.openkinect.processing.*;
    Kinect2 kinect2;
    
    PImage img;
    PImage img2;
    
    void setup() {
      size(512, 424, P3D);
      kinect2 = new Kinect2(this);
      kinect2.initDepth();
      kinect2.initDevice();
      img = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
      img2 = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
    }
    
    void draw() {
      background(0);
    
      img.loadPixels();
      img2.loadPixels();
    
      int[] depth = kinect2.getRawDepth();
      for (int x = 0; x < kinect2.depthWidth; x++) {
        for (int y = 0; y < kinect2.depthHeight; y++) {
          int offset = x + y * kinect2.depthWidth; //offset is translating grid location to array
          int d = depth[offset];
    
          if (d < 500) {
            tint(255, 0, 0, 63); // <-- added tint here
            img.pixels[offset] = color(255, 0, 0);
            img2.pixels[offset] = color(255, 0, 0);
          } else if (d > 500 && d<1000) {
            img.pixels[offset] = color(0, 255, 0);
            img2.pixels[offset] = color(0, 255, 0);
          } else if (d >1000 && d<1500) {
    
            img.pixels[offset] = color(0, 0, 255);
            img2.pixels[offset] = color(0, 0, 255);
          } else {
    
            img.pixels[offset] = color(0);
            img2.pixels[offset] = color(0);
          }
        }
      }
    
      img.updatePixels();
      img2.updatePixels();
    
      image(img, 0, 0);
      image(img2,0,0);
    
      blend(img, 0, 0, 514, 424, 0, 0, 514, 424, MULTIPLY); // <-- added blend here
    }
    

    @kfrajer I tried before with PGraphics to blend a moving ellipse at mouseX and mouseY with another ellipse placed randomly, and that worked well, so I tried to apply the same method here, and now it's not working. Perhaps it's because the Kinect can only act as one input.

  • You are right. AFAIK you are only using one external graphics buffer input, which is the one provided by the Kinect in your case. You would need to identify your subjects in the scene and draw them, and when they overlap the colors would mix... ok, now I realize this is not going to work. Using depth, you will be able to tell the two figures apart and draw them separately while they are apart. When the two figures overlap (as you know, and as jeremydouglass described in detail before), you won't be able to track both figures, only the one closer to the Kinect sensor.

    You could use blob detection to track the figures, but you will still have the problem of distinguishing them when they overlap. You will be able to see only one in the overlapping state, so you won't be able to blend the colors.

    In your last post, you don't need two PImage objects; one is enough. However, as previously described, you can paint the two silhouettes only while they are apart. jeremydouglass's demo works because he draws one trace first and a second trace later, and then tints each trace with a different call to tint(). He knows which one is blue and which one is red because he has data for each trace independently of the other (similar to my demo). You need to think about what you want to do and whether it is possible to accomplish at all with this technology and your current setup.
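
    One way to set up that "independent data" without skeletons is the depth-band rule from earlier in the thread: classify each raw depth sample into a band, and write each band to its own buffer before tinting. A small plain-Java sketch of that classification (hypothetical function; thresholds taken from the sketches above, with 0 treated as "no reading" since the Kinect reports 0 where it has no depth data):

```java
// Hypothetical band classifier: maps a raw depth value (mm) to a band index
// so near/middle/far silhouettes can be routed to separate buffers.
public class DepthBands {
    static int band(int d) {
        if (d <= 0)   return -1; // 0 = no depth reading: ignore
        if (d < 500)  return 0;  // near   (red in the sketches above)
        if (d < 1000) return 1;  // middle (green)
        if (d < 1500) return 2;  // far    (blue)
        return -1;               // beyond range: ignore
    }

    public static void main(String[] args) {
        System.out.println(band(450) + " " + band(750) + " " + band(1200) + " " + band(2000)); // 0 1 2 -1
    }
}
```

    Unlike the `d > 500` tests in the sketches above, this closes the gap at exactly 500 and 1000, and it no longer paints invalid (0) pixels into the near band.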

    Kf

  • Exactly -- if two people walk across the stage at different times this would work -- you have complete information on each without needing "x-ray vision."

    A complex workaround might be to use two or more depth cameras at different angles to deal with occlusion.

    However, this kind of setup could get tricky depending on the depth sensor type(s); you don't want multiple dot projectors interfering with each other.

  • Thank you guys for being so helpful. I decided to try using my laptop camera as a second camera, so I can install the two separately. Then I played with filter() to get some cool effects. What do you guys think? The whole idea is to create a place where people walk by and see others through the mixing of their silhouettes. [Screenshot: Screen Shot 2018-01-23 at 1.53.40 PM]

    import org.openkinect.processing.*;
    import processing.video.*;
    
    Kinect2 kinect2;
    Capture video;
    
    PImage img;
    
    void setup() {
      size(512, 424, P3D);
      kinect2 = new Kinect2(this);
      kinect2.initDepth();
      kinect2.initDevice();
      img = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
    
      video = new Capture(this, 512, 424);
      video.start();
    }
    
    void captureEvent(Capture video) {
      // Step 4. Read the image from the camera.
      video.read();
    }
    
    
    void draw() {
      background(0);
      img.loadPixels();
      noTint();
      int[] depth = kinect2.getRawDepth();
      for (int x = 0; x < kinect2.depthWidth; x++) {
        for (int y = 0; y < kinect2.depthHeight; y++) {
          int offset = x + y * kinect2.depthWidth; //offset is translating grid location to array
          int d = depth[offset];
    
           if (d < 500) {
            img.pixels[offset] = color(255, 0, 0);
          } else if (d > 500 && d<1000) {
            img.pixels[offset] = color(0, 255, 0);
          } else if (d >1000 && d<1500) {
            img.pixels[offset] = color(0, 0, 255);
          } else {
            img.pixels[offset] = color(0);
          }
        }
    
        //  if (d < 500) {
        //    tint(255, 0, 0, 63);
        //    img.pixels[offset] = color(255, 0, 0);
        //  } else if (d > 500 && d<1000) {
    
        //    img.pixels[offset] = color(0, 255, 0);
        //  } else if (d >1000 && d<1500) {
    
        //    img.pixels[offset] = color(0, 0, 255);
        //  } else {
    
        //    img.pixels[offset] = color(0);
        //  }
        //}
      }
    
      img.updatePixels();
    
      image(img, 0, 0);
    
      tint(255, 0, 0, 127);
      image(video, 0, 0);
      //filter(POSTERIZE, 4);
      filter(INVERT);
    }
    
  • Fun! Glad that after shifting the approach you are getting good results that are related to the visual effect you were trying to achieve.

  • Now I just gotta mix the colors more to make it look like they are mixing, but 2 people walking in front of the Kinect still won't mix their silhouette colors.

  • edited January 2018

    Well, there is another approach if you really want that effect, really only have one camera, and don't care too much about the physical interface.

    1. Split the camera field into two halves
    2. Draw those two halves of the output on top of each other, as per my demo sketch.

    Now people can stand next to each other, the camera can see each silhouette, and the output shows the two silhouettes overlapping.
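
    The index arithmetic for that overlay is just a modulo: a pixel at (x, y) in the right half of a w-wide frame lands on the same output offset as the pixel at (x - w/2, y) in the left half. A plain-Java sketch (hypothetical helper, not part of the Kinect library):

```java
// Hypothetical mapping for overlaying the two halves of a w-wide frame:
// both halves collapse onto a half-width output buffer.
public class SplitOverlay {
    // Offset into a (w/2)-wide output buffer for full-frame pixel (x, y).
    static int outOffset(int x, int y, int w) {
        int half = w / 2;
        return (x % half) + y * half;
    }

    public static void main(String[] args) {
        int w = 512;
        // A left-half pixel and its right-half counterpart share one offset:
        System.out.println(outOffset(10, 3, w) == outOffset(10 + w / 2, 3, w)); // true
    }
}
```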

  • Hmm I wanted them to overlap when they cross in real physical space but what you suggested could work if the idea changes a little

  • @Betzilla_ -- re: "when they cross in real physical space"

    Well, see my first comment about x-ray vision. If the "crossing" you are talking about is from the point of view of a single camera, you can't have both a conventional camera with a single line of sight and real overlapping, because: physics. Keep in mind that, for an audience or for the participants, since they aren't standing where the camera is (the camera is there!), they would each experience this "crossing" differently.

    If you conceptualize "crossing" from a particular fixed point of view, then you need multiple cameras (for example, one camera in front and one camera behind) or a different sensing technique (for example, non-line-of-sight sensors).

    If you can't do multiple cameras, then you need to do "crossing" differently -- such as with a top-down camera that tries to reconstruct silhouettes seen at an extreme angle, or a single camera that superimposes two views into one.

    The alternative would be to interpolate what is occluded -- for example, model a skeleton, hang a silhouette on it, and then guess where the parts of the silhouette you can't see are. The Kinect can already do this to an extent, but this kind of approach gets really, really hard.

  • So what I'm thinking, given the limits I have now, is that since I've got the 2 cameras, each could be positioned at a different location and each on a different "layer" of the projected display, so they "overlap", creating a sort of silhouette-overlapping effect. I'm working on another project at the moment but will be back on this again soon, and I'll post updates. Thanks Jeremy!

  • Do you know how I can split the kinect camera field into 2 halves?

  • Well, in your code you loop through the pixels here:

    for (int x = 0; x < kinect2.depthWidth; x++) {
        for (int y = 0; y < kinect2.depthHeight; y++) {
    

    ...you could just loop through them like this to do the left half:

    for (int x = 0; x < kinect2.depthWidth/2; x++) {
        for (int y = 0; y < kinect2.depthHeight; y++) {
    

    ...and then do the right half separately. Or the top and bottom -- just loop over different ranges.
