<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
   <title>Tagged with getrawdepth() - Processing 2.x and 3.x Forum</title>
   <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=getrawdepth%28%29</link>
   <pubDate>Sun, 08 Aug 2021 18:57:03 +0000</pubDate>
   <description>Tagged with getrawdepth() - Processing 2.x and 3.x Forum</description>
   <language>en-CA</language>
   <atom:link href="https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=getrawdepth%28%29" rel="self" type="application/rss+xml" />
   <item>
      <title>How can I get sound to fade in and out depending on your location?</title>
      <link>https://forum.processing.org/two/discussion/27718/how-can-i-get-sound-to-fade-in-and-out-depending-on-your-location</link>
      <pubDate>Sat, 07 Apr 2018 19:15:18 +0000</pubDate>
      <dc:creator>karinalopez87</dc:creator>
      <guid isPermaLink="false">27718@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I was able to make my sound play and stop with the Kinect, but it doesn't loop or fade out. I want the sound to keep playing with the video, fading in when the interaction is activated and fading out when the interaction stops. I also want the sound to loop.</p>

<pre><code>import processing.sound.*;
import org.openkinect.processing.*;
import processing.video.*;

Movie vid;
Movie vid1;
SoundFile sound1;
SoundFile sound2;
Kinect2 kinect2;

//PImage depthImg;
//PImage img1;

//pixel
int minDepth = 0;
int maxDepth = 4500; //4.5m

boolean off = false;

void setup() {
  size(1920, 1080);
  //fullScreen();
  vid = new Movie(this, "test_1.1.mp4");
  vid1 = new Movie(this, "test_1.1.mp4");
  sound1 = new SoundFile(this, "cosmos.mp3");
  sound2 = new SoundFile(this, "NosajThing_Distance.mp3");

  //MOVIE FILES
  //01.MOV
  //03.MOV
  //02.mov (File's too big)
  //Urban Streams.mp4
  //HiddenNumbers_KarinaLopez.mov
  //test_w-sound.mp4
  //test_1.1.mp4
  //test005.mov
  //SOUND FILES
  //cosmos.mp3
  //NosajThing_Distance.mp3

  vid.loop();
  vid1.loop();
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  //depthImg = new PImage(kinect2.depthWidth, kinect2.depthHeight);
  //img1 = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
}

void movieEvent(Movie vid) {
  vid.read();
  vid1.read();
}

void draw() {
  vid.loadPixels();
  vid1.loadPixels();

  //image(kinect2.getDepthImage(), 0, 0);

  int[] depth = kinect2.getRawDepth();

  float sumX = 0;
  float sumY = 0;
  float totalPixels = 0;

  for (int x = 0; x &lt; kinect2.depthWidth; x++) {
    for (int y = 0; y &lt; kinect2.depthHeight; y++) {
      int offset = x + y * kinect2.depthWidth;
      int d = depth[offset];

      if (d &gt; 0 &amp;&amp; d &lt; 1000) {
        //video.pixels[offset] = color(255, 100, 15);
        sumX += x;
        sumY += y;
        totalPixels++;
        brightness(0);
      } else {
        //video.pixels[offset] = color(150, 250, 180);
        brightness(255);
      }
    }
  }
  vid.updatePixels();
  vid1.updatePixels();

  float avgX = sumX / totalPixels;
  float avgY = sumY / totalPixels;

  //VID 01 - Screen 01
  if (avgX &gt; 300 &amp;&amp; avgX &lt; 500) {
    tint(255, (avgX) / 2);
    image(vid1, 1920/2, 0);
    if (sound2.isPlaying() == 0) {
      sound2.play(0.5);
      sound2.amp(0.5);
    }
  } else {
    tint(0, (avgX) / 2);
    image(vid1, 1920/2, 0);
    if (sound2.isPlaying() == 1) {
      delay(1);
      //IT DIMS THE VOLUME TO 0 BUT IT DOESN'T GO BACK TO VOLUME 0.5 [sound2.amp(0.5);]
      sound2.amp(0);
    }
  }
  //VID 02 - Screen 01
  if (avgX &gt; 50 &amp;&amp; avgX &lt; 200) {
    tint(255, (avgX) / 3);
    image(vid, 0 - (1920/2), 0);
  } else {
    tint(0, (avgX) / 3);
    image(vid, 0 - (1920/2), 0);
  }
}
</code></pre>
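<p>For reference, the interaction test above boils down to taking the centroid of the depth pixels that fall inside a near band. That arithmetic can be checked on its own in plain Java (the 2x2 depth array below is made up for illustration):</p>

```java
// Standalone check of the centroid arithmetic used in the sketch:
// average the (x, y) coordinates of every depth sample inside a band.
public class DepthCentroid {

    // Returns {avgX, avgY, count} for samples with min < d < max,
    // or {0, 0, 0} when no sample falls inside the band.
    static float[] centroid(int[] depth, int w, int h, int min, int max) {
        float sumX = 0, sumY = 0;
        int count = 0;
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                int d = depth[x + y * w]; // row-major offset, as in the sketch
                if (d > min && d < max) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return new float[] {0, 0, 0};
        return new float[] {sumX / count, sumY / count, count};
    }

    public static void main(String[] args) {
        // A 2x2 "depth image": only the two left pixels sit in the 0..1000 band.
        int[] depth = {500, 2000, 500, 2000};
        float[] c = centroid(depth, 2, 2, 0, 1000);
        System.out.println(c[0] + " " + c[1] + " " + c[2]); // 0.0 0.5 2.0
    }
}
```

<p>Note that the sketch divides by totalPixels without checking for zero: when nobody is in range, avgX becomes NaN, every range comparison evaluates to false, and only the else branches run. Guarding the division (as above) also gives a clean "nobody present" signal, which is one natural point to trigger the fade-out.</p>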
]]></description>
   </item>
   <item>
      <title>My sketch runs slow when outputting the OSC to Max; I'm using the Kinect RGB and depth cam</title>
      <link>https://forum.processing.org/two/discussion/27133/my-sketch-runs-slow-when-outputting-the-osc-to-max-im-using-kinect-rgb-and-depth-cam</link>
      <pubDate>Fri, 23 Mar 2018 19:15:37 +0000</pubDate>
      <dc:creator>kieran</dc:creator>
      <guid isPermaLink="false">27133@/two/discussions</guid>
      <description><![CDATA[<p>I'm attempting to build a system for hand tracking using the depth camera and colour tracking, and to output the values to Max/MSP, where I can use them for audio mappings for a college project. The sketch runs fine when I comment out my OSC send code, but when sending OSC the RGB and depth streams stall, changing frame only every thirty seconds or so.</p>

<pre><code>//kinect
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
//osc
import oscP5.*;
import netP5.*;


Kinect kinect;
OscP5 oscP5;
NetAddress myRemoteLocation;

PImage cam; 

//Color
color trackColor;
color trackColor2;
float threshold = 150;

void setup() {
  size(640, 480, P2D);
  kinect = new Kinect(this);
  kinect.initVideo();
  kinect.initDepth();
  background(255);

  //Color
  trackColor = color(255, 0, 0);
  trackColor2 = color(0, 255, 0);

  //osc
  myRemoteLocation = new NetAddress("127.0.0.1", 8000);
  oscP5 = new OscP5(this, 8000);
}



void draw() {
  cam = kinect.getVideoImage(); // grab the frame once; reuse the field instead of shadowing it
  image(cam, 0, 0);
  int[] depth = kinect.getRawDepth();

  float avgX1 = 0;
  float avgY1 = 0;
  float avgX2 = 0;
  float avgY2 = 0;

  int count1 = 0;
  int count2 = 0;


  for (int x = 0; x &lt; kinect.width; x++ ) {
    for (int y = 0; y &lt; kinect.height; y++ ) {
      int loc = x + y * kinect.width;
      // What is current color
      color currentColor = cam.pixels[loc];
      float rc = red(currentColor);
      float gc = green(currentColor);
      float bc = blue(currentColor);

      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float r3 = red(trackColor2);
      float g3 = green(trackColor2);
      float b3 = blue(trackColor2);

      float d = distSq(rc, gc, bc, r2, g2, b2);
      float e = distSq(rc, gc, bc, r3, g3, b3);

      if (d &lt; threshold*threshold) {
        stroke(255);
        strokeWeight(1);
        point(x, y);
        avgX1 += x;
        avgY1 += y;
        count1++;
      }
      else if (e &lt; threshold*threshold) {
        stroke(255);
        strokeWeight(1);
        point(x, y);
        avgX2 += x;
        avgY2 += y;
        count2++;
      }

    }
  }




  if (count1 &gt; 0) { 
    avgX1 = avgX1 / count1;
    avgY1 = avgY1 / count1;

    fill(255, 0, 0);
    strokeWeight(4.0);
    stroke(0);
    ellipse(avgX1, avgY1, 24, 24);
  }
   if (count2 &gt; 0) { 
    avgX2 = avgX2 / count2;
    avgY2 = avgY2 / count2;

    //green
    fill(0, 255, 0);
    strokeWeight(4.0);
    stroke(0);
    ellipse(avgX2, avgY2, 24, 24);
  }


  //DEPTH
  for (int x = 0; x &lt; 640; x++) {
    for (int y = 0; y &lt; 480; y++) {
      int offset = x + y * kinect.width;
      int dpth = depth[offset];

      if (!(dpth &gt; 300 &amp;&amp; dpth &lt; 700)) {        
        cam.pixels[offset] = color(0);
      } else if (dpth &gt; 300 &amp;&amp; dpth &lt; 700) {
      /*
        //OSC LEFT
        OscMessage leftXpos = new OscMessage("/leftXpos");
        OscMessage leftYpos = new OscMessage("/leftYpos");

        leftXpos.add(avgX1);
        leftYpos.add(avgY1);

        //OSC RIGHT
        OscMessage rightXpos = new OscMessage("/rightXpos");
        OscMessage rightYpos = new OscMessage("/rightYpos");

        rightXpos.add(avgX2);
        rightYpos.add(avgY2);

        oscP5.send(leftXpos, myRemoteLocation);
        oscP5.send(leftYpos, myRemoteLocation);

        oscP5.send(rightXpos, myRemoteLocation);
        oscP5.send(rightYpos, myRemoteLocation);
       */
      }

    }
  }
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}  
</code></pre>
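<p>The colour match in the sketch is a squared Euclidean distance in RGB space compared against threshold*threshold, which avoids a square root per pixel. Isolated in plain Java:</p>

```java
// Squared RGB distance, as used by the sketch's distSq() helper.
public class ColorDist {

    static float distSq(float x1, float y1, float z1,
                        float x2, float y2, float z2) {
        return (x2 - x1) * (x2 - x1)
             + (y2 - y1) * (y2 - y1)
             + (z2 - z1) * (z2 - z1);
    }

    // True when (r, g, b) lies within `threshold` of the target colour.
    static boolean matches(float r, float g, float b,
                           float tr, float tg, float tb, float threshold) {
        return distSq(r, g, b, tr, tg, tb) < threshold * threshold;
    }

    public static void main(String[] args) {
        // Pure red matches a red target under a threshold of 150 ...
        System.out.println(matches(255, 0, 0, 255, 0, 0, 150)); // true
        // ... while pure green falls well outside it.
        System.out.println(matches(0, 255, 0, 255, 0, 0, 150)); // false
    }
}
```

<p>On the slowdown itself: the commented-out OSC block sits inside the per-pixel depth loop, so enabling it would send four messages for every pixel in the 300-700 range, potentially thousands of times per frame. Moving the four oscP5.send() calls after the loops, so each value goes out once per frame, is the first thing to try.</p>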
]]></description>
   </item>
   <item>
      <title>Kinect and .mov files</title>
      <link>https://forum.processing.org/two/discussion/26694/kinect-and-mov-files</link>
      <pubDate>Wed, 07 Mar 2018 15:27:41 +0000</pubDate>
      <dc:creator>karinalopez87</dc:creator>
      <guid isPermaLink="false">26694@/two/discussions</guid>
      <description><![CDATA[<p>I'm having trouble controlling the .mov file's tint with the raw depth from the Kinect.</p>

<pre><code> import org.openkinect.processing.*;
 import processing.video.*;
  Movie video;
  Kinect2 kinect2;
 int minDepth=0;
  int maxDepth=4500; //4.5m
  void setup() {
  size(1920,1080);
  video = new Movie(this, "final-02.mov");
  video.loop();
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  }
  void movieEvent(Movie video){
 video.read();
  }
  void draw() { 
  image(video, 0, 0);
  video.loadPixels();

int[] depth = kinect2.getRawDepth();

for (int x = 0; x &lt; kinect2.depthWidth; x++){
  for (int y = 0; y &lt; kinect2.depthHeight; y++){
    int offset = x + y * kinect2.depthWidth;
    int d = depth[offset];


    if (d &gt; 10 &amp;&amp; d &lt; 400){
      //video.pixels[offset] = color(255, 100, 15);
      tint(10,255);
    } else {
      //video.pixels[offset] = color(150, 250, 180);
      tint(250,10);
    }
  }
  println(x);
}

  video.updatePixels();
  image(video,0,0);
  }
</code></pre>
]]></description>
   </item>
   <item>
      <title>Changing opacity of silhouettes</title>
      <link>https://forum.processing.org/two/discussion/25959/changing-opacity-of-silhouettes</link>
      <pubDate>Sun, 14 Jan 2018 16:45:28 +0000</pubDate>
      <dc:creator>Betzilla_</dc:creator>
      <guid isPermaLink="false">25959@/two/discussions</guid>
      <description><![CDATA[<p>Hi,
Using Daniel Shiffman's MinMaxThreshold tutorial, I was able to change the colour from red to blue to green based on a person's distance from the Kinect. I would like to make a wall where, when two people walk past each other, their silhouette colours mix. I tried playing with opacity over a background image, but it wouldn't mix two different silhouettes detected by the Kinect. Should I use blob detection to get the Kinect to detect multiple people, and how would I do this? I am using a Kinect2 with Processing 3, and it seems SimpleOpenNI doesn't work with the Kinect2?
Thanks!</p>

<p>Here's the code:</p>

<pre><code> import org.openkinect.processing.*;

// Kinect Library object
Kinect2 kinect2;

//float minThresh = 480;
//float maxThresh = 830;
PImage kin;
PImage bg;

void setup() {
  size(512, 424, P3D);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  kin = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
  bg = loadImage("1219690.jpg");
}


void draw() {
  background(0);

  //loadPixels(); 
  tint(255,254);
  image(bg,0,0);
  kin.loadPixels();

  //minThresh = map(mouseX, 0, width, 0, 4500);
  //maxThresh = map(mouseY, 0, height, 0, 4500);


  // Get the raw depth as array of integers
  int[] depth = kinect2.getRawDepth();

  //float sumX = 0;
  //float sumY = 0;
  //float totalPixels = 0;

  for (int x = 0; x &lt; kinect2.depthWidth; x++) {
    for (int y = 0; y &lt; kinect2.depthHeight; y++) {
      int offset = x + y * kinect2.depthWidth;
      int d = depth[offset];
      //println(d);
      //delay(10);
      tint(255, 127);

      if (d &lt; 500) {
        kin.pixels[offset] = color(255, 0, 0);

        //sumX += x;
        //sumY += y;
        //totalPixels++;

      } else if (d &gt; 500 &amp;&amp; d &lt; 1000) {
        kin.pixels[offset] = color(0, 255, 0);
      } else if (d &gt; 1000 &amp;&amp; d &lt; 1500) {
        kin.pixels[offset] = color(0, 0, 255);
      } else {
        kin.pixels[offset] = color(0);
      }
    }
  }

  kin.updatePixels();
  image(kin, 0, 0);

  //float avgX = sumX / totalPixels;
  //float avgY = sumY / totalPixels;
  //fill(150,0,255);
  //ellipse(avgX, avgY, 64, 64);

  //fill(255);
  //textSize(32);
  //text(minThresh + " " + maxThresh, 10, 64);
}
</code></pre>
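<p>The silhouette colouring reduces to classifying each raw depth value into one of four bands. Here is a standalone Java sketch of that mapping, with two small assumptions: the boundary values 500, 1000 and 1500 are closed up (the original conditions let them fall through to black), and d == 0 is treated as "no reading" rather than "near":</p>

```java
// Depth-band classifier mirroring the sketch's if/else chain.
public class DepthBands {

    // 0 = nearest band, 1 = 500..1000 mm, 2 = 1000..1500 mm,
    // 3 = background (including d == 0, i.e. no depth reading).
    static int band(int d) {
        if (d > 0 && d < 500) return 0;
        if (d >= 500 && d < 1000) return 1;
        if (d >= 1000 && d < 1500) return 2;
        return 3;
    }

    public static void main(String[] args) {
        System.out.println(band(250));  // 0
        System.out.println(band(750));  // 1
        System.out.println(band(1250)); // 2
        System.out.println(band(3000)); // 3
    }
}
```

<p>For mixing two silhouettes, drawing the band colours with additive blending (blendMode(ADD) in Processing 3) is one direction to experiment with, though from a single camera one person occludes the other, so true colour mixing may need each silhouette drawn to its own layer first.</p>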

<p><img src="https://forum.processing.org/two/uploads/imageupload/510/T5JSOU5S33JJ.png" alt="Screen Shot 2018-01-14 at 11.56.45 AM" title="Screen Shot 2018-01-14 at 11.56.45 AM" /></p>
]]></description>
   </item>
   <item>
      <title>How can I stream my Kinect feed from one computer to another with OSC?</title>
      <link>https://forum.processing.org/two/discussion/25137/how-can-i-stream-my-kinect-feed-from-one-computer-to-another-with-osc</link>
      <pubDate>Wed, 22 Nov 2017 02:57:18 +0000</pubDate>
      <dc:creator>KMat</dc:creator>
      <guid isPermaLink="false">25137@/two/discussions</guid>
      <description><![CDATA[<p>Hi all, this is my first post here</p>

<p>I am trying to feed my slightly tweaked PointCloud example from the Open Kinect v1 library from one computer to another (for example's sake, from one Processing sketch to be displayed in another) through the OSC library.</p>

<p>I have attempted it myself, but was unable to send the depth data properly (or at all), and I am lost as to where to start and how to send the data so it can be displayed.</p>

<p>Here is my Kinect code (without any OSC), can anyone help or guide me through what to send/receive? 
(I'm on Mac OS 10.13 and Processing 3.0.1)</p>

<pre><code>import org.openkinect.freenect.*;
import org.openkinect.processing.*;

// Kinect Library object
Kinect kinect;

// Angle for rotation
float a = 0;

// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[750];

void setup() {
  // Rendering in P3D
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();

  // Lookup table for the raw depth values below the 750 cutoff (the original example covers 0 - 2047)
  for (int i = 0; i &lt; depthLookUp.length; i++) {
    depthLookUp[i] = rawDepthToMeters(i);
  }
}

void draw() {

  background(0);

  // Get the raw depth as array of integers
  int[] depth = kinect.getRawDepth();

  // We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
  int skip = 4; //

  // Translate and rotate
  translate(width/2, height/2, 300); //dot distance
  //rotateY(a);

  for (int x = 0; x &lt; kinect.width; x += skip) {
    for (int y = 0; y &lt; kinect.height; y += skip) {
      int offset = x + y*kinect.width;

      // Convert kinect data to world xyz coordinate
      int rawDepth = depth[offset];
      PVector v = depthToWorld(x, y, rawDepth);

      stroke(255, 0, 0);
      pushMatrix();
      float factor = 400; //overall Scale
      translate(v.x*factor, v.y*factor, factor-v.z*factor);
      // Draw a point
      point(0, 0);
      //line(0,0,2,2);
      popMatrix();
    }
  }

  // Rotate
  //a += 0.015f;
}

// These functions come from: <a href="http://graphics.stanford.edu/~mdfisher/Kinect.html" target="_blank" rel="nofollow">http://graphics.stanford.edu/~mdfisher/Kinect.html</a>
float rawDepthToMeters(int depthValue) {
  if (depthValue &lt; 750) {
    return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
  }
  return 0.0f;
}

PVector depthToWorld(int x, int y, int depthValue) {

  final double fx_d = 1.0 / 5.9421434211923247e+02;
  final double fy_d = 1.0 / 5.9104053696870778e+02;
  final double cx_d = 3.3930780975300314e+02;
  final double cy_d = 2.4273913761751615e+02;

  PVector result = new PVector();
  double depth =  rawDepthToMeters(depthValue);
  result.x = (float)((x - cx_d) * depth * fx_d);
  result.y = (float)((y - cy_d) * depth * fy_d);
  result.z = (float)(depth);
  return result;
}
</code></pre>
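<p>For reference, the rawDepthToMeters() calibration can be exercised outside Processing; the constants below are copied from the sketch:</p>

```java
// The depth calibration from the sketch, extracted for a standalone check.
public class DepthCalib {

    static float rawDepthToMeters(int depthValue) {
        if (depthValue < 750) { // the sketch clips its lookup table at 750
            return (float) (1.0 / ((double) depthValue * -0.0030711016 + 3.3309495161));
        }
        return 0.0f; // out of range
    }

    public static void main(String[] args) {
        System.out.println(rawDepthToMeters(0));   // smallest distance, about 0.30 m
        System.out.println(rawDepthToMeters(700)); // larger raw value, larger distance
        System.out.println(rawDepthToMeters(800)); // 0.0, past the cutoff
    }
}
```

<p>On the OSC side, it may help to estimate the data volume first: even sampling every 4th pixel, one frame is 160 x 120 = 19,200 points, so one OSC message per point per frame can easily overwhelm the receiver. Packing each frame (or each row) into a single message is a more workable starting point.</p>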
]]></description>
   </item>
   <item>
      <title>Real time typography with kinect or webcam</title>
      <link>https://forum.processing.org/two/discussion/22001/real-time-typography-with-kinect-or-webcam</link>
      <pubDate>Fri, 14 Apr 2017 20:33:14 +0000</pubDate>
      <dc:creator>olieast</dc:creator>
      <guid isPermaLink="false">22001@/two/discussions</guid>
      <description><![CDATA[<p>Hey everyone. I'm really new to Processing, so excuse me if this question sounds vague. Basically, I am trying to create a real-time typographic piece similar to the ones below using either a Kinect camera or a webcam; I just don't know where to start. The second example feels similar to Shiffman's point cloud example. I have tried to add text to that example, but unfortunately I don't really know what I'm doing!</p>

<p>Could someone please help me out!</p>

<p>Thank you</p>

<p><span class="VideoWrap"><span class="Video YouTube" id="youtube-V2wDl_takes"><span class="VideoPreview"><a href="http://youtube.com/watch?v=V2wDl_takes"><img src="http://img.youtube.com/vi/V2wDl_takes/0.jpg" width="640" height="385" border="0" /></a></span><span class="VideoPlayer"></span></span></span></p>

<p><span class="VideoWrap"><span class="Video YouTube" id="youtube-gffiPYd6K6s"><span class="VideoPreview"><a href="http://youtube.com/watch?v=gffiPYd6K6s"><img src="http://img.youtube.com/vi/gffiPYd6K6s/0.jpg" width="640" height="385" border="0" /></a></span><span class="VideoPlayer"></span></span></span></p>
]]></description>
   </item>
   <item>
      <title>How to mirror kinect depth image with dLib-freenect ?</title>
      <link>https://forum.processing.org/two/discussion/19304/how-to-mirror-kinect-depth-image-with-dlib-freenect</link>
      <pubDate>Sat, 26 Nov 2016 11:00:19 +0000</pubDate>
      <dc:creator>chambefort</dc:creator>
      <guid isPermaLink="false">19304@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>I'm working with dLib-freenect (<a href="https://github.com/diwi/dLibs" target="_blank" rel="nofollow">https://github.com/diwi/dLibs</a>) to use a Kinect v1 on a PC with Processing 3. The mirror method used with openKinect doesn't work: <code>kinect_.enableMirror(true);</code>. Same problem with <code>kinect_.setMirror(true);</code>. There is no mirror method in dLib-freenect.</p>

<p>So I tried something like this:</p>

<pre><code>int[] rawDepth = kinect_depth_.getRawDepth();
  for(int i =0; i &lt; kinectFrame_size_x; i++) {
    for(int j =0; j &lt; kinectFrame_size_y; j++) {
    //int offset = i+j*kinectFrame_size_x;
//I try to flip the depth image
    int offset = (kinectFrame_size_x-i-1)+j*kinectFrame_size_x;
    int d = rawDepth[offset];
     ...  } }
</code></pre>

<p>but the image is not flipped. Any ideas?</p>
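<p>The flipped-offset arithmetic in the snippet is a correct horizontal mirror, which a quick standalone check confirms:</p>

```java
// Reading column (w - 1 - i) instead of column i mirrors a row-major
// image horizontally; the row term j * w is unchanged.
public class MirrorIndex {

    static int flippedOffset(int i, int j, int w) {
        return (w - i - 1) + j * w;
    }

    public static void main(String[] args) {
        int w = 4;
        System.out.println(flippedOffset(0, 0, w)); // 3: column 0 reads from column 3
        System.out.println(flippedOffset(3, 0, w)); // 0: column 3 reads from column 0
        System.out.println(flippedOffset(1, 2, w)); // 10: column 2 of row 2
    }
}
```

<p>Since the index maths checks out, one thing worth checking in the full sketch is whether the later drawing actually uses the flipped d, or instead displays the library's own depth image, which a flipped read in this loop would not affect.</p>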

<p>Thanks a lot!</p>
]]></description>
   </item>
   </channel>
</rss>