<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
      <title>Tagged with captureevent() - Processing 2.x and 3.x Forum</title>
      <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=captureevent%28%29</link>
      <pubDate>Sun, 08 Aug 2021 14:52:11 +0000</pubDate>
         <description>Tagged with captureevent() - Processing 2.x and 3.x Forum</description>
   <language>en-CA</language>
   <atom:link href="/two/discussions/taggedcaptureevent%28%29/feed.rss" rel="self" type="application/rss+xml" />
   <item>
      <title>Static Noise With Video</title>
      <link>https://forum.processing.org/two/discussion/27377/static-noise-with-video</link>
      <pubDate>Mon, 26 Mar 2018 17:25:25 +0000</pubDate>
      <dc:creator>clintonva32</dc:creator>
      <guid isPermaLink="false">27377@/two/discussions</guid>
      <description><![CDATA[<p>Hey processing world!</p>

<p>Still very new at this so I apologize in advance.  I took a few different codes from online and I want to combine them together to make my camera on my laptop control the static.  I pretty much want the shadow of a human figure or object to be seen underneath (or on top of the static).  Anytime I do this however, I cant figure out how to get the data to read with a static image that's moving.</p>

<pre><code>import processing.video.*;
Capture video;

int videoScale=10;
void setup() {

  frameRate(10);
  size(512, 512); 
  video = new Capture(this);  
  video.start();
}
void captureEvent(Capture video) {
  video.read();
}
void draw() {
  background(0);
  video.loadPixels();  

  loadPixels();
  for (int x = 0; x &lt; width; x++) {
    for (int y = 0; y &lt; height; y++) {
      float randomValue = random(255)*videoScale;
      pixels[x+y*width] = color(randomValue, randomValue, randomValue);

  }
  updatePixels();
  }}
</code></pre>
]]></description>
   </item>
   <item>
      <title>using PGraphics</title>
      <link>https://forum.processing.org/two/discussion/27734/using-pgraphics</link>
      <pubDate>Tue, 10 Apr 2018 11:31:13 +0000</pubDate>
      <dc:creator>james0411</dc:creator>
      <guid isPermaLink="false">27734@/two/discussions</guid>
      <description><![CDATA[<p>Hi there, I have problems with my code. what I want to do is tracking color pixel. I want to capture an image with the only traced color(without background video). so I used PGraphics, but It is so slow that the program can't track color well. I think using PGraphics makes the program slow. 
is there any advice to make tracking faster?</p>

<p>Here is my code.</p>

<pre><code>import processing.video.*;

Capture video;

color trackColor= color(145,43,54);
float threshold = 25;
PGraphics topLayer;


float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}

void setup() {
  frameRate(60);
  size(640, 360);
  String[] cameras = Capture.list();
  printArray(cameras);
  video = new Capture(this, cameras[3]);
  video.start();
  trackColor = color(145,43,54);
  topLayer = createGraphics(width, height, g.getClass().getName());
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {

  video.loadPixels();
  image(video, 0, 0);

  //threshold = map(mouseX, 0, width, 0, 100);
  threshold = 50;

  int avgX = 0;
  int avgY = 0;
  float worldRecord = 0;

  // Begin loop to walk through every pixel
  for (int x = 0; x &lt; video.width; x++ ) {
    for (int y = 0; y &lt; video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      //
      float d = distSq(r1, g1, b1, r2, g2, b2); 
      if (d &lt; threshold*threshold) {
        worldRecord = d;
        avgX = x;
        avgY = y;
    }
  }
 }
   topLayer.beginDraw();
  topLayer.stroke(trackColor);
  if (worldRecord &lt; threshold*threshold) {
    topLayer.fill(trackColor);
    topLayer.strokeWeight(1);
    topLayer.point(avgX, avgY); ////////////////////////////////////////
  }
  topLayer.endDraw();
    image(topLayer, 0, 0);
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>PGraphics Background Color</title>
      <link>https://forum.processing.org/two/discussion/27754/pgraphics-background-color</link>
      <pubDate>Fri, 13 Apr 2018 06:23:15 +0000</pubDate>
      <dc:creator>james0411</dc:creator>
      <guid isPermaLink="false">27754@/two/discussions</guid>
      <description><![CDATA[<p>Hi there, when I save PGraphics to jpg file, the background color is always Black. I've tried  ####.background(255)  like this to change the background color. But I'm using my canvas with live Video. So the outcome was not what I want. How can I change the background Color when I save PGraphics?  Is there any way to change it?</p>

<p>Here is my Code.</p>

<pre><code>import processing.video.*;

Capture video;

color trackColor = color(145,43,54);
color trackColor2 = color(142,156,42);
color trackColor3 = color(18,35,40);

float threshold = 25;
PGraphics topLayer;

float r2;
float g2;
float b2;

float r3;
float g3;
float b3;

float r4;
float g4;
float b4;

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;

}
float distSq2(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d2 = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d2;

}
float distSq3(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d3 = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d3;

}

void setup() {
  size(640, 360);
  String[] cameras = Capture.list();
  printArray(cameras);
  video = new Capture(this, cameras[3]);
  video.start();
  trackColor = color(145,43,54);
  trackColor2 = color(142,156,42);
  trackColor3 = color(18,35,40);

  topLayer = createGraphics(width, height, g.getClass().getName());
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {

  video.loadPixels();
  image(video, 0, 0);

  //threshold = map(mouseX, 0, width, 0, 100);
  threshold = 25;
  float thresholdSquare = threshold * threshold;

  // Begin loop to walk through every pixel

  topLayer.beginDraw();
  topLayer.clear();
  for (int x = 0; x &lt; video.width; x++ ) {
    for (int y = 0; y &lt; video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];

      float r1 = (currentColor&gt;&gt;16)&amp;0xFF;
      float g1 = (currentColor&gt;&gt;7) &amp; 0xFF ;
      float b1 = (currentColor) &amp; 0xFF;

      r2 = (trackColor&gt;&gt;16)&amp;0xFF;
      g2 = (trackColor&gt;&gt;7) &amp; 0xFF;
      b2 = (trackColor) &amp; 0xFF;

      r3 = (trackColor2&gt;&gt;16)&amp;0xFF;
      g3 = (trackColor2&gt;&gt;7) &amp; 0xFF;
      b3 = (trackColor2) &amp; 0xFF;

      r4 = (trackColor3&gt;&gt;16)&amp;0xFF;
      g4 = (trackColor3&gt;&gt;7) &amp; 0xFF;
      b4 = (trackColor3) &amp; 0xFF;
      //
      float d = distSq(r1, g1, b1, r2, g2, b2);
      float d2 = distSq2(r1, g1, b1, r3, g3, b3);
      float d3 = distSq3(r1, g1, b1, r4, g4, b4);

      if (d &lt; thresholdSquare) {
        topLayer.stroke(trackColor);
        topLayer.fill(trackColor);
        topLayer.ellipse(x, y,2,2);
      }
      else if( d2 &lt; thresholdSquare){  
        topLayer.stroke(trackColor2);
        topLayer.fill(trackColor2);
        topLayer.ellipse(x,y,2,2);
      }
      else if(d3 &lt; thresholdSquare){
        topLayer.stroke(trackColor3);
        topLayer.fill(trackColor3);
        topLayer.ellipse(x,y,2,2);
      }
      }
     ////////////////////////////////////////
      }
        topLayer.endDraw(); 
        image(topLayer, 0, 0);

    if(mousePressed){
      topLayer.save("example.jpg");
    }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to change color of overlapping ellipses</title>
      <link>https://forum.processing.org/two/discussion/26467/how-to-change-color-of-overlapping-ellipses</link>
      <pubDate>Wed, 21 Feb 2018 14:22:01 +0000</pubDate>
      <dc:creator>lottereynaers</dc:creator>
      <guid isPermaLink="false">26467@/two/discussions</guid>
      <description><![CDATA[<p>Hi! I have a question. I am fairly new to processing, so I need a little help.</p>

<p>I am trying to track the visitors of an exhibition. Right now I am comparing each frame with the previous one, detecting changes in pixels. Each change is visualized with an ellipse with opacity 10. The overlapping ellipses create a darker area, of course, but I would like to have the color of the ellipses (or overlaps of the ellipses) change when they have overlapped a few times (let's say 10). I would like to have a heat map effect.</p>

<p>This is my code right now:</p>

<pre><code>import processing.video.*;
Capture video;

float threshold = 30;
int vidw = 640;// video width zie lijst println
int vidh = 420;// video height
int step = 2; //om de hoeveel pixels scannen op beweging
PImage prevFrame;



void setup() {
  size(displayWidth, displayHeight);//P3D
  //video = new Capture(this, "name=Logitech HD Pro Webcam C920,size=640x480,fps=30");
  video = new Capture(this, 640, 420);
  video.start();
  noCursor();
  prevFrame = createImage(vidw, vidh, RGB);
  //frameRate(12);
  smooth();
  noStroke();
  background (#FFFFFF);
}

void captureEvent(Capture video) {
  prevFrame.copy(video, 0, 0, vidw, vidh, 0, 0, vidw, vidh); // Before we read the new frame, we always save the previous frame for comparison!
  prevFrame.updatePixels();  // Read image from the camera
  video.read();
}


void draw() {
  //background (0);
  //image(video, 0, 0, width, height);
  //println(vidw + " - " + vidh);
  //if (millis()&gt;2000) {
  coordinates();
  //}
}


void coordinates() {
  loadPixels();
  video.loadPixels();
  prevFrame.loadPixels();
  // Begin loop to walk through every pixel
  for (int i = 0; i &lt; vidw; i +=step ) {
    for (int j = 0; j &lt; vidh; j +=step) {
      int loc = i + j*vidw;                   // Step 1, what is the 1D pixel location
      color current = video.pixels[loc];      // Step 2, what is the current color
      color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color
      float r1 = red(current);
      float g1 = green(current);
      float b1 = blue(current);
      float r2 = red(previous);
      float g2 = green(previous);
      float b2 = blue(previous);
      float diff = dist(r1, g1, b1, r2, g2, b2);
      if (diff &gt; threshold) {
        fill(#5882FA, 10);
        ellipse(i*step,j*step,10,10);
        }
    }
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Copy pixel[] and overlay</title>
      <link>https://forum.processing.org/two/discussion/26241/copy-pixel-and-overlay</link>
      <pubDate>Mon, 05 Feb 2018 01:09:50 +0000</pubDate>
      <dc:creator>MforMatt</dc:creator>
      <guid isPermaLink="false">26241@/two/discussions</guid>
      <description><![CDATA[<p><img src="https://forum.processing.org/two/uploads/imageupload/236/ZE7GTQUNOB4F.png" alt="Screen Shot 2018-02-05 at 1.16.36 PM" title="Screen Shot 2018-02-05 at 1.16.36 PM" /></p>

<p>I'm trying to create an effect like the image above which I have done by crudely exporting a frame then loading it and blending it - which isn't very memory/processor efficient. I'm wanting to know what the best way producing this effect would be with the pixel array. ie - how to store the pixels[] of a single frame.</p>

<p>thanks.</p>

<pre><code>import processing.video.*;


PImage test, test2;
float FPS;
int count;
int mod;
int rand;


Capture video;

void setup() {
  size(640, 420, P2D);
  video = new Capture(this, 640, 360, 30);
  video.start();

}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  FPS = frameRate;
  count = ceil(millis()/1000);
  mod = count % 5;

  background(245, 238, 96);

  image(video, 0, 0, 640, 420);

  loadPixels();
  for (int i = 0; i &lt; pixels.length; i++) {

    float b = brightness(pixels[i]);
    if (b &gt; 97) {
      pixels[i] = color (255);
    } else {
      pixels[i] = color (245, 238, 96);
    }
  }

  updatePixels();
  textSize(16);
  fill(0);
  text(FPS, 50, 50);
  text(count, 50, 75);
  text(mod, 50, 100);
  text(mouseX, 50, 125);


}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How in processing can I get it to choose one of two different codes for webcam?</title>
      <link>https://forum.processing.org/two/discussion/26230/how-in-processing-can-i-get-it-to-choose-one-of-two-different-codes-for-webcam</link>
      <pubDate>Sat, 03 Feb 2018 14:24:29 +0000</pubDate>
      <dc:creator>dazedbambi</dc:creator>
      <guid isPermaLink="false">26230@/two/discussions</guid>
      <description><![CDATA[<p>**Hello, so i'm fairly new to processing and wanted to incorporate more than one function when the webcam is on. I have some examples of code I'll use but I wondered if there is a way for processing to randomly choose whether to use one example or the next? I've tried looking at functions but I don't seem to be getting anywhere.
**</p>

<pre><code>// Example one 

import processing.video.*; 
Capture video;

PImage img1;
int w=640, h=480;

boolean bright = true;
boolean greyScale;
int shiftAmount = 4;
int grid = 1;
void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480); 
  video.start();
}

void draw() { 
  loadPixels(); // Fills pixelarray
  float mouseMap = (int) map(mouseX, 0, width, 0, 255*3); // Brightness threshold mapped to mouse coordinates

if(shiftAmount &gt; 45 || shiftAmount &lt; 0){shiftAmount = 0;};

  for (int y = 0; y&lt; h; y++)
  {
    for (int x = 0; x&lt; w; x++)
    {
      color c = video.pixels[y*video.width+x]; 

      int a = (c &gt;&gt; 150) &amp; 0xFF;
      int r = (c &gt;&gt; 12) &amp; 0xFF;  
      int g = (c &gt;&gt; 2) &amp; 0xFF;  
      int b = c &amp; 0xFF; 

      if (y %grid == 0) {

        if (bright)
        {
          if (r+g+b &gt; mouseMap) {
            pixels[y*w+x] = c &lt;&lt; shiftAmount; // Bit-shift based on shift amount
          }
        }

        if (!bright)
        {
          if (r+g+b &lt; mouseMap) {
            pixels[y*w+x] = c &lt;&lt; shiftAmount; // Bit-shift based on shift amount
          }
        }
      }
    }
  }
  updatePixels();

  if (greyScale) {
    filter(GRAY);
  }

  println("Shift amount: " + shiftAmount + " Frame rate: " + (int) frameRate + " Greyscale: " + greyScale) ;
}

void keyPressed()
// Keyboard controls
{
  switch(keyCode) {
  case UP:
    shiftAmount++;
    break;
  case DOWN:
    shiftAmount--;
    break;
  case LEFT:
    if (grid &gt; 1) {
      grid--;
    }    
    break;
  case RIGHT:
    grid++;    
    break;
  case TAB:
    if (bright) {
      bright = false;
    }
    if (!bright) {
      bright = true;
    }
    break;
  case ENTER:
    if (!greyScale) {
      greyScale = true;
      break;
    }
    if (greyScale) {
      greyScale = false;
      break;
    }
  }
}

void captureEvent(Capture c) { 
  c.read();
} 


// Example Two

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();
  cam = new Capture(this, width, height);
  cam.start();
  ellipseMode(CENTER);
}

void draw() {
  noStroke();
  background(255);
  for (int i = 0; i &lt; width; i = i+20) {
    for (int j = 0; j &lt; height; j = j+20) {
      fill(cam.get(i, j) * 4);
      ellipse(i, j, 20, 20);
    }
  }

  if (cam.available() == true) {
    cam.read();
  }
  //image(cam, 0, 0);
}
</code></pre>

<p>IF anyone would could help and tell me how I would merge both the codes so either one runs or the other each time I randomly load processing that would be a huge help!</p>
]]></description>
   </item>
   <item>
      <title>Light drawing: make 2 light sources interact with eachother</title>
      <link>https://forum.processing.org/two/discussion/26181/light-drawing-make-2-light-sources-interact-with-eachother</link>
      <pubDate>Tue, 30 Jan 2018 21:58:28 +0000</pubDate>
      <dc:creator>ga77</dc:creator>
      <guid isPermaLink="false">26181@/two/discussions</guid>
      <description><![CDATA[<p>I'm new to processing and have been doing funprogramming.org light drawing tutorial (no. 150). Just wondering if it's possible to include a second light source, maybe a red and a white light, and have something happen when the 2 cross paths. Is this possible and how would I go about it? thanks in advance!</p>
]]></description>
   </item>
   <item>
      <title>Why I receive a NullPointerException error? Line 52</title>
      <link>https://forum.processing.org/two/discussion/26063/why-i-receive-a-nullpointerexception-error-line-52</link>
      <pubDate>Sun, 21 Jan 2018 16:52:17 +0000</pubDate>
      <dc:creator>laladudi</dc:creator>
      <guid isPermaLink="false">26063@/two/discussions</guid>
      <description><![CDATA[<p>The class should take the image and in the black pixels draw a rect of random color.</p>

<pre><code>import processing.video.*;

Capture realTime;
ritrattoClass selfie;
PImage img;

void setup()
{
   size(640,480);
   background(0);
   textSize(40);
   text("Say Cheese and Press the Mouse", 5, width/3);
   realTime= new Capture(this,640,480); 
   realTime.start();
   loadPixels();
}

void captureEvent(Capture realTime)
{
  realTime.read();
}

void draw()
{
  //image(realTime,0,0);
  if (mousePressed == true) {
    realTime.stop();
    img.filter(THRESHOLD);
    image(img,0,0);
    save("selfie.jpg");
    selfie=new ritrattoClass(img);
    selfie.ritratto();
  }
}

void mousePressed() {
  img=realTime.copy(); //saveFrame("selfie.jpg");
}

-----------Class-------------

class ritrattoClass{

  PImage img;
  PImage foto;

  ritrattoClass(PImage foto){
    img=foto;} //nel programma foto sarà una variabile con l'immagine "selfie"


  void ritratto(){
      foto.loadPixels();
      for (int y=0; y&lt;foto.height; y+=3)
      {
       for (int x=0; x&lt;foto.width; x+=3)
       {
         int index = x+foto.width*y;
         color col= foto.pixels[index];
         if (col==0){
         fill(random(255),random(255),random(255));
         rect(x,y,3,3);
         }
       }
      }
    updatePixels();
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to make the ellipse stay on captured colour - video cam</title>
      <link>https://forum.processing.org/two/discussion/25864/how-to-make-the-ellipse-stay-on-captured-colour-video-cam</link>
      <pubDate>Mon, 08 Jan 2018 02:27:01 +0000</pubDate>
      <dc:creator>dyrra</dc:creator>
      <guid isPermaLink="false">25864@/two/discussions</guid>
      <description><![CDATA[<p>I want achive that with everz click on colour, there will appear and staz ellipse, capturing the colour/not to disappear. Is there some easier way than to create separate drawing loops for different ellipses? Just to say something like capture and stay, even if I click again somewhere else? (example from Levin and Shiffman)</p>

<pre><code>import processing.video.*;

Capture video;

// A variable for the color we are searching for.
color trackColor; 

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  // Start off tracking black
  trackColor = color(0);
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

//more higher more arger chance to find the colour 
  float worldRecord = 500; 

// seek for the closest color
  int closestX = 0;
  int closestY = 0;

// search every pixel
  for (int x = 0; x &lt; video.width; x ++ ) {
    for (int y = 0; y &lt; video.height; y ++ ) {
      int loc = x + y*video.width;
// define and showWhat is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

// Using euclidean distance to compare colors
      float d = dist(r1, g1, b1, r2, g2, b2); // We are using the dist( ) function to compare the current color with the color we are tracking.

      // If current color is more similar to tracked color than
      // closest color, save current location and current difference
      if (d &lt; worldRecord) {
        worldRecord = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // trackcolour not more far than 20 
    if (worldRecord &lt; 20) { 
    // Draw a circle at the tracked pixel
    fill(trackColor);
    strokeWeight(4.0);
    noStroke();
    ellipse(closestX, closestY, 60, 60);
  }
}

void mousePressed() {
  // store the colour from video and keep it tracked
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Face recognition - to visualise hidden background</title>
      <link>https://forum.processing.org/two/discussion/25830/face-recognition-to-visualise-hidden-background</link>
      <pubDate>Fri, 05 Jan 2018 10:36:45 +0000</pubDate>
      <dc:creator>ViciousR</dc:creator>
      <guid isPermaLink="false">25830@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone,</p>

<p>Thank you for looking at my post and hope you can help me. I have a university project and I need to have a hidden background image, and then using face recognition to reveal this image (depending on the location of your face). Here is where I've got so far.</p>

<p>Can't seem to make the background disappear. Thank you in advance</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.*;

PImage photo;
int pixelcount;
color pixelcolor;

Capture firstcam;
OpenCV opencv;

void setup() {
  fullScreen();

  firstcam=new Capture(this, 1600, 900);
  opencv=new OpenCV(this, 1600, 900);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  firstcam.start();
  frameRate(24);
  noFill();

  colorMode(HSB, 360, 100, 100);
  strokeWeight(1);
  smooth();
  photo = loadImage("photo.jpg");
}

void draw() {
  opencv.loadImage(firstcam);
  Rectangle[] faces = opencv.detect();
  image(photo, 0, 0, 1600, 900);
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);

  for (int i = 0 ; i &lt; faces.length ; i++) {
    ellipse(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    stroke(255,0,0);
    lights();
    smooth();
    strokeWeight(1.5);
  }

  popMatrix();
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How do I down-sample video to large pixels and get the pixel array data which will then be output...</title>
      <link>https://forum.processing.org/two/discussion/25731/how-do-i-down-sample-video-to-large-pixels-and-get-the-pixel-array-data-which-will-then-be-output</link>
      <pubDate>Wed, 27 Dec 2017 17:10:32 +0000</pubDate>
      <dc:creator>blvckmonolith</dc:creator>
      <guid isPermaLink="false">25731@/two/discussions</guid>
      <description><![CDATA[<p>Ultimately I'm trying to do this: <a rel="nofollow" href="https://vimeo.com/80364336">https://vimeo.com/80364336</a></p>

<p>Which is taking down-sampled web-cam footage. Sending the large pixel-block-color-information (black and white) array data over UDP (over the network) and have Cinema-4d listen to the UDP data packets with a python effector.</p>

<p>My immediate issue is getting the array pixel data for the large pixels (not every single individual pixel, which I think I'm getting now). I'm not sure what is being output when I use this code in the console though.
Is this code sending every pixels color data in the window to the console?
How do I get just the big down sampled pixel blocks?
Or is it already only sending the rows/columns that have been down-sampled?
I'm a little confused. This stuff is kind of over my head, but I'm trying to figure it out.
Any help would be appreciated.</p>

<p>Here's more info on what I'm trying to do and what my immediate issue is:
I'm trying to print the large pixel values every frame (eventually I'd like to only have black and white values but I haven't gotten that far).
I'm not too sure how exactly Processing 3.3.6 is interpreting this code. Or if this is the wrong way to go about getting the pixel array data clean enough to send to another program and interpret.
 I'm just hacking code together right now like a maniac. I finally got pixel output in the console (I think), though I don't know what the output actually means. It looks like Java (hex info?), but I don't really know exactly.</p>

<p>I'm getting print data like this: [I@8e9278b[I@8e9278b[I@8e9278b[I@8e9278b[I@8e9278b[I@8e9278...</p>

<pre><code>import processing.video.*;

// Size of each cell in the grid, ratio of window size to video size
//Screen Pixels are 80 width and 60 height in the case of 640/480
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {  
  size(640, 480);  
  // Initialize columns and rows  
  cols = width/videoScale;  
  rows = height/videoScale;  
  background(0);
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {  
  video.read();
}

void draw() {
  video.loadPixels();  
  // Begin loop for columns  
  for (int i = 0; i &lt; cols; i++) {    
    // Begin loop for rows    
    for (int j = 0; j &lt; rows; j++) {      
      // Where are you, pixel-wise?      
      int x = i*videoScale;      
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];
      fill(c);   
      stroke(0);      
      rect(x, y, videoScale, videoScale);

      //Experiment code for capturing video pixel color data so far:
      loadPixels();
      //I'm not sure I need this get() - taking it out still sends data to console.
      get();
      print(pixels);
    }  
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How do I send a downsampled webcam video's color pixels (as greyscale) through UDP?</title>
      <link>https://forum.processing.org/two/discussion/25688/how-do-i-send-a-downsampled-webcam-video-s-color-pixels-as-greyscale-through-udp</link>
      <pubDate>Sat, 23 Dec 2017 14:54:19 +0000</pubDate>
      <dc:creator>blvckmonolith</dc:creator>
      <guid isPermaLink="false">25688@/two/discussions</guid>
      <description><![CDATA[<p>So I'm ultimately trying to do this exactly: <a rel="nofollow" href="https://vimeo.com/80364336">https://vimeo.com/80364336</a></p>

<p>I've talked to Adam about how he did it, and he's using openFrameworks.
He said he takes video from his webcam - down-samples it to those big pixels - then sends the greyscale (0-255) array information through UDP to Cinema 4d, and then has a videoIn python script (as an effector) which takes that information and displays the shading on a matrix of cubes (1 for each pixel in his down-sampled array).</p>

<p>I'm using Processing 3.0 and he said it would work just the same if I got everything working right.
Well I found code that I've gotten to where the output is very similar to his video-downsampled.</p>

<pre><code>import processing.video.*;

// Size of each cell in the grid, ratio of window size to video size
//Screen Pixels are 80 width and 60 height in this case 640/480
//Note: 128 large-pixel width at 1024 and 72 big-pixels at 576 height
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {  
  size(640, 480);  
  // Initialize columns and rows  
  cols = width/videoScale;  
  rows = height/videoScale;  
  background(0);
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {  
  video.read();
}

void draw() {
  video.loadPixels();  
  // Begin loop for columns  
  for (int i = 0; i &lt; cols; i++) {    
    // Begin loop for rows    
    for (int j = 0; j &lt; rows; j++) {      
      // Where are you, pixel-wise?      
      int x = i*videoScale;      
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];
      fill(c);   
      stroke(0);      
      rect(x, y, videoScale, videoScale);    
    }  
  }
}
</code></pre>

<p>The above code gets me basically in the ballpark as far as a video downsampled to a manageable array size for UDP transfer.
I've been able to get a simple UDP message sent from Processing 3.0 to cinema 4d in a python tag, which as I advance each frame I get the message (which is looping in processing) each frame I move forward. So in theory I'm getting there.</p>

<pre><code>import hypermedia.net.*;

int port = 20000;
String ip ="127.0.0.1";
String message =new String("Hello");
UDP udpTX;

void setup(){
udpTX=new UDP(this);
udpTX.log(true);
noLoop();
}

void draw(){
udpTX.send(message,ip,port);
delay(499);
loop();
}
</code></pre>

<p>With this UDP transfer the string "Hello" gets sent in a loop with a little less than a half second delay.
To the question!
How do I meld the two? Have the camera start showing the down-sampled video and then be sending the black and white "big" pixel color data through the UDP connection?
I'm new to this, and so filling in the gaps is a big challenge but I'm trying! If I can't get help though I probably will have to chalk this one up to being over my head. Hopefully someone here is a genius who can help me. :D</p>
]]></description>
   </item>
   <item>
      <title>How to improve sketch performance</title>
      <link>https://forum.processing.org/two/discussion/25617/how-to-improve-sketch-performance</link>
      <pubDate>Mon, 18 Dec 2017 10:11:04 +0000</pubDate>
      <dc:creator>dehyde</dc:creator>
      <guid isPermaLink="false">25617@/two/discussions</guid>
      <description><![CDATA[<p>I have a sketch that's using a few long arrays and image processing.</p>

<p>Is there a way to make openGL render the sketch, or any other simple "trick" to improve performance?</p>

<p><img src="" alt="" /><img src="https://forum.processing.org/two/uploads/imageupload/342/SAGP6AEML8HV.png" alt="Screenshot_13" title="Screenshot_13" /></p>
]]></description>
   </item>
   <item>
      <title>Get biggest face from openCV</title>
      <link>https://forum.processing.org/two/discussion/25492/get-biggest-face-from-opencv</link>
      <pubDate>Sun, 10 Dec 2017 11:05:56 +0000</pubDate>
      <dc:creator>martinusbar</dc:creator>
      <guid isPermaLink="false">25492@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>This is my first post on this forum, and i'm in need of your help. I'm working on a project with an android that has a set of animatronic eyes and a facetracking feature. I'm using the opencv library and Processing 3.3.6 to execute the tracking, and an arduino t control the eyes. At the moment i've created a script where the eyes are following only ONE face, but sometimes the eyes 'jump' when a new face enters the webcam. I would like to avoid this, so my reasoning was to always get the biggest width of the faces detected and send the x and y coordinates to the arduino. I found similar questions on the forum to 'Get the largest element from an array' but although i understand the logic my sketch keeps on outputting all sets of x and y coordinates detected. A supplementary note is in place that i work heuristically with code and have very basic knowledge of programming languages. Any push in the right direction is highly appreciated. Below the processing code:</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    import processing.serial.*;

    Capture video;
    OpenCV opencv;
    Serial myPort;  // Create object from Serial class

    int newXpos, newYpos;
    //These variables hold the x and y location for the middle of the detected face
    int midFaceX = 0;
    int midFaceY = 0;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

     //println(Serial.list()); // List COM-ports (Use this to figure out which port the Arduino is connected to)
      String portName = Serial.list()[1];
      //select first com-port from the list (change the number in the [] if your sketch fails to connect to the Arduino)
      myPort = new Serial(this, portName, 19200);   //Baud rate is set to 19200 to match the Arduino baud rate.

      video.start();
    }


    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0, 40);
      strokeWeight(3);
      Rectangle[] faces = opencv.detect();

      int maxValueFace = 0;
      int maxIndex = -1;

      for (int i = 0; i &lt; faces.length; i++ ) {

        if (faces[i].width &gt; maxValueFace) {
          maxIndex = i;
          maxValueFace = faces[i].width;
          //println(maxValueFace);

        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); //
        midFaceX = faces[i].x + (faces[i].width/2); // middle of the face
        midFaceY = faces[i].y + (faces[i].height/2); // middle of the face
        float xpos = map(midFaceX, 0, width, 90, 120); //maps range of servos L-&gt;R
        float ypos = map(midFaceY, 0, height, 90, 120); //maps range of servos U-&gt;D
        int newXpos = (int)xpos; //converts position X float into integer
        int newYpos = (int)ypos; //converts position Y float into integer
        myPort.write(newXpos+"x"); // send X coordinate to Arduino
        myPort.write(newYpos+"y"); // send Y coordinate to Arduino
        println(midFaceX + "," + midFaceY);
        }
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to trigger an action with face detection?</title>
      <link>https://forum.processing.org/two/discussion/25482/how-to-trigger-an-action-with-face-detection</link>
      <pubDate>Sat, 09 Dec 2017 23:44:10 +0000</pubDate>
      <dc:creator>arnolds112</dc:creator>
      <guid isPermaLink="false">25482@/two/discussions</guid>
      <description><![CDATA[<p>Hello,
Is it possible to use face detection to trigger a video to play?
if webcam detects a face, video plays
im having trouble with achieving this</p>
]]></description>
   </item>
   <item>
      <title>Interactive image using Face detection (OpenCV)</title>
      <link>https://forum.processing.org/two/discussion/25470/interactive-image-using-face-detection-opencv</link>
      <pubDate>Sat, 09 Dec 2017 11:05:54 +0000</pubDate>
      <dc:creator>Pharaonn</dc:creator>
      <guid isPermaLink="false">25470@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>there is a program I would like to make with a face detector using OpenCV. So basically you would have an image A (could be a video) that when a face is close enough to the camera, slowly morphs into an image B. 
So far, I got to make the program change the images whether or not the face is recognized but now I would like to : 
1) tell the face detector to detect the face only when it's around 3feet (1meter) away so the image can change only in that moment.
2) make the "image changing" really smooth and progressive (maybe if the two images merge together using opacity or something ?)</p>

<p>I am new to Processing and even more to OpenCv that's why I would be so gload if someone has the solution or can help me !</p>

<p>I quoted my code if it helps… The images can be replaced (btw, they are scaled up when the program runs and I don't understand why… that's another problem but not the most important one)</p>

<blockquote class="Quote">
  <p>import gab.opencv.*; 
  import processing.video.*; 
  import java.awt.Rectangle;
   PImage image;
   PImage flou;
   PImage nette;
  Capture cam; 
  OpenCV opencv; 
  Rectangle[] faces;
   
  void setup() { 
    fullScreen(); 
    background (0, 0, 0); 
    cam = new Capture( this, 640, 480, 30); 
    cam.start(); 
    opencv = new OpenCV(this, cam.width, cam.height); 
    opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
    image=loadImage("image.jpg");
    nette=loadImage("nette.jpg");
    flou=loadImage("flou.jpg");
  }
   
  void draw() { 
    opencv.loadImage(cam); 
    faces = opencv.detect(); 
    image(cam, 0, 0); 
   
    if (faces!=null) { 
      for (int i=0; i&lt; faces.length; i++) { 
        image(nette,0,0);
        noFill(); 
        stroke(255, 255, 0); 
        strokeWeight(10);<br />
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    } 
    if (faces.length&lt;=0) { 
      textAlign(CENTER); 
      fill(255, 0, 0); 
      textSize(56); 
      println("no faces");
      image(flou,0,0);
      text("UNDETECTED", 200, 100);
    }
  }
   
  void captureEvent(Capture cam) { 
    cam.read();}</p>
</blockquote>

<p>Thank you so much !!</p>
]]></description>
   </item>
   <item>
      <title>Fullframe Video Capture</title>
      <link>https://forum.processing.org/two/discussion/25304/fullframe-video-capture</link>
      <pubDate>Thu, 30 Nov 2017 17:05:06 +0000</pubDate>
      <dc:creator>bonniem</dc:creator>
      <guid isPermaLink="false">25304@/two/discussions</guid>
      <description><![CDATA[<p>Does anyone know how to get this video capture code working at fullScreen( )? When I change it to full screen it does not work at all and I know it is based on the fact this code was written for a specific size and a lot is based on that size.</p>

<p>This is a modification of Daniel Shiffman's Video - Software Mirrors code.  <a href="https://processing.org/tutorials/video/" target="_blank" rel="nofollow">https://processing.org/tutorials/video/</a></p>

<p>Thanks!</p>

<pre><code>import processing.video.*;

// Size of each cell in the grid, ratio of window size to video size
int videoScale = 20;

// Number of columns and rows in the system
int cols, rows;

// Variable to hold onto Capture object
Capture video;

// used for the noise randomness
float xoff = 0.0;
float yoff = 0.0;
float coff = 0.0;


void setup() {  
  size(1280, 720); 
  //fullScreen( );

  // Initialize columns and rows  
  cols = width/videoScale;  
  rows = height/videoScale;  
  background(0);
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {  
  video.read();
}

void draw() {
  video.loadPixels(); 

  // Begin loop for columns  
  for (int i = 0; i &lt; cols; i++) {    
    // Begin loop for rows    
    for (int j = 0; j &lt; rows; j++) {            
      int x = i*videoScale;      
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];

      fill(c);   
      stroke(0); 

      //rotate the rect
      xoff = xoff + .01;
      float n = noise(xoff) * width;
      pushMatrix( );
      translate(x + videoScale/2, y + videoScale/2);
      rotate(radians(45 + n));
      translate(-(x + videoScale/2), -(y + videoScale/2));

      // draw the rect with the video color
      rect(x, y, videoScale/1.9, videoScale/1.9); 
      popMatrix( );
    }
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Horizontal flip on Blob Detection library sketch</title>
      <link>https://forum.processing.org/two/discussion/25182/horizontal-flip-on-blob-detection-library-sketch</link>
      <pubDate>Fri, 24 Nov 2017 17:27:29 +0000</pubDate>
      <dc:creator>jyerbury</dc:creator>
      <guid isPermaLink="false">25182@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>I am using the BlobDetection library for a project and I would like to be able to flip the resulting video horizontally (so that it is like a mirror). I have so far only been able to flip the underlying video, but not the video output that the blobs show on.</p>

<p>I have tried and tried, so if anyone has any insight as to how to accomplish this I would be so grateful!</p>

<p>This is the code I am trying to use to do the flip, but it only works on the underlying video.</p>

<pre><code>  scale (-1,1);
  image(img, -width, height);
</code></pre>

<p>This is the entire sketch :</p>

<pre><code>// - Super Fast Blur v1.1 by Mario Klingemann &lt;<a href="http://incubator.quasimondo.com&gt" target="_blank" rel="nofollow">http://incubator.quasimondo.com&gt</a>;
// - BlobDetection library

import processing.video.*;
import blobDetection.*;

Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;

void setup()
{
  size(640, 480);
  cam = new Capture(this, 40*4, 30*4, 15);
        cam.start();

  // BlobDetection
  // img which will be sent to detection (a smaller copy of the cam frame);
  img = new PImage(80,60); 
  theBlobDetection = new BlobDetection(img.width, img.height);
  scale (-1,1);
  image(img, -width, height);
  theBlobDetection.setPosDiscrimination(true);
  theBlobDetection.setThreshold(0.5f); // will detect bright areas whose luminosity &gt; 0.2f;
}

void captureEvent(Capture cam)
{
  cam.read();
  newFrame = true;
}

void draw()
{
  if (newFrame)
  {
    newFrame=false;
    image(cam,0,0,width,height);
    img.copy(cam, 0, 0, cam.width, cam.height, 
        0, 0, img.width, img.height);
    fastblur(img, 2);
    theBlobDetection.computeBlobs(img.pixels);
    drawBlobsAndEdges(false,true);  

    loadPixels();
  color R =  color (255,0,0);   
  for (int x= 0; x &lt; width; x++){
    int check = 0;
    for (int y = 0; y &lt; height; y++){
     if  (pixels[x+y*width] == R) {
     pixels[x+y*width] = R;
     } else if(x == R){
       //pixels[x+y*width] =

     }
    }
  }
updatePixels();
  }
}

void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
  Blob b;
  EdgeVertex eA,eB;
  for (int n=0 ; n&lt;theBlobDetection.getBlobNb() ; n++)
  {
    b=theBlobDetection.getBlob(n);
    if (b!=null)
    {
      // Edges
      if (drawEdges)
      {
        strokeWeight(3);
        fill(0,255,0);
        stroke(255,0,0);
          beginShape();



        for (int m=0;m&lt;b.getEdgeNb();m++)
        {
          eA = b.getEdgeVertexA(m);
          eB = b.getEdgeVertexB(m);
          if (eA !=null &amp;&amp; eB !=null){
              vertex(eA.x*width, eA.y*height);
              vertex(eB.x*width, eB.y*height);
          }

        }
        endShape(CLOSE);
      }

      // Blobs
      if (drawBlobs)
      {
        strokeWeight(1);
        stroke(0,255,0);
        rect(
          b.xMin*width,b.yMin*height,
          b.w*width,b.h*height
          );
      }

    }

      }
}

// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann 
// &lt;<a href="http://incubator.quasimondo.com&gt" target="_blank" rel="nofollow">http://incubator.quasimondo.com&gt</a>;
// ==================================================
void fastblur(PImage img,int radius)
{
 if (radius&lt;1){
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;
  int vmin[] = new int[max(w,h)];
  int vmax[] = new int[max(w,h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0;i&lt;256*div;i++){
    dv[i]=(i/div);
  }

  yw=yi=0;

  for (y=0;y&lt;h;y++){
    rsum=gsum=bsum=0;
    for(i=-radius;i&lt;=radius;i++){
      p=pix[yi+min(wm,max(i,0))];
      rsum+=(p &amp; 0xff0000)&gt;&gt;16;
      gsum+=(p &amp; 0x00ff00)&gt;&gt;8;
      bsum+= p &amp; 0x0000ff;
    }
    for (x=0;x&lt;w;x++){

      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];

      if(y==0){
        vmin[x]=min(x+radius+1,wm);
        vmax[x]=max(x-radius,0);
      }
      p1=pix[yw+vmin[x]];
      p2=pix[yw+vmax[x]];

      rsum+=((p1 &amp; 0xff0000)-(p2 &amp; 0xff0000))&gt;&gt;16;
      gsum+=((p1 &amp; 0x00ff00)-(p2 &amp; 0x00ff00))&gt;&gt;8;
      bsum+= (p1 &amp; 0x0000ff)-(p2 &amp; 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  for (x=0;x&lt;w;x++){
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for(i=-radius;i&lt;=radius;i++){
      yi=max(0,yp)+x;
      rsum+=r[yi];
      gsum+=g[yi];
      bsum+=b[yi];
      yp+=w;
    }
    yi=x;
    for (y=0;y&lt;h;y++){
      pix[yi]=0xff000000 | (dv[rsum]&lt;&lt;16) | (dv[gsum]&lt;&lt;8) | dv[bsum];
      if(x==0){
        vmin[y]=min(y+radius+1,hm)*w;
        vmax[y]=max(y-radius,0)*w;
      }
      p1=x+vmin[y];
      p2=x+vmax[y];

      rsum+=r[p1]-r[p2];
      gsum+=g[p1]-g[p2];
      bsum+=b[p1]-b[p2];

      yi+=w;
    }
  }

}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Strange things happen while trying to draw an average frame using a recursive formula.</title>
      <link>https://forum.processing.org/two/discussion/24955/strange-things-happen-while-trying-to-draw-an-average-frame-using-a-recursive-formula</link>
      <pubDate>Fri, 10 Nov 2017 19:23:32 +0000</pubDate>
      <dc:creator>Rakerson</dc:creator>
      <guid isPermaLink="false">24955@/two/discussions</guid>
      <description><![CDATA[<p>Hey !
I want to draw on a screen, an average frame using formula like this: aFrame = 0.99aFrame + 0.01CurrentFrame; (CurrentFrame is an actual frame taken from cam).
Ofcourse at the beginning "aFrame" contain diffrent value but with  each iteration (new frame loaded) it its closer to CurrentFrame value. After about 10-30 (considering we are using 30 fps camera and the image is static) aFrame should be equal to CurrentFrame and we should get picture like this:
<img src="" alt="" /><img src="https://forum.processing.org/two/uploads/imageupload/299/JPY568RN7PKT.JPG" alt="test1" title="test1" />
Instead, what I get is this:
<img src="https://forum.processing.org/two/uploads/imageupload/694/PIGHD5I1BMX7.JPG" alt="test2" title="test2" /></p>

<p>Here is a simple code. What is wrong with these calculations ? Why aren't the rc,gc,bc values getting closer to current frame values ?</p>

<pre><code>import processing.video.*; 
import processing.serial.*; 

Capture video;
Serial myPort; 

void setup() 
{
  size(640, 480);
  video = new Capture(this, width, height, 30);
  video.start();
}

void captureEvent(Capture video) 
{ 
  video.read(); 
}

void draw() 
{  
  video.loadPixels();
  image(video, 0, 0);

  loadPixels();

   float rc = 0; // Initialaizing values are 0, so at the start, strange thinks may display 
   float gc = 0; 
   float bc = 0;

  for (int x = 0; x &lt; video.width; x++ ) 
  {
    for (int y = 0; y &lt; video.height; y++ ) 
    {
      int loc = x + y * video.width;

      color currentColor = video.pixels[loc]; //current values, taken from cam
      float r3 = red(currentColor);
      float g3 = green(currentColor);
      float b3 = blue(currentColor);

      rc = rc*0.99 + r3*0.01;        //with each iterations, rc,gc,bc values are closer to current frame
      gc = gc*0.99 + g3*0.01;        //so after some time, these values will be almost equal to current r3, g3, b3
      bc = bc*0.99 + b3*0.01;

      pixels[loc] = color(rc,gc,bc); //I am drawing the values on screen, after some time, I should get clear a clear picture. 
    }
  }
  updatePixels();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Modifying motion detection. How to create average frame from two frames ?</title>
      <link>https://forum.processing.org/two/discussion/24794/modifying-motion-detection-how-to-create-average-frame-from-two-frames</link>
      <pubDate>Mon, 30 Oct 2017 01:11:32 +0000</pubDate>
      <dc:creator>Rakerson</dc:creator>
      <guid isPermaLink="false">24794@/two/discussions</guid>
      <description><![CDATA[<p>Hey guys ! 
I'am trying to modify code for motion detection from Dan's tutorial:</p>

<p><span class="VideoWrap"><span class="Video YouTube" id="youtube-QLHMtE5XsMs"><span class="VideoPreview"><a href="http://youtube.com/watch?v=QLHMtE5XsMs"><img src="http://img.youtube.com/vi/QLHMtE5XsMs/0.jpg" width="640" height="385" border="0" /></a></span><span class="VideoPlayer"></span></span></span></p>

<p>In his original code, he is comparing current frame only with the previous one. What I'am trying to do is comparing current frame with the average frame of two previous frames (actually I'am taking 99% pixel value of pre-previous frame and 1% pixel value of previous frame. It should make detection more accurate). I edited "captureEvent" function so it can load second frame and added new "GetFrame" function which takes two frames as argument and returns an average frame. In theory it should work but it doesn't and I only get gray, empty window. What did I screw up ? I know code is a little bit long but I suspect mistake to be in "GetFrame" or "captureEvent" function.</p>

<p>EDIT:
I have problem placing code here using markers in editor so Iam pasting it from pastebin: 
<a href="https://pastebin.com/jkBuex0v" target="_blank" rel="nofollow">https://pastebin.com/jkBuex0v</a></p>
]]></description>
   </item>
   <item>
      <title>OpenCV kill old Faces loop</title>
      <link>https://forum.processing.org/two/discussion/24820/opencv-kill-old-faces-loop</link>
      <pubDate>Tue, 31 Oct 2017 16:40:17 +0000</pubDate>
      <dc:creator>corbinyo</dc:creator>
      <guid isPermaLink="false">24820@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,
I am using openCV and face detection to control the transparency of images. The position of the detected face on the x axis controls transparency. What I would like to do is be able to ignore the other faces that get picked up by the webcam. Is there a way to get the value of face 1 and ignore face 2, face 3, face 4 etc., but upon face 1 being killed, make face 2 = face 1, face 3 = face 2 etc., and constantly only use the data from face 1 as the controlling parameter?</p>

<p>I have looked into the OpenCV example (Example: Whichface) and I can't seem to wrangle it to what i need it for.
Here is my existing code</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;

    Capture video;
    OpenCV opencv;

    PImage ed;
    PImage genie;
    int offset = 0;

    float easing = 0.05;

    void setup() {

    size(724,960);


      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
      video.start();
      ed = loadImage("ed.jpg");
      genie = loadImage("genie.jpg");

    }




    void draw() {

      scale(1);
      opencv.loadImage(video);

      noFill();
      stroke(0, 255, 0);
      strokeWeight(3);

       Rectangle[] faces = opencv.detect();
      println(faces.length);




    for (int i = 0; i &lt; faces.length; i++) {



 tint(255, 230);  // Display at half opacity
image(genie, 0, 0);  // Display at full opacity


  int dx = (faces[i].x-genie.width/2) - offset;
  offset += dx * easing;
  tint(255, faces[i].x);  // Display at half opacity
  image(ed, 0, 0);



   }
} 





void captureEvent(Capture c) {
  c.read();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to check video camera</title>
      <link>https://forum.processing.org/two/discussion/11925/how-to-check-video-camera</link>
      <pubDate>Sat, 01 Aug 2015 04:27:26 +0000</pubDate>
      <dc:creator>Asterion</dc:creator>
      <guid isPermaLink="false">11925@/two/discussions</guid>
      <description><![CDATA[<p>I have used the basic example code:</p>

<pre><code>import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i &lt; cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an 
    // element from the array returned by list():
    cam = new Capture(this, 640, 480,5);
    cam.start();     
  }      
}

void draw() {

  image(cam, 0, 0);
}

void captureEvent(Capture c) {
  if (cam.available() == true) {
    cam.read();
  }
}
</code></pre>

<p>I get a list of cameras including my bloggie listed as:</p>

<pre><code>Available cameras:
name=MHS-FS2,size=640x480,fps=5
name=MHS-FS2,size=640x480,fps=30
name=MHS-FS2,size=320x240,fps=5
name=MHS-FS2,size=320x240,fps=30
name=MHS-FS2,size=1280x720,fps=5
name=MHS-FS2,size=1280x720,fps=30
</code></pre>

<p>I get a black screen however when the code runs.</p>

<p>The camera is working because I have run it separately using MyCam.</p>

<p>So, camera appears working as a usb webcam fine.</p>

<p>Capture.list() "finds" the camera attached on the USB port.</p>

<p>Nothing gets displayed.</p>

<p>I have tried various strings for the camera name to try to get <code>cam = new Capture(this, 640, 480,"",5)</code> to pick up a specific camera but nothing has worked so far.</p>

<p>I have tried <code>cam = new Capture(this,cameras[0])</code> but still does not result in anything other than a black dialog.</p>

<p>The camera is not obstructed nor is there a lens cap etc.</p>

<p>What might be the problem here?</p>

<p>I am using a Bloggie Duo, set up as webcam, running Processing 2.2.1 on Windows 7.</p>

<p>Thanks,</p>

<p>A</p>
]]></description>
   </item>
   <item>
      <title>How to put the first code in the second code?</title>
      <link>https://forum.processing.org/two/discussion/24558/how-to-put-the-first-code-in-the-second-code</link>
      <pubDate>Sun, 15 Oct 2017 09:47:49 +0000</pubDate>
      <dc:creator>mmmm23</dc:creator>
      <guid isPermaLink="false">24558@/two/discussions</guid>
      <description><![CDATA[<pre><code>    int n = 256;
    int minRad = 50; //minimum radius
    int maxRad = 600;//maximum radius
    float nfAng = 0.01; // angle
    float nfTime = 0.005;//at every 0.005 the shape is being developed
    int outnum;



    void setup() {
       fullScreen();

      pixelDensity(displayDensity());
      background (255);// bg color
      noFill();
      stroke(0, 15);
    }

    void draw() {

        background(255);
         } else {
        translate(width/2, height/2); //Specifies an amount to displace objects within the canvas
      beginShape(); //function that begins recording vertices for a shape
      for (int i=0; i&lt;n; i++) {
        float ang = map(i, 0, n, 0,TWO_PI); //the map() is converting the value in i which ranges from 0 to n into a number from 0 to TWO_PI(a mathematical constant)
        float rad = map(noise(i*nfAng, frameCount*nfTime), 0, 1, minRad, maxRad);
       // the radius using noise() to generate a 'random' number between minRad and maxRad so the resulting circle won't be round but will be wiggly.
        float x = rad * cos(ang);// printing on screen
        float y = rad * sin(ang);//printing on screen
        curveVertex(x, y);// Specifies vertex coordinates for curves.


      } 
      }
      endShape(CLOSE);// funtion that stops recording verticles for a shape
    }
</code></pre>

<p>SECOND CODE</p>

<pre><code>import processing.video.*;

Capture video;
PImage prevFrame;

float threshold = 50;
float totalMotion;
float avgMotion;

int a = 0, mw, mh, r = 100;
float nC = 110;
boolean addMode = false;

void setup() {
  size(640, 360);
  background(255);
  noStroke();

  mw = width/2;
  mh = height/2;

  video = new Capture(this, width, height);
  video.start();
  prevFrame = createImage(video.width, video.height, RGB);


}

void draw() {
  video.loadPixels();
  prevFrame.loadPixels();

  totalMotion = 0;

  for (int i = 0; i &lt; video.pixels.length; i ++ ) {
    color current = video.pixels[i];
    color previous = prevFrame.pixels[i];
    float r1 = red(current); 
    float g1 = green(current);
    float b1 = blue(current);
    float r2 = red(previous); 
    float g2 = green(previous);
    float b2 = blue(previous);
    float diff = dist(r1, g1, b1, r2, g2, b2);
    totalMotion += diff;
  }

  avgMotion = totalMotion / video.pixels.length; 

  fill(0,50);
  rect(0,0,width,height);
  for (int i = 1; i &lt;= nC; i++){
    fill(0,150+sin(radians(a+(360/nC)*i))*55,200+cos(a+(360/nC)*i)*55);
    ellipse(mw+sin(radians(a+(360/nC)*i))*r,mh+cos(radians(a+(360/nC)*i))*r,10*(r/100),10*(r/100));  
  }

  a++;

  if (avgMotion &gt; 35)
  {

    r += 5;
    if (addMode)
    {
      nC += .2;
    }
  } else if (r &gt; 100) {
    r -= 10;
  }


  println(avgMotion);
}

void captureEvent(Capture video) {
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  prevFrame.updatePixels();
  video.read();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Set specific pixels fully transparent in PGraphics</title>
      <link>https://forum.processing.org/two/discussion/24341/set-specific-pixels-fully-transparent-in-pgraphics</link>
      <pubDate>Mon, 02 Oct 2017 09:30:48 +0000</pubDate>
      <dc:creator>iiyeo</dc:creator>
      <guid isPermaLink="false">24341@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone, I am doing a demo about background subtraction and people extraction. I already picked the foreground pixels out of the whole pixels through comparing the different pixels between the static scene and later a people enter in this scene. I use createGraphics to store the foreground pixels and I would like to get a fully transparent background png. However, I only get a series of png files as normal camera feed frame, not the foreground extraction. I think the key section is here: 
    if (diff &gt; threshold) { 
    pixels[loc] = fgColor; 
    } else { 
    pixels[loc] = color(0, 0); }
I always get the error "NullPointerException" for the line" pg.pixels[loc] = color(0, 0); " so that I couldn't set the rest pixels as fully transparent.</p>

<p>Is there anyone have any idea about that? I have been trapped for too long but have no idea. Any help will be appreciated.</p>

<p>Here is the whole code if necessary:</p>

<pre><code>import processing.video.*;

Capture video;

PGraphics pg;

PImage backgroundImage;
float threshold = 30;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();

  backgroundImage = createImage(video.width, video.height, RGB); 
  pg = createGraphics(320, 240);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {

  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();
  //pg.noSmooth();
  pg.beginDraw();
  pg.background(0, 0);

  pg.image(video,0,0);
  for (int x = 0; x &lt; video.width; x++) {
    for (int y = 0; y &lt; video.height; y++) {
      int loc = x + y * video.width;

color fgColor = video.pixels[loc];
color bgColor = backgroundImage.pixels[loc];

float r1 = red(fgColor); float g1 = green(fgColor); float b1 = blue(fgColor);
float r2 = red(bgColor); float g2 = green(bgColor); float b2 = blue(bgColor);
float diff = dist(r1, g1, b1, r2, g2, b2);

//pg.loadPixels();


if (diff &gt; threshold) {

  pg.pixels[loc] = fgColor;
} else {

  pg.pixels[loc] = color(0, 0); //color(gray, alpha)
  // pg.clear(); clears everything in a PGraphics object to make all of the pixels 100% transparen
}
    }
  }
  noFill();

    pg.updatePixels();
    pg.endDraw();
    image(pg, 0, 0);
    pg.save("image_" + millis() + ".png");
}

void mousePressed() {
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Save frames of background subtraction capture</title>
      <link>https://forum.processing.org/two/discussion/24257/save-frames-of-background-subtraction-capture</link>
      <pubDate>Mon, 25 Sep 2017 22:21:55 +0000</pubDate>
      <dc:creator>iiyeo</dc:creator>
      <guid isPermaLink="false">24257@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone, I am doing a background subtraction capture demo recently but I met with difficulties. I have already get the pixel of silhouette extraction and I intend to draw it into a buffer through createGraphics(). I set the new background is 100% transparent so that I could only get the foreground extraction. Then I use saveFrame() function in order to get png file of each frame. However, it doesn't work as I expected.I intend to get a series of png of the silhouette extraction with 100% transparent background but now I only get the general png of frames from the camera feed.  Is there anyone could help me to see what's the problem with this code? Thanks a lot in advance. Any help will be appreciated.</p>

<pre><code>import processing.video.*;

Capture video;

PGraphics pg;

PImage backgroundImage;
float threshold = 30;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();

  backgroundImage = createImage(video.width, video.height, RGB); 
  pg = createGraphics(320, 240);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  pg.beginDraw();

  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();

  image(video, 0, 0);
  for (int x = 0; x &lt; video.width; x++) {
    for (int y = 0; y &lt; video.height; y++) {
      int loc = x + y * video.width;



color fgColor = video.pixels[loc];
color bgColor = backgroundImage.pixels[loc];

float r1 = red(fgColor); float g1 = green(fgColor); float b1 = blue(fgColor);
float r2 = red(bgColor); float g2 = green(bgColor); float b2 = blue(bgColor);
float diff = dist(r1, g1, b1, r2, g2, b2);


if (diff &gt; threshold) {
  pixels[loc] = fgColor;
} else {
  pixels[loc] = color(0, 0);
}
    }}
    pg.updatePixels();
    pg.endDraw();


    saveFrame("line-######.png");
}


void mousePressed() {
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Exporting a video of foreground pixels</title>
      <link>https://forum.processing.org/two/discussion/24204/exporting-a-video-of-foreground-pixels</link>
      <pubDate>Thu, 21 Sep 2017 14:41:44 +0000</pubDate>
      <dc:creator>iiyeo</dc:creator>
      <guid isPermaLink="false">24204@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone, I am doing a project about person extraction and background subtraction. I am going to export a video of the foreground people. Learnt from Daniel Shiffman, now I could get the pixels of foreground but I don't know how to export these pixels as a video. Or I don't need to export it immediately, but I need to do the further processing about these pixels as a video format.  Is there anyone could help me? Thanks a lot in advance. Sorry for my English if there is any mistake.</p>

<p>Here is the code:</p>

<pre><code>import processing.video.*;

Capture video;

PImage backgroundImage;
float threshold = 30;

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();

  backgroundImage = createImage(video.width, video.height, RGB); 
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();

  image(video, 0, 0);
  for (int x = 0; x &lt; video.width; x++) {
    for (int y = 0; y &lt; video.height; y++) {
      int loc = x + y * video.width;



color fgColor = video.pixels[loc];
color bgColor = backgroundImage.pixels[loc];

float r1 = red(fgColor); float g1 = green(fgColor); float b1 = blue(fgColor);
float r2 = red(bgColor); float g2 = green(bgColor); float b2 = blue(bgColor);
float diff = dist(r1, g1, b1, r2, g2, b2);

if (diff &gt; threshold) {
  pixels[loc] = fgColor;
} else {
  pixels[loc] = color(0, 0, 0);
}
    }}
    updatePixels();
}


void mousePressed() {
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Face detection - OpenCV: If clauses to act when no face is detected - HELP PLEASE!</title>
      <link>https://forum.processing.org/two/discussion/24052/face-detection-opencv-if-clauses-to-act-when-no-face-is-detected-help-please</link>
      <pubDate>Thu, 07 Sep 2017 09:52:28 +0000</pubDate>
      <dc:creator>EmRod</dc:creator>
      <guid isPermaLink="false">24052@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,</p>

<p>I'm working on a face detection code that will put a square around your face when its detected, but will display text or an image when no face is detected on screen. Ideally I'd like it to respond after a couple of second of no detection but anything is better than nothing!</p>

<p>I currently have the face detection code working. But when trying to add an if clause for null faces make an action - it doesn't work when I run the code - any help is greatly appreciated.</p>

<p>Here is the current code:</p>

<p>import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;</p>

<p>Capture cam;
OpenCV opencv;
Rectangle[] faces;</p>

<p>void setup() {
  size(640, 480, P2D);
  background (0, 0, 0);
  cam = new Capture( this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}</p>

<p>void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);
  if (faces!=null) {
    for (int i=0; i&lt; faces.length; i++) {
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces == null) {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", 100, 100);
  }
}</p>

<p>void captureEvent(Capture cam) {
  cam.read();
}</p>
]]></description>
   </item>
   <item>
      <title>Live video delay effect, where from here?</title>
      <link>https://forum.processing.org/two/discussion/23861/live-video-delay-effect-where-from-here</link>
      <pubDate>Sat, 19 Aug 2017 14:12:25 +0000</pubDate>
      <dc:creator>ekind</dc:creator>
      <guid isPermaLink="false">23861@/two/discussions</guid>
      <description><![CDATA[<p>Hi! I've just started my studies on live video manipulation in processing. Today I created this simple delay effect, which feels like a good starting points for more advanced stuff. Any ideas on how to develop this further? I've been experimenting with translation and rotation which renders pretty interesting results. I've tried some stuff with colors but no luck there yet.</p>

<p><em>press and hold any key to start sampling and effect</em></p>

<pre><code>import processing.video.*;

Capture camera;
int scale = 1;
void setup()
{
  size(640, 480, P3D);
  colorMode(HSB, 255);

  String[] cameras = Capture.list();

  camera = new Capture(this, width/scale, height/scale, cameras[0]);
  camera.start();

  // init feedback loop
  frames = new PImage[frame_count];
  for(int i = 0; i &lt; frame_count; i++)
  {
      frames[i] = createImage(0, 0, 0);
  }
}

int frame_count = 10;
PImage[] frames;
boolean feedback = false;
void draw()
{
  background(255);

  if(keyPressed)
  {
    image(frames[index], 0, 0);
    for(int i = 0; i &lt; frame_count; i++)
    {
      pushMatrix();
      pushStyle();

      tint(255, 255 / frame_count);
      //translate(0, 0, i * 80 / frame_count);
      image(frames[(index + i) % frame_count], 0, 0);

      popStyle();
      popMatrix();
    }
  }
  else
  {
    image(camera, 0, 0);
  }

  text(frameRate, 20, 20);
}


int index = 0;
void captureEvent(Capture camera)
{
  camera.read();
  PImage p = camera;

  if(keyPressed)
  {
    frames[index] = p.copy();
  }

  index += 1;
  if(index == frame_count)
  {
    index = 0;
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>I need to saveFrame() while showing a video on the sketch display screen</title>
      <link>https://forum.processing.org/two/discussion/23435/i-need-to-saveframe-while-showing-a-video-on-the-sketch-display-screen</link>
      <pubDate>Thu, 13 Jul 2017 16:26:18 +0000</pubDate>
      <dc:creator>m4rdones</dc:creator>
      <guid isPermaLink="false">23435@/two/discussions</guid>
      <description><![CDATA[<p>Hi. I have a new question about code. 
I'm trying to save frames from a web cam while a video is playing in the screen.</p>

<p>I have this code that allows me to save frames, but I don't want to save frames from the displayed video. What I'm not seeing?</p>

<p>Thanks in advance!</p>

<pre><code>import processing.video.*;

Capture cam;
Movie video;

boolean showVideo=false;
boolean saveframe=false; 

void setup() { 
  size(640, 480); 
  cam = new Capture(this, 640, 480, 30);
  video = new Movie(this, "ex.mp4");
  cam.start();
} 


void draw() {
  background(0);
  if (showVideo==false) { 
    image(cam, 0, 0);
  } else {
    image(video, 0, 0);   
  }
  if (saveframe==true) {saveFrame("output.##.jpg");
  }
}

void keyPressed() {
  showVideo=true;
  video.loop();
  saveframe=true;
}

void keyReleased() {
  showVideo=false;
  video.stop();
  saveframe=false;
}

void captureEvent(Capture webCam) {
  webCam.read();
}

void movieEvent(Movie m) {
  m.read();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How I can only move colored objects that I clicked on.</title>
      <link>https://forum.processing.org/two/discussion/22896/how-i-can-only-move-colored-objects-that-i-clicked-on</link>
      <pubDate>Sat, 03 Jun 2017 11:07:10 +0000</pubDate>
      <dc:creator>Loveiu</dc:creator>
      <guid isPermaLink="false">22896@/two/discussions</guid>
      <description><![CDATA[<p>i used import video.
How I can only move colored objects that I clicked on.
Not moving in a similar color.
When I click on an object with a specific color, I want to make a circle on the object and move the object so that the circle moves.
<a href="http://learningprocessing.com/examples/chp16/example-16-11-ColorTrack" target="_blank" rel="nofollow">http://learningprocessing.com/examples/chp16/example-16-11-ColorTrack</a>
<code>// Example 16-11: Simple color tracking</code></p>

<pre><code>import processing.video.*;

// Variable for capture device
Capture video;

// A variable for the color we are searching for.
color trackColor; 

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();
  // Start off tracking for red
  trackColor = color(255, 0, 0);
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

  // Before we begin searching, the "world record" for closest color is set to a high number that is easy for the first pixel to beat.
  float worldRecord = 500; 

  // XY coordinate of closest color
  int closestX = 0;
  int closestY = 0;

  // Begin loop to walk through every pixel
  for (int x = 0; x &lt; video.width; x ++ ) {
    for (int y = 0; y &lt; video.height; y ++ ) {
      int loc = x + y*video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      // Using euclidean distance to compare colors
      float d = dist(r1, g1, b1, r2, g2, b2); // We are using the dist( ) function to compare the current color with the color we are tracking.

      // If current color is more similar to tracked color than
      // closest color, save current location and current difference
      if (d &lt; worldRecord) {
        worldRecord = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // We only consider the color found if its color distance is less than 10. 
  // This threshold of 10 is arbitrary and you can adjust this number depending on how accurate you require the tracking to be.
  if (worldRecord &lt; 10) { 
    // Draw a circle at the tracked pixel
    fill(trackColor);
    strokeWeight(4.0);
    stroke(0);
    ellipse(closestX, closestY, 16, 16);
  }
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}
</code></pre>

<p>In the example above, clicking on a specific color will continue to move to a color similar to that color.
But I want to move only objects that have exactly the color I clicked, not similar colors.</p>

<p>I finished 'learning processing' example10 only. So I do not know those video related codes...
i'm korean but all lecture videos are in English....so i can't understand...
therefore i question here....</p>
]]></description>
   </item>
   </channel>
</rss>