<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
   <title>Tagged with detect() - Processing 2.x and 3.x Forum</title>
   <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=detect%28%29</link>
   <pubDate>Sun, 08 Aug 2021 19:07:33 +0000</pubDate>
   <description>Tagged with detect() - Processing 2.x and 3.x Forum</description>
   <language>en-CA</language>
   <atom:link href="https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=detect%28%29" rel="self" type="application/rss+xml" />
   <item>
      <title>Can't get image from webcam</title>
      <link>https://forum.processing.org/two/discussion/26817/cant-get-image-from-webcam</link>
      <pubDate>Tue, 13 Mar 2018 21:41:57 +0000</pubDate>
      <dc:creator>Pjlons83</dc:creator>
      <guid isPermaLink="false">26817@/two/discussions</guid>
      <description><![CDATA[<p>Hi,</p>

<p>I have my first processing sketch running with no errors but I cannot get an image from the webcam.</p>

<pre><code>import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

// contrast/brightness values
int contrast_value    = 0;
int brightness_value  = 0;



void setup() {

    size( 320, 240 );

    opencv = new OpenCV( this );
    opencv.capture( width, height );                   // open video stream
    opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&gt; front face detection : "haarcascade_frontalface_alt.xml"


    // print usage
    println( "Drag mouse on X-axis inside this sketch window to change contrast" );
    println( "Drag mouse on Y-axis inside this sketch window to change brightness" );

}


public void stop() {
    opencv.stop();
    super.stop();
}



void draw() {

    // grab a new frame
    // and convert to gray
    opencv.read();
    opencv.convert( GRAY );
    opencv.contrast( contrast_value );
    opencv.brightness( brightness_value );

    // proceed detection
    Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

    // display the image
    image( opencv.image(), 0, 0 );

    // draw face area(s)
    noFill();
    stroke(255,0,0);
    for( int i=0; i&lt;faces.length; i++ ) {
        rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); 
    }
}



/**
 * Changes contrast/brightness values
 */
void mouseDragged() {
    contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
    brightness_value = (int) map( mouseY, 0, height, -128, 128 );
}
</code></pre>

<p>The program is supposed to display an image, detect a face and draw a rectangle around the face. I am using Windows 7 with a USB webcam. I have tried a couple of versions of Processing and both behave the same.</p>

<p>Any ideas of where to look first?</p>

<p>Thanks
Paul</p>
]]></description>
   </item>
   <item>
      <title>Face recognition - to visualise hidden background</title>
      <link>https://forum.processing.org/two/discussion/25830/face-recognition-to-visualise-hidden-background</link>
      <pubDate>Fri, 05 Jan 2018 10:36:45 +0000</pubDate>
      <dc:creator>ViciousR</dc:creator>
      <guid isPermaLink="false">25830@/two/discussions</guid>
      <description><![CDATA[<p>Hi everyone,</p>

<p>Thank you for looking at my post and I hope you can help me. I have a university project where I need a hidden background image that is then revealed using face recognition (depending on the location of your face). Here is how far I've got.</p>

<p>I can't seem to make the background disappear. Thanks in advance.</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.*;

PImage photo;
int pixelcount;
color pixelcolor;

Capture firstcam;
OpenCV opencv;

void setup() {
  fullScreen();

  firstcam=new Capture(this, 1600, 900);
  opencv=new OpenCV(this, 1600, 900);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  firstcam.start();
  frameRate(24);
  noFill();

  colorMode(HSB, 360, 100, 100);
  strokeWeight(1);
  smooth();
  photo = loadImage("photo.jpg");
}

void draw() {
  opencv.loadImage(firstcam);
  Rectangle[] faces = opencv.detect();
  image(photo, 0, 0, 1600, 900);
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);

  for (int i = 0 ; i &lt; faces.length ; i++) {
    ellipse(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    stroke(255,0,0);
    lights();
    smooth();
    strokeWeight(1.5);
  }

  popMatrix();
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
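One way to reveal only the area around the detected face is to stop drawing the whole photo every frame, and instead copy just the patch of the hidden photo under each face (e.g. with Processing's copy()). A small helper to keep that patch inside the image bounds, sketched in plain Java (the Reveal class and the pad parameter are illustrative names, not part of the OpenCV library):

```java
import java.awt.Rectangle;

// Clamp a detected face rectangle (optionally padded) to the photo bounds,
// so the revealed region never reaches outside the hidden image.
class Reveal {
  static Rectangle clampToImage(Rectangle face, int pad, int imgW, int imgH) {
    int x = Math.max(0, face.x - pad);
    int y = Math.max(0, face.y - pad);
    int w = Math.min(imgW - x, face.width + 2 * pad);
    int h = Math.min(imgH - y, face.height + 2 * pad);
    return new Rectangle(x, y, w, h);
  }
}
```

In the sketch, the draw loop would then blit only that rectangle of photo over a black (or camera) background, rather than calling image(photo, 0, 0, 1600, 900) unconditionally.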
]]></description>
   </item>
   <item>
      <title>Get biggest face from openCV</title>
      <link>https://forum.processing.org/two/discussion/25492/get-biggest-face-from-opencv</link>
      <pubDate>Sun, 10 Dec 2017 11:05:56 +0000</pubDate>
      <dc:creator>martinusbar</dc:creator>
      <guid isPermaLink="false">25492@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>This is my first post on this forum, and I'm in need of your help. I'm working on a project: an android with a set of animatronic eyes and a face-tracking feature. I'm using the OpenCV library and Processing 3.3.6 for the tracking, and an Arduino to control the eyes. At the moment I've created a script where the eyes follow only ONE face, but sometimes the eyes 'jump' when a new face enters the webcam view. I would like to avoid this, so my reasoning was to always take the widest of the detected faces and send its x and y coordinates to the Arduino. I found similar forum questions about 'Get the largest element from an array', and although I understand the logic, my sketch keeps outputting all sets of detected x and y coordinates. As a supplementary note: I work heuristically with code and have very basic knowledge of programming languages. Any push in the right direction is highly appreciated. Below is the Processing code:</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    import processing.serial.*;

    Capture video;
    OpenCV opencv;
    Serial myPort;  // Create object from Serial class

    int newXpos, newYpos;
    //These variables hold the x and y location for the middle of the detected face
    int midFaceX = 0;
    int midFaceY = 0;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

     //println(Serial.list()); // List COM-ports (Use this to figure out which port the Arduino is connected to)
      String portName = Serial.list()[1];
      //select first com-port from the list (change the number in the [] if your sketch fails to connect to the Arduino)
      myPort = new Serial(this, portName, 19200);   //Baud rate is set to 19200 to match the Arduino baud rate.

      video.start();
    }


    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0, 40);
      strokeWeight(3);
      Rectangle[] faces = opencv.detect();

      int maxValueFace = 0;
      int maxIndex = -1;

      for (int i = 0; i &lt; faces.length; i++ ) {

        if (faces[i].width &gt; maxValueFace) {
          maxIndex = i;
          maxValueFace = faces[i].width;
          //println(maxValueFace);

        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); //
        midFaceX = faces[i].x + (faces[i].width/2); // middle of the face
        midFaceY = faces[i].y + (faces[i].height/2); // middle of the face
        float xpos = map(midFaceX, 0, width, 90, 120); //maps range of servos L-&gt;R
        float ypos = map(midFaceY, 0, height, 90, 120); //maps range of servos U-&gt;D
        int newXpos = (int)xpos; //converts position X float into integer
        int newYpos = (int)ypos; //converts position Y float into integer
        myPort.write(newXpos+"x"); // send X coordinate to Arduino
        myPort.write(newYpos+"y"); // send Y coordinate to Arduino
        println(midFaceX + "," + midFaceY);
        }
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }
</code></pre>
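In the loop above, the drawing and serial writes sit inside the if that updates the maximum, so coordinates are sent for every face that sets a new running maximum, not just the final winner. Selecting the widest face first and only then acting on it fixes that; the selection step in plain Java (FacePick is an illustrative name):

```java
import java.awt.Rectangle;

// Return the widest rectangle, or null for an empty array.
// Acting on the single returned face avoids sending every candidate.
class FacePick {
  static Rectangle biggest(Rectangle[] faces) {
    Rectangle best = null;
    for (Rectangle f : faces) {
      if (best == null || f.width > best.width) {
        best = f;
      }
    }
    return best;
  }
}
```

In the sketch, the rect(), map() and myPort.write() calls would then move after the loop and operate only on the one rectangle this returns (skipping the frame when it is null).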
]]></description>
   </item>
   <item>
      <title>How to trigger an action with face detection?</title>
      <link>https://forum.processing.org/two/discussion/25482/how-to-trigger-an-action-with-face-detection</link>
      <pubDate>Sat, 09 Dec 2017 23:44:10 +0000</pubDate>
      <dc:creator>arnolds112</dc:creator>
      <guid isPermaLink="false">25482@/two/discussions</guid>
      <description><![CDATA[<p>Hello,
Is it possible to use face detection to trigger a video to play?
If the webcam detects a face, the video plays.
I'm having trouble achieving this.</p>
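Yes: opencv.detect() returns a Rectangle[] every frame, so the sketch can watch for the moment a face first appears and call play() (or loop()) on the Movie exactly then, instead of restarting it on every frame a face is visible. The edge-detection logic, sketched in plain Java (FaceTrigger is an illustrative name):

```java
// Start playback only on the transition from "no face" to "face present",
// so the video is not restarted while a face stays in view.
class FaceTrigger {
  boolean wasPresent = false;

  // Returns true exactly once per appearance of a face.
  boolean shouldStart(int faceCount) {
    boolean present = faceCount > 0;
    boolean rising = present;
    if (wasPresent) rising = false;
    wasPresent = present;
    return rising;
  }
}
```

In draw(), something like `if (trigger.shouldStart(faces.length)) myMovie.play();` would fire once per face appearance.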
]]></description>
   </item>
   <item>
      <title>Interactive image using Face detection (OpenCV)</title>
      <link>https://forum.processing.org/two/discussion/25470/interactive-image-using-face-detection-opencv</link>
      <pubDate>Sat, 09 Dec 2017 11:05:54 +0000</pubDate>
      <dc:creator>Pharaonn</dc:creator>
      <guid isPermaLink="false">25470@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>There is a program I would like to make with a face detector using OpenCV. Basically you would have an image A (it could be a video) that slowly morphs into an image B when a face is close enough to the camera. 
So far, I've got the program to change the images depending on whether or not a face is recognized, but now I would like to: 
1) tell the face detector to detect the face only when it's around 3 feet (1 meter) away, so the image changes only at that moment; 
2) make the image change really smooth and progressive (maybe by merging the two images using opacity, or something similar?).</p>

<p>I am new to Processing and even more so to OpenCV, which is why I would be so glad if someone has the solution or can help me!</p>

<p>I've quoted my code in case it helps… The images can be replaced (by the way, they are scaled up when the program runs and I don't understand why… that's another problem, but not the most important one).</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

PImage image;
PImage flou;
PImage nette;
Capture cam;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  fullScreen();
  background(0, 0, 0);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  image = loadImage("image.jpg");
  nette = loadImage("nette.jpg");
  flou = loadImage("flou.jpg");
}

void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);

  if (faces != null) {
    for (int i = 0; i &lt; faces.length; i++) {
      image(nette, 0, 0);
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces.length &lt;= 0) {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    println("no faces");
    image(flou, 0, 0);
    text("UNDETECTED", 200, 100);
  }
}

void captureEvent(Capture cam) {
  cam.read();
}
</code></pre>
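A Haar cascade has no depth information, but the apparent width of the detected rectangle grows as the face approaches the camera, so a width threshold is a workable stand-in for "about 1 meter away". The same value can also drive the smooth transition: draw flou first, then draw nette with tint(255, alpha) where alpha rises from 0 to 255 as the face gets closer. A sketch of that mapping in plain Java (FaceFade and the parameter names are illustrative, and the two width thresholds need calibrating against your own camera):

```java
// Map the detected face width to a 0..255 opacity for the second image.
// farWidth: face width measured at the "too far" distance (alpha 0).
// nearWidth: face width measured at the trigger distance (alpha 255).
class FaceFade {
  static int alphaForWidth(int faceWidth, int farWidth, int nearWidth) {
    if (farWidth >= faceWidth) return 0;        // too far away
    if (faceWidth >= nearWidth) return 255;     // close enough, fully faded
    return 255 * (faceWidth - farWidth) / (nearWidth - farWidth);
  }
}
```

In draw() this would become something like `tint(255, FaceFade.alphaForWidth(faces[i].width, 100, 200)); image(nette, 0, 0);` with 100 and 200 replaced by measured values.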

<p>Thank you so much !!</p>
]]></description>
   </item>
   <item>
      <title>Optimizing OpenCV + video capture sketch on RasPi</title>
      <link>https://forum.processing.org/two/discussion/25328/optimizing-opencv-video-capture-sketch-on-raspi</link>
      <pubDate>Sat, 02 Dec 2017 03:54:26 +0000</pubDate>
      <dc:creator>dryd3418</dc:creator>
      <guid isPermaLink="false">25328@/two/discussions</guid>
      <description><![CDATA[<p>Hi,</p>

<p>First time poster. Sorry, it's a long one.</p>

<p>I am attempting to port a sketch that I originally built on my Mac (1.6 GHz Core i5, 8 GB RAM) over to a RasPi 2, and I am experiencing an unexpectedly <em>dramatic</em> loss in video performance. I'm looking for expectations, opinions, and any advice to get this thing working smoothly on a RasPi 2. I get that the RasPi is obviously a much, much less powerful computer than my laptop, but gohai's SimpleCapture example worked so smoothly that I hoped an OpenCV layer on top would also run smoothly. I tinkered with allocating more GPU memory and overclocking the Pi, but without any noticeable improvement.</p>

<p>The original script tracks faces and <a rel="nofollow" href="https://dylanmryder.files.wordpress.com/2017/06/clip2.gif?w=700">animates eyes to "watch" the viewer via a webcam</a>. My original code is here, using Video and OpenCV libraries.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

float mouthLength = 50;
float mouthX = 120;
float mouthY = 175;
float leftPupilX;
float leftPupilY;
float rightPupilX;
float rightPupilY;
int radius = 40;   // Radius of white eyeball ellipse
float pupilSize = 20;

PVector leftEye = new PVector(100, 100);
PVector rightEye = new PVector(200, 100);

int x, y = 120;
float easing = 0.2;
int scaleFactor = 3;

int counter;

void setup() {
  size(960, 720);
  smooth();

  video = new Capture(this, 960/scaleFactor, 720/scaleFactor);
  opencv = new OpenCV(this, 960/scaleFactor, 720/scaleFactor);  
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  //opencv.loadCascade(OpenCV.CASCADE_PROFILEFACE);
  //opencv.loadCascade(OpenCV.CASCADE_EYE); 
  video.start();
  frameRate(24);
}

void draw() {
  background(255, 255, 0); // Yellow
  scale(scaleFactor);

  opencv.loadImage(video);
  opencv.flip(OpenCV.HORIZONTAL); // flip horizontally
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  strokeWeight(3);

  leftPupilX = leftPupilX + (100 - leftPupilX) * easing;
  rightPupilX = rightPupilX + (200 - rightPupilX) * easing;
  leftPupilY = rightPupilY = leftPupilY + (100 - leftPupilY) * easing;


  for (int i = 0; i &lt; faces.length; i++) {
    //println(faces[i].x + "," + faces[i].y);
    noFill();
    stroke(0, 255, 0); // face detection rectangle color
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);

    if (faces[i].x &lt; 80 ) {
      leftPupilX = (leftPupilX + (faces[i].x - leftPupilX) * easing);// + (faces[i].width * 0.2);
      rightPupilX = leftPupilX + 100;
    } 

    if ( faces[i].x &gt; 175) {
      rightPupilX = rightPupilX + (faces[i].x - rightPupilX) * easing;// + (faces[i].width * 0.2);
      leftPupilX = rightPupilX - 100;
    } 

    if ( (faces[i].y &gt; 120) || (faces[i].y &lt; 30) ) {
      leftPupilY = leftPupilY + (faces[i].y - leftPupilY) * easing;
      rightPupilY = rightPupilY + (faces[i].y - rightPupilY) * easing;
    }
  }

  // Mouth
  noFill();
  stroke(0);
  line(mouthX, mouthY, mouthX + mouthLength, mouthY);
  arc(mouthX-15, mouthY, 30, 30, radians(-30), radians(30));  // left cheek
  arc(mouthX+65, mouthY, 30, 30, radians(145), radians(205));  // right cheek

  // Eyes
  fill(255); // white
  ellipse(leftEye.x, leftEye.y, radius+25, radius + 25); //  left eyeball ellipse
  ellipse(rightEye.x, rightEye.y, radius+25, radius + 25); //  left eyeball ellipse

  PVector leftPupil = new PVector(leftPupilX, leftPupilY);
  if (dist(leftPupil.x, leftPupil.y, leftEye.x, leftEye.y) &gt; radius/2) {
    leftPupil.sub(leftEye);
    leftPupil.normalize();
    leftPupil.mult(radius/2);
    leftPupil.add(leftEye);
  }

  PVector rightPupil = new PVector(rightPupilX, rightPupilY);
  if (dist(rightPupil.x, rightPupil.y, rightEye.x, rightEye.y) &gt; radius/2) {
    rightPupil.sub(rightEye);
    rightPupil.normalize();
    rightPupil.mult(radius/2);
    rightPupil.add(rightEye);
  }

  // Actually draw the pupils
  noStroke();
  fill(0); // black pupil color
  ellipse(leftPupil.x, leftPupil.y, pupilSize, pupilSize); // new left pupil
  ellipse(rightPupil.x, rightPupil.y, pupilSize, pupilSize); // new right pupil

  counter ++;
  println(counter);
  if (counter &gt; 195) {
    counter = 0;
  }
  if (counter &gt;= 190 &amp;&amp; counter &lt; 195) {
    blink();
  }
}

void captureEvent(Capture c) {
  c.read();
}

void blink() {
  fill(255, 255, 0); // Yellow
  stroke(255, 255, 0);
  ellipse(leftEye.x, leftEye.y, radius+26, radius + 26); //  left eyeball ellipse
  ellipse(rightEye.x, rightEye.y, radius+26, radius + 26);
  stroke(0);
  noFill();
  line(67, leftEye.y, 133, leftEye.y);
  translate(100, 0);
  line(67, leftEye.y, 133, leftEye.y);
}
</code></pre>

<p>This version I modified for the RasPi2 with gohai's GLVideo library. I can get the sketch to run, but the tracking AND/OR responsive animation are incredibly slow. Unfortunately, to the point that it ruins the interactive nature of the work.</p>

<pre><code>import gab.opencv.*;
import gohai.glvideo.*;
import java.awt.*;

GLCapture video;
OpenCV opencv;

float mouthLength = 50;
float mouthX = 120;
float mouthY = 175;
float leftPupilX;
float leftPupilY;
float rightPupilX;
float rightPupilY;
int radius = 40;   // Radius of white eyeball ellipse
float pupilSize = 20;

PVector leftEye = new PVector(100, 100);
PVector rightEye = new PVector(200, 100);

int x, y = 120;
float easing = 0.2;
int scaleFactor = 3;

int counter;

void setup() {
  size(960, 720, P2D);
  smooth();

  String[] devices = GLCapture.list();
  println("Devices:");
  printArray(devices);
  if (0 &lt; devices.length) {
    String[] configs = GLCapture.configs(devices[0]);
    println("Configs:");
    printArray(configs);
  }

  video = new GLCapture(this, devices[0], 960/scaleFactor, 720/scaleFactor);
  opencv = new OpenCV(this, 960/scaleFactor, 720/scaleFactor);  
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  //opencv.loadCascade(OpenCV.CASCADE_PROFILEFACE);
  //opencv.loadCascade(OpenCV.CASCADE_EYE); 
  video.start();
  frameRate(24);
}

void draw() {
  background(255, 255, 0); // Yellow
  scale(scaleFactor);

  if (video.available()) {
    video.read();
    opencv.loadImage(video);
    opencv.flip(OpenCV.HORIZONTAL); // flip horizontally
    Rectangle[] faces = opencv.detect();
    //println(faces.length);

    strokeWeight(3);

    leftPupilX = leftPupilX + (100 - leftPupilX) * easing;
    rightPupilX = rightPupilX + (200 - rightPupilX) * easing;
    leftPupilY = rightPupilY = leftPupilY + (100 - leftPupilY) * easing;


    for (int i = 0; i &lt; faces.length; i++) {
      //println(faces[i].x + "," + faces[i].y);
      //noFill();
      //stroke(0, 255, 0); // face detection rectangle color
      //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);

      if (faces[i].x &lt; 80 ) {
        leftPupilX = (leftPupilX + (faces[i].x - leftPupilX) * easing);// + (faces[i].width * 0.2);
        rightPupilX = leftPupilX + 100;
      } 

      if ( faces[i].x &gt; 175) {
        rightPupilX = rightPupilX + (faces[i].x - rightPupilX) * easing;// + (faces[i].width * 0.2);
        leftPupilX = rightPupilX - 100;
      } 

      if ( (faces[i].y &gt; 120) || (faces[i].y &lt; 30) ) {
        leftPupilY = leftPupilY + (faces[i].y - leftPupilY) * easing;
        rightPupilY = rightPupilY + (faces[i].y - rightPupilY) * easing;
      }
    }

    // Mouth
    noFill();
    stroke(0);
    line(mouthX, mouthY, mouthX + mouthLength, mouthY);
    arc(mouthX-15, mouthY, 30, 30, radians(-30), radians(30));  // left cheek
    arc(mouthX+65, mouthY, 30, 30, radians(145), radians(205));  // right cheek

    // Eyes
    fill(255); // white
    ellipse(leftEye.x, leftEye.y, radius+25, radius + 25); //  left eyeball ellipse
    ellipse(rightEye.x, rightEye.y, radius+25, radius + 25); //  left eyeball ellipse

    PVector leftPupil = new PVector(leftPupilX, leftPupilY);
    if (dist(leftPupil.x, leftPupil.y, leftEye.x, leftEye.y) &gt; radius/2) {
      leftPupil.sub(leftEye);
      leftPupil.normalize();
      leftPupil.mult(radius/2);
      leftPupil.add(leftEye);
    }

    PVector rightPupil = new PVector(rightPupilX, rightPupilY);
    if (dist(rightPupil.x, rightPupil.y, rightEye.x, rightEye.y) &gt; radius/2) {
      rightPupil.sub(rightEye);
      rightPupil.normalize();
      rightPupil.mult(radius/2);
      rightPupil.add(rightEye);
    }

    // Actually draw the pupils
    noStroke();
    fill(0); // black pupil color
    ellipse(leftPupil.x, leftPupil.y, pupilSize, pupilSize); // new left pupil
    ellipse(rightPupil.x, rightPupil.y, pupilSize, pupilSize); // new right pupil

    counter ++;
    println(counter);
    if (counter &gt; 195) {
      counter = 0;
    }
    if (counter &gt;= 190 &amp;&amp; counter &lt; 195) {
      blink();
    }
  }
}

void captureEvent(GLCapture c) {
  c.read();
}

void blink() {
  fill(255, 255, 0); // Yellow
  stroke(255, 255, 0);
  ellipse(leftEye.x, leftEye.y, radius+26, radius + 26); //  left eyeball ellipse
  ellipse(rightEye.x, rightEye.y, radius+26, radius + 26);
  stroke(0);
  noFill();
  line(67, leftEye.y, 133, leftEye.y);
  translate(100, 0);
  line(67, leftEye.y, 133, leftEye.y);
}
</code></pre>

<p>The animation is super smooth on my Mac, but achingly slow and jerky on my RasPi2.</p>

<p>Any advice is greatly appreciated. Is the sketch just too much for a RasPi2 to run smoothly? Is my code just too inefficient? Is OpenCV an issue here? Ultimately, I want to make this a standalone gallery installation with a monitor and RasPi subtly attached, so I don't have to run it off an expensive laptop left alone in a gallery space.</p>
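On a Pi 2 the cascade itself, rather than the drawing, is usually the bottleneck: the sketch already scales the input down 3x, but it still runs detect() up to 24 times a second. Since the pupils ease toward their targets anyway, one common mitigation is to run detect() only every few frames and reuse the previous Rectangle[] in between; the eyes keep animating at full rate while detection cost drops proportionally. The gating logic, sketched in plain Java (DetectEveryN is an illustrative name):

```java
// Run the expensive detection only every N calls and reuse the last
// result in between; rendering stays at full frame rate.
class DetectEveryN {
  int interval;
  int frame = 0;

  DetectEveryN(int interval) {
    this.interval = interval;
  }

  boolean shouldDetect() {
    boolean run = (frame % interval == 0);
    frame++;
    return run;
  }
}
```

In draw() the detection block would become `if (gate.shouldDetect()) { opencv.loadImage(video); faces = opencv.detect(); }` with faces promoted to a field.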
]]></description>
   </item>
   <item>
      <title>Why can't I use other cascades in OpenCV library</title>
      <link>https://forum.processing.org/two/discussion/25021/why-can-t-i-use-other-cascades-in-opencv-library</link>
      <pubDate>Wed, 15 Nov 2017 08:44:29 +0000</pubDate>
      <dc:creator>blaintom</dc:creator>
      <guid isPermaLink="false">25021@/two/discussions</guid>
      <description><![CDATA[<p>I try to use camera and opencv library to detect my fist.</p>

<p>I can use the cascade file in example code , but when i tried to use other code on internet , it doesn't work.</p>

<p>for example : <a href="https://github.com/Aravindlivewire/Opencv/tree/master/haarcascade" target="_blank" rel="nofollow">https://github.com/Aravindlivewire/Opencv/tree/master/haarcascade</a></p>

<p>I downloaded another aGest.xml from different place, it works well.</p>

<p>The code looks like the same as the code in the link above. but  cascade file in Aravindlivewire's github can not be applied to my program.</p>

<p>Here is my code :</p>

<pre><code>import gab.opencv.*;
import java.awt.Rectangle;
import KinectPV2.*;

KinectPV2 kinect;
OpenCV opencv;
Rectangle[] faces;
PImage apples;

void setup() {
  opencv = new OpenCV(this, 480, 270);
  size(960, 540, P3D);
  kinect = new KinectPV2(this);
  kinect.enableColorImg(true);
  kinect.init();
  //opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  opencv.loadCascade("aGest.xml");
  apples = new PImage(480, 270, RGB);
}

void draw() {
  background(0);
  PImage img = kinect.getColorImage();
  apples.copy(img, 0, 0, 1920, 1080, 0, 0, 480, 270);
  opencv.loadImage(apples);
  opencv.useColor();
  image(opencv.getSnapshot(), 0, 0);
  faces = opencv.detect();
  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    println(faces[i].x + " " + faces[i].y);
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Processing+OpenCV - FaceDetection on a Movie</title>
      <link>https://forum.processing.org/two/discussion/24824/processing-opencv-facedetection-on-a-movie</link>
      <pubDate>Tue, 31 Oct 2017 20:41:47 +0000</pubDate>
      <dc:creator>Jawah</dc:creator>
      <guid isPermaLink="false">24824@/two/discussions</guid>
      <description><![CDATA[<p>I am using Processing with the OpenCV library and wanted to rewrite the example code from the creator's GitHub so that, instead of doing face detection on a camera capture, I load a video (.mp4).</p>

<p>Link to the Git and the example Code (which is working): <a rel="nofollow" href="https://github.com/atduskgreg/opencv-processing/blob/master/examples/LiveCamTest/LiveCamTest.pde">Link</a></p>

<p>Here is my Sketch:</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
Movie myMovie;
Rectangle[] faces;

void setup() {
  size(480, 270);

  myMovie = new Movie(this, "people3.mp4");
  myMovie.loop();
  opencv = new OpenCV(this, myMovie.width, myMovie.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void movieEvent(Movie myMovie) {
  myMovie.read();
}

void draw() {

  background(0);
  if (myMovie.available()) {    

    opencv.loadImage(myMovie);
    faces = opencv.detect();
    image(myMovie, 0, 0);

    if (faces != null) {
      for (int i = 0; i &lt; faces.length; i++) {
        strokeWeight(2);
        stroke(255, 0, 0);
        noFill();
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }
  }
}
</code></pre>

<p>What I'm getting is an
<strong>IndexOutOfBoundsException: Index: 3, Size: 0</strong>
at opencv.loadImage(myMovie), and I don't know why.</p>
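A likely cause: myMovie.width and myMovie.height are still 0 in setup(), because a Movie only reports its real size after the first frame has been read, so the OpenCV buffer gets created at 0x0 and loadImage() fails once real frames arrive. Either pass the known size (480, 270) directly to the OpenCV constructor, or defer creating the detector until the movie reports a real size. The deferred-creation pattern, sketched in plain Java (DeferredInit is an illustrative name):

```java
// Size-dependent objects should be constructed only once the media source
// reports positive dimensions, and only once.
class DeferredInit {
  boolean initialized = false;

  // Returns true exactly once, the first time both dimensions are positive.
  boolean shouldInit(int w, int h) {
    if (initialized) return false;
    if (w > 0) {
      if (h > 0) {
        initialized = true;
        return true;
      }
    }
    return false;
  }
}
```

In draw() this would look like `if (init.shouldInit(myMovie.width, myMovie.height)) opencv = new OpenCV(this, myMovie.width, myMovie.height);` before any call to opencv.loadImage().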

<p>Appreciating any help! :-)</p>
]]></description>
   </item>
   <item>
      <title>OpenCV kill old Faces loop</title>
      <link>https://forum.processing.org/two/discussion/24820/opencv-kill-old-faces-loop</link>
      <pubDate>Tue, 31 Oct 2017 16:40:17 +0000</pubDate>
      <dc:creator>corbinyo</dc:creator>
      <guid isPermaLink="false">24820@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,
I am using OpenCV and face detection to control the transparency of images: the position of the detected face on the x axis controls transparency. What I would like to do is ignore the other faces that get picked up by the webcam. Is there a way to use the value of face 1 and ignore face 2, face 3, face 4, etc., but upon face 1 disappearing, make face 2 become face 1, face 3 become face 2, and so on, so that only face 1's data is ever the controlling parameter?</p>

<p>I have looked into the OpenCV examples (WhichFace) and I can't seem to wrangle them into what I need.
Here is my existing code:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

PImage ed;
PImage genie;
int offset = 0;

float easing = 0.05;

void setup() {
  size(724, 960);

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
  ed = loadImage("ed.jpg");
  genie = loadImage("genie.jpg");
}

void draw() {
  scale(1);
  opencv.loadImage(video);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);

  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    tint(255, 230);      // display at partial opacity
    image(genie, 0, 0);

    int dx = (faces[i].x - genie.width/2) - offset;
    offset += dx * easing;
    tint(255, faces[i].x);  // face x position controls opacity
    image(ed, 0, 0);
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
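Rather than trusting the array order (detect() does not guarantee that faces[0] is the same person from frame to frame), one approach is to remember the tracked face's last centre and, each frame, follow the detection nearest to it. When that face leaves, the tracker naturally falls over to whichever remaining face is now closest, which gives the "face 2 becomes face 1" behaviour. The selection step in plain Java (FaceFollow is an illustrative name):

```java
import java.awt.Rectangle;

// Pick the detection whose centre is nearest the previously tracked
// position, so one physical face keeps control while others come and go.
class FaceFollow {
  static Rectangle nearest(Rectangle[] faces, int lastX, int lastY) {
    Rectangle best = null;
    long bestD2 = Long.MAX_VALUE;
    for (Rectangle f : faces) {
      long dx = f.x + f.width / 2 - lastX;
      long dy = f.y + f.height / 2 - lastY;
      long d2 = dx * dx + dy * dy;
      if (bestD2 > d2) {
        bestD2 = d2;
        best = f;
      }
    }
    return best;  // null when no faces are detected
  }
}
```

The sketch would keep lastX/lastY as fields, call this once per frame, and drive the tint from the single returned rectangle instead of looping over all faces.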
]]></description>
   </item>
   <item>
      <title>How to improve OpenCV performance on ARM?</title>
      <link>https://forum.processing.org/two/discussion/24529/how-to-improve-opencv-performance-on-arm</link>
      <pubDate>Fri, 13 Oct 2017 16:00:05 +0000</pubDate>
      <dc:creator>Isaac96</dc:creator>
      <guid isPermaLink="false">24529@/two/discussions</guid>
      <description><![CDATA[<p>Hi guys</p>

<p>I am making a face-tracking Nerf blaster using OpenCV on the Raspberry Pi. I am using a Microsoft LifeCam webcam for capture input and the SoftwareServo class for blaster control. However, my code runs at 1-2 FPS on the Pi (Pi 3 model B). I am currently using scale to improve performance, but the code still runs at 1 FPS. Additionally, the servos are extremely jittery. I am powering the servos using a 2A 5V regulator. The Pi is powered off a 2A USB supply. The grounds <em>are</em> connected. Does anyone know how to improve performance? Maybe a different CV library?</p>

<p>Thanks for your input!
Code:</p>

<pre><code>import processing.io.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*; 

PImage img;
Rectangle[] faceRect; 

Capture cam;
OpenCV opencv; 
SoftwareServo panServo;
SoftwareServo trigServo;

int widthCapture=320; 
int heightCapture=240;
int fpsCapture=30; 
int panpos=90;
int firePos = 80;
int readyPos = 0;
long time;
int wait = 500;

int targetCenterX;
int targetCenterY;

int threshold = 20;
int thresholdLeft;
int thresholdRight;
int moveIncrement = 2;


int circleExpand = 20;
int circleWidth = 3;

boolean isFiring = false;
boolean isFound = false;
boolean manual = false;

void setup()
{ 
  size (320, 240); 
  frameRate(fpsCapture); 
  background(0);
  panServo = new SoftwareServo(this);
  trigServo = new SoftwareServo(this);
  panServo.attach(17);
  trigServo.attach(4);

  cam = new Capture(this, widthCapture, heightCapture);
  cam.start(); 

  opencv = new OpenCV(this, widthCapture, heightCapture); 
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void  draw() 
{
  if (millis() - time &gt;= wait)
  {
    trigServo.write(readyPos);
    isFiring = false;
  }
  if (isFiring) 
  {
    trigServo.write(firePos);
    tint(255, 0, 0);
  } else
  {
    trigServo.write(readyPos);
    noTint();
  }
  if (cam.available() == true) 
  { 
    cam.read();  
    img = cam.get(); 

    opencv.loadImage(img);

    image(img, 0, 0);
    blend(img, 0, 0, widthCapture, heightCapture, 0, 0, widthCapture, heightCapture, HARD_LIGHT);
    faceRect = opencv.detect();
  }

  stroke(255, 255, 255);
  strokeWeight(1);
  thresholdLeft = (widthCapture/2)-threshold;
  thresholdRight =  (widthCapture/2)+threshold;

  stroke(255, 255, 255, 128);
  strokeWeight(1);
  line(thresholdLeft, 0, thresholdLeft, heightCapture); //left line
  line(thresholdRight, 0, thresholdRight, heightCapture); //right line

  if ((faceRect != null) &amp;&amp; (faceRect.length != 0))
  {
    isFound = true;
    //Get center point of identified target
    targetCenterX = faceRect[0].x + (faceRect[0].width/2);
    targetCenterY = faceRect[0].y + (faceRect[0].height/2);    

    //Draw circle around face
    noFill();
    strokeWeight(circleWidth);
    stroke(255, 255, 255);
    ellipse(targetCenterX, targetCenterY, faceRect[0].width+circleExpand, faceRect[0].height+circleExpand);
    if (!manual) {
      //Handle rotation
      if (targetCenterX &lt; thresholdLeft)
      {
        panpos -=  moveIncrement;
        //delay(70);
      }
      if (targetCenterX &gt; thresholdRight)
      {
        panpos+=  moveIncrement;
        //delay(70);
      }

      //Fire
      if ((targetCenterX &gt;= thresholdLeft) &amp;&amp; (targetCenterX &lt;= thresholdRight))
      {
        isFiring = true;
        println("Gotem");
        noFill();
      }
    }
  }
}
void keyPressed() {
  if (key == 'm') {
    manual = !manual;
    println("manual mode toggled");
    isFiring = false;
  } else if (key == 'a' &amp;&amp; manual) {
    panpos-= moveIncrement;
    println("left");
  } else if (key == 'f' &amp;&amp; manual) {
    isFiring = !isFiring;
  } else if (key == 'd' &amp;&amp; manual) {
    panpos+= moveIncrement;
    println("right");
  } else if (key == 'c' )
  {
    panServo.write(90);
  } else {
    println(key);
  }
}
</code></pre>
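Part of the servo jitter may simply be SoftwareServo generating its pulse train from a busy CPU: while detect() is running, pulse timing slips, so a hardware PWM pin or a servo driver board often helps more than code changes. On the software side, writing the pan servo only when the target has actually moved by a minimum step suppresses constant micro-corrections around the threshold lines. That gate, sketched in plain Java (ServoGate is an illustrative name):

```java
// Only emit a new servo position when the target differs from the last
// written position by at least minStep degrees.
class ServoGate {
  int lastWritten;
  int minStep;

  ServoGate(int start, int minStep) {
    this.lastWritten = start;
    this.minStep = minStep;
  }

  // Returns true when the caller should actually write target to the servo.
  boolean shouldWrite(int target) {
    if (Math.abs(target - lastWritten) >= minStep) {
      lastWritten = target;
      return true;
    }
    return false;
  }
}
```

In draw(), `if (gate.shouldWrite(panpos)) panServo.write(panpos);` would replace an unconditional write. For the frame rate itself, detecting on every 2nd or 3rd frame is the usual next step on a Pi.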
]]></description>
   </item>
   <item>
      <title>OpenCV on Raspberry Pi - Face Detection sketch errors</title>
      <link>https://forum.processing.org/two/discussion/24157/opencv-on-raspberry-pi-face-detection-sketch-errors</link>
      <pubDate>Mon, 18 Sep 2017 10:34:34 +0000</pubDate>
      <dc:creator>EmRod</dc:creator>
      <guid isPermaLink="false">24157@/two/discussions</guid>
      <description><![CDATA[<p>I am looking to have a Raspberry Pi with a working OpenCV face detection sketch. I have used a similar sketch on my normal PC. When I brought it over to my Pi, I had to download the glvideo library and change a few things. However, now when I run the sketch I get the error "Width (0) and height (0) cannot be &lt;=0". I've tried changing a few things with no luck. Anyone know what to do? Cheers. (The aim of the sketch is to detect faces on camera and, when there are no faces, to display text - this works on my PC.)</p>

<pre><code>import gab.opencv.*;
import gohai.glvideo.*;
import java.awt.Rectangle;

GLCapture video;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  size(320, 240, P2D);

  String[] devices = GLCapture.list();
  println("Devices:");
  printArray(devices);
  if (0 &lt; devices.length) {
    String[] configs = GLCapture.configs(devices[0]);
    println("Configs:");
    printArray(configs);
  }

  // this will use the first recognized camera by default
  video = new GLCapture(this, devices[0], 320, 240, 30);

  // you could be more specific also, e.g.
  //video = new GLCapture(this, devices[0]);
  //video = new GLCapture(this, devices[0], 640, 480, 25);
  //video = new GLCapture(this, devices[0], configs[0]);

  video.start();
  opencv = new OpenCV(this, video.width, video.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  opencv.loadImage(video);
  faces = opencv.detect();
  background(0,0,0);
  image(video, 320, 240);
  if (video.available()) {
    video.read();
  }
  if (faces!=null) {
    for (int i=0; i&lt; faces.length; i++) {
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces==null) {
    rect(100, 100, 100, 100);
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", 100, 100);
  }
}
</code></pre>
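<p>One likely reason the "UNDETECTED" branch never runs: as far as I can tell, <code>detect()</code> in gab.opencv returns an empty array rather than <code>null</code> when nothing is found. A helper that covers both cases (plain Java; the class and method names are mine):</p>

```java
import java.awt.Rectangle;

// detect() normally yields an empty array (not null) when no face is
// found, so check the length as well as null before showing a warning.
public class FaceCheck {
    public static boolean noFaces(Rectangle[] faces) {
        return faces == null || faces.length == 0;
    }

    public static void main(String[] args) {
        System.out.println(noFaces(null));              // true
        System.out.println(noFaces(new Rectangle[0]));  // true
        Rectangle[] one = { new Rectangle(0, 0, 10, 10) };
        System.out.println(noFaces(one));               // false
    }
}
```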
]]></description>
   </item>
   <item>
      <title>Face detection - OpenCV: If clauses to act when no face is detected - HELP PLEASE!</title>
      <link>https://forum.processing.org/two/discussion/24052/face-detection-opencv-if-clauses-to-act-when-no-face-is-detected-help-please</link>
      <pubDate>Thu, 07 Sep 2017 09:52:28 +0000</pubDate>
      <dc:creator>EmRod</dc:creator>
      <guid isPermaLink="false">24052@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,</p>

<p>I'm working on a face detection sketch that will put a square around your face when it is detected, but will display text or an image when no face is detected on screen. Ideally I'd like it to respond after a couple of seconds of no detection, but anything is better than nothing!</p>

<p>I currently have the face detection working. But when I try to add an if clause that acts when no face is detected, it doesn't work when I run the code - any help is greatly appreciated.</p>

<p>Here is the current code:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  size(640, 480, P2D);
  background(0, 0, 0);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);
  if (faces!=null) {
    for (int i=0; i&lt; faces.length; i++) {
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces == null) {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", 100, 100);
  }
}

void captureEvent(Capture cam) {
  cam.read();
}
</code></pre>
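<p>For the "respond after a couple of seconds of no detection" part, a <code>millis()</code>-based grace period can live in a tiny state holder. A plain-Java sketch (the class name and the 2000 ms grace value are illustrative):</p>

```java
// Report "undetected" only after a face has been absent for a grace
// period, driven by millis()-style timestamps passed in each frame.
public class DetectionTimer {
    private final long graceMs;
    private long lastSeen;

    public DetectionTimer(long graceMs) { this.graceMs = graceMs; }

    /** Call once per frame; true means show the "UNDETECTED" message. */
    public boolean update(boolean faceDetected, long nowMs) {
        if (faceDetected) lastSeen = nowMs;
        return nowMs - lastSeen > graceMs;
    }

    public static void main(String[] args) {
        DetectionTimer t = new DetectionTimer(2000);
        System.out.println(t.update(true, 0));     // false: face visible
        System.out.println(t.update(false, 1000)); // false: within grace
        System.out.println(t.update(false, 2500)); // true: gone for 2.5 s
    }
}
```

<p>In <code>draw()</code> this would look like <code>if (timer.update(faces.length &gt; 0, millis())) { text("UNDETECTED", 100, 100); }</code>.</p>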
]]></description>
   </item>
   <item>
      <title>Processing and OpenCV / Class don't match</title>
      <link>https://forum.processing.org/two/discussion/23486/processing-and-opencv-class-don-t-match</link>
      <pubDate>Mon, 17 Jul 2017 18:52:23 +0000</pubDate>
      <dc:creator>pd_design1</dc:creator>
      <guid isPermaLink="false">23486@/two/discussions</guid>
      <description><![CDATA[<p>hi,</p>

<p>I am a Java noob, and one error I don't understand is <strong>"java.awt.Rectangle[]" does not match with "Rectangle[]"</strong>.</p>

<p>My full code is here:</p>

<pre><code>import hypermedia.video.*;

OpenCV opencv;

void setup() {

    size( 320, 240 );

    opencv = new OpenCV(this);
    opencv.capture( width, height );
    opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );   // load the FRONTALFACE description fil

}

void draw() {
     class Rectangle {
        };

    opencv.read();
    image( opencv.image(), 0, 0 );

    // detect anything ressembling a FRONTALFACE
    Rectangle[] faces = opencv.detect();

    // draw detected face area(s)
    noFill();
    stroke(255,0,0);
    for( int i=0; i&lt;faces.length; i++ ) {
        rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height ); 
    }
}
</code></pre>
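<p>The empty <code>class Rectangle { }</code> declared inside <code>draw()</code> shadows <code>java.awt.Rectangle</code>, so the array returned by <code>detect()</code> no longer matches the local type; deleting that declaration and adding <code>import java.awt.Rectangle;</code> resolves the error. A minimal plain-Java demonstration of the shadowing (class names are mine):</p>

```java
// A locally declared class with the same simple name is a completely
// different type from the java.awt one, hence "does not match" errors.
public class ShadowDemo {
    static class Rectangle { }  // shadows java.awt.Rectangle by name

    public static void main(String[] args) {
        Rectangle local = new Rectangle();
        java.awt.Rectangle awt = new java.awt.Rectangle(0, 0, 4, 3);
        // Same simple name, unrelated classes:
        System.out.println(local.getClass() == awt.getClass()); // false
        System.out.println(awt.width);                          // 4
    }
}
```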

<p>I want to understand why they don't match.</p>
]]></description>
   </item>
   <item>
      <title>Recognize a snare drum sound ?</title>
      <link>https://forum.processing.org/two/discussion/23155/recognize-a-snare-drum-sound</link>
      <pubDate>Wed, 21 Jun 2017 10:38:48 +0000</pubDate>
      <dc:creator>glhd2</dc:creator>
      <guid isPermaLink="false">23155@/two/discussions</guid>
      <description><![CDATA[<p>Hello guys,</p>

<p>I am doing a project where I display information on a snare drum only (not a whole drum kit!). My goal is to give simple feedback to the user, using a microphone as the detection tool. My projection looks like a clock, and when the main hand reaches a point, the player should hit the drum. I would like to detect whether he hits the drum on time, and whether he hits the drum at all.
The projection is yellow on a black background; it highlights in blue if he didn't hit on time, and in red if he didn't hit the drum (or hit anything that is not the drum).</p>

<p>I tried minim and BeatDetect, but unfortunately "isSnare" doesn't work as it should... It returns true even if I am whistling or hitting my desk...</p>

<p>Should I go for an FFT analysis? I'm open to any advice... I don't really know where to start. Should I record the drum sound and then test in real time whether the hit matches the recorded sound?</p>

<p>Here is my code for now:</p>

<pre><code>import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
BeatDetect beat;
AudioInput in;


float eRadius;

void setup()
{
  size(200, 200, P3D);
  minim = new Minim(this);

  in = minim.getLineIn(Minim.STEREO, 512);
  // a beat detection object SOUND_FREQUENCY based on my mic
  beat = new BeatDetect(in.bufferSize(), in.sampleRate());

}

void draw()
{
  background(0);
  beat.detect(in.mix);

  println("isSnare: "+ beat.isSnare());
  //println("isHat: "+ beat.isHat());
  //println("isKick: " + beat.isKick());
  //println("isOnset: "+ beat.isOnset());

}
</code></pre>
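<p>Before reaching for a full FFT, a simple RMS energy gate over the input buffer can already reject quiet sounds like whistling. A plain-Java sketch of the idea (the class name and the 0.1f threshold are illustrative values to tune against your microphone):</p>

```java
// Gate "hits" on the RMS energy of an audio buffer: a real drum hit
// is far louder than whistling, so a threshold filters the latter out.
public class HitGate {
    public static float rms(float[] buf) {
        float sum = 0;
        for (float s : buf) sum += s * s;
        return (float) Math.sqrt(sum / buf.length);
    }

    public static boolean isHit(float[] buf, float threshold) {
        return rms(buf) > threshold;
    }

    public static void main(String[] args) {
        float[] loud  = { 0.8f, -0.7f, 0.9f, -0.8f };     // drum-like
        float[] quiet = { 0.01f, -0.02f, 0.01f, -0.01f }; // whistle-like
        System.out.println(isHit(loud, 0.1f));  // true
        System.out.println(isHit(quiet, 0.1f)); // false
    }
}
```

<p>With minim, this could gate on the samples in <code>in.mix</code> before trusting <code>isSnare()</code>; telling the drum apart from a desk thump would still need spectral comparison (FFT) on top.</p>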
]]></description>
   </item>
   <item>
      <title>Flip Webcam opencv</title>
      <link>https://forum.processing.org/two/discussion/23090/flip-webcam-opencv</link>
      <pubDate>Fri, 16 Jun 2017 14:51:25 +0000</pubDate>
      <dc:creator>Apoel_95</dc:creator>
      <guid isPermaLink="false">23090@/two/discussions</guid>
      <description><![CDATA[<p>Hi, as you can see from the code, I was able to mirror the webcam, but how do I make OpenCV work on the flipped webcam image?
I would like to do face detection on the flipped webcam.</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);

  cam = new Capture(this, width, height, 30);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }

  pushMatrix();
  scale(-1, 1);
  translate(-cam.width, 0);
  image(cam, 0, 0);
  popMatrix();

  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>
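<p>Because <code>detect()</code> runs on the unflipped frame, the face rectangles also need mirroring before they line up with the flipped image. The x coordinate maps as below (plain Java; class and method names are mine):</p>

```java
// Mirror a detected rectangle's x coordinate so it matches a frame
// drawn with scale(-1, 1): the rectangle's right edge becomes its left.
public class MirrorRect {
    public static int mirrorX(int x, int w, int frameWidth) {
        return frameWidth - (x + w);
    }

    public static void main(String[] args) {
        // A face at x=100, 80 px wide, in a 640 px wide frame
        System.out.println(mirrorX(100, 80, 640)); // 460
    }
}
```

<p>In the sketch: <code>rect(mirrorX(faces[i].x, faces[i].width, cam.width), faces[i].y, faces[i].width, faces[i].height);</code>, drawn outside the <code>pushMatrix()</code>/<code>popMatrix()</code> block.</p>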
]]></description>
   </item>
   <item>
      <title>Lighting up an LED from Processing to Arduino</title>
      <link>https://forum.processing.org/two/discussion/22256/lighting-up-an-led-from-processing-to-arduino</link>
      <pubDate>Thu, 27 Apr 2017 19:55:03 +0000</pubDate>
      <dc:creator>TannlerE</dc:creator>
      <guid isPermaLink="false">22256@/two/discussions</guid>
      <description><![CDATA[<p>My project is a face-tracking webcam, and I want it to turn on an LED whenever it recognizes a face, but I cannot figure out the if statement to accomplish this. My Processing code is as follows. Any help would be greatly appreciated.</p>

<pre><code>import hypermedia.video.*;  //Include the video library to capture images from the webcam
import java.awt.Rectangle;  //A rectangle class which keeps track of the face coordinates.
import processing.serial.*; //The serial library is needed to communicate with the Arduino.

OpenCV opencv;  //Create an instance of the OpenCV library.

//Screen Size Parameters
int width = 320;
int height = 240;

// contrast/brightness values
int contrast_value    = 0;
int brightness_value  = 0;

Serial port; // The serial port

//Variables for keeping track of the current servo positions.
char servoTiltPosition = 90;
char servoPanPosition = 90;
//The pan/tilt servo ids for the Arduino serial command interface.
char tiltChannel = 0;
char panChannel = 1;

//These variables hold the x and y location for the middle of the detected face.
int midFaceY=0;
int midFaceX=0;

//The variables correspond to the middle of the screen, and will be compared to the midFace values
int midScreenY = (height/2);
int midScreenX = (width/2);
int midScreenWindow = 10;  //This is the acceptable 'error' for the center of the screen. 

//The degree of change that will be applied to the servo each time we update the position.
int stepSize=1;

void setup() {
  //Create a window for the sketch.
  size( width, height );

  opencv = new OpenCV( this );
  opencv.capture( width, height );                   // open video stream
  opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );  // load detection description, here-&gt; front face detection : "haarcascade_frontalface_alt.xml"

  println(Serial.list()); // List COM-ports (Use this to figure out which port the Arduino is connected to)

  //select first com-port from the list (change the number in the [] if your sketch fails to connect to the Arduino)
  port = new Serial(this, Serial.list()[0], 57600);   //Baud rate is set to 57600 to match the Arduino baud rate.

  // print usage
  println( "Drag mouse on X-axis inside this sketch window to change contrast" );
  println( "Drag mouse on Y-axis inside this sketch window to change brightness" );

  //Send the initial pan/tilt angles to the Arduino to set the device up to look straight forward.
  port.write(tiltChannel);    //Send the Tilt Servo ID
  port.write(servoTiltPosition);  //Send the Tilt Position (currently 90 degrees)
  port.write(panChannel);         //Send the Pan Servo ID
  port.write(servoPanPosition);   //Send the Pan Position (currently 90 degrees)
}


public void stop() {
  opencv.stop();
  super.stop();
}



void draw() {
  // grab a new frame
  // and convert to gray
  opencv.read();
  opencv.convert( GRAY );
  opencv.contrast( contrast_value );
  opencv.brightness( brightness_value );

  // proceed detection
  Rectangle[] faces = opencv.detect( 1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40 );

  // display the image
  image( opencv.image(), 0, 0 );

  // draw face area(s)
  noFill();
  stroke(255,0,0);
  for( int i=0; i&lt;faces.length; i++ ) {
    rect( faces[i].x, faces[i].y, faces[i].width, faces[i].height );
  }

  //Find out if any faces were detected.
  if(faces.length &gt; 0){
    //If a face was found, find the midpoint of the first face in the frame.
    //NOTE: The .x and .y of the face rectangle corresponds to the upper left corner of the rectangle,
    //      so we manipulate these values to find the midpoint of the rectangle.
    midFaceY = faces[0].y + (faces[0].height/2);
    midFaceX = faces[0].x + (faces[0].width/2);

    //Find out if the Y component of the face is below the middle of the screen.
    if(midFaceY &lt; (midScreenY - midScreenWindow)){
      if(servoTiltPosition &gt;= 5)servoTiltPosition += stepSize; //If it is below the middle of the screen, update the tilt position variable to lower the tilt servo.
    }
    //Find out if the Y component of the face is above the middle of the screen.
    else if(midFaceY &gt; (midScreenY + midScreenWindow)){
      if(servoTiltPosition &lt;= 175)servoTiltPosition -=stepSize; //Update the tilt position variable to raise the tilt servo.
    }
    //Find out if the X component of the face is to the left of the middle of the screen.
    if(midFaceX &lt; (midScreenX - midScreenWindow)){
      if(servoPanPosition &gt;= 5)servoPanPosition -= stepSize; //Update the pan position variable to move the servo to the left.
    }
    //Find out if the X component of the face is to the right of the middle of the screen.
    else if(midFaceX &gt; (midScreenX + midScreenWindow)){
      if(servoPanPosition &lt;= 175)servoPanPosition +=stepSize; //Update the pan position variable to move the servo to the right.
    }

  }
  //Update the servo positions by sending the serial command to the Arduino.
  port.write(tiltChannel);      //Send the tilt servo ID
  port.write(servoTiltPosition); //Send the updated tilt position.
  port.write(panChannel);        //Send the Pan servo ID
  port.write(servoPanPosition);  //Send the updated pan position.
  delay(1);
}



/**
 * Changes contrast/brightness values
 */
void mouseDragged() {
  contrast_value   = (int) map( mouseX, 0, width, -128, 128 );
  brightness_value = (int) map( mouseY, 0, height, -128, 128 );  // map against height, not width
}
</code></pre>
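<p>For the LED, the missing piece is an edge-triggered send: write a command byte only when the detection state changes, so the serial line isn't flooded every frame. A plain-Java sketch (the '1'/'0' command bytes are illustrative; the Arduino side must interpret them):</p>

```java
// Emit a command only on detection state changes: '1' when a face
// appears, '0' when it disappears, 0 (nothing to send) otherwise.
public class LedEdge {
    private boolean wasDetected = false;

    public char update(boolean detected) {
        if (detected == wasDetected) return 0;  // no change, stay quiet
        wasDetected = detected;
        return detected ? '1' : '0';
    }

    public static void main(String[] args) {
        LedEdge led = new LedEdge();
        System.out.println((int) led.update(true));  // 49, i.e. '1'
        System.out.println((int) led.update(true));  // 0: no change
        System.out.println((int) led.update(false)); // 48, i.e. '0'
    }
}
```

<p>In <code>draw()</code>: <code>char cmd = led.update(faces.length &gt; 0); if (cmd != 0) port.write(cmd);</code>, with a matching <code>Serial.read()</code> branch on the Arduino toggling the LED pin.</p>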
]]></description>
   </item>
   <item>
      <title>Speeding up Open CV detect face</title>
      <link>https://forum.processing.org/two/discussion/21971/speeding-up-open-cv-detect-face</link>
      <pubDate>Thu, 13 Apr 2017 09:12:24 +0000</pubDate>
      <dc:creator>CharlesDesign</dc:creator>
      <guid isPermaLink="false">21971@/two/discussions</guid>
      <description><![CDATA[<p>Hi All,</p>

<p>I'm trying to implement real-time face detection using openCV on the IR image from the Kinect; unfortunately it takes my sketch from 60fps down to 6fps. I am aware KinectPV2 does face detection, but it's nowhere near as good as openCV.
Can someone suggest a solution? I've tried the multithreaded "your are einstein" sketch, but I couldn't get it to run.</p>

<pre><code>import KinectPV2.*;
import gab.opencv.*;
import java.awt.Rectangle;

KinectPV2 kinect;

FaceData [] faceData;

OpenCV opencv;
Rectangle[] faces;

PImage img;

void setup() {
  size(1000, 500, P2D);

  opencv = new OpenCV(this, 512, 424);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  kinect = new KinectPV2(this);

  //for face detection base on the infrared Img
  kinect.enableInfraredImg(true);

  //enable face detection
  kinect.enableFaceDetection(true);

  kinect.enableDepthImg(true);

  kinect.init();
}

void draw() {
  background(0);
  img = kinect.getInfraredImage(); //512 424


  opencv.loadImage(img);
  faces = opencv.detect();



  image(img, 0, 0);
  image(kinect.getDepthImage(), img.width, 0);

  fill(255);
  text("frameRate "+frameRate, 50, 50);

  noFill();
  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>
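<p>One cheap speedup is to run the costly <code>detect()</code> only every Nth frame and reuse the previous rectangles in between; faces rarely move far in a tenth of a second. A plain-Java sketch of the scheduling test (the class name and the interval of 10 are illustrative):</p>

```java
// Decide on which frames to pay for detection; all other frames
// reuse the rectangles from the most recent detection.
public class DetectThrottle {
    public static boolean shouldDetect(int frameCount, int interval) {
        return frameCount % interval == 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldDetect(0, 10));  // true:  detect
        System.out.println(shouldDetect(7, 10));  // false: reuse faces
        System.out.println(shouldDetect(20, 10)); // true:  detect again
    }
}
```

<p>In the sketch: <code>if (DetectThrottle.shouldDetect(frameCount, 10)) faces = opencv.detect();</code>. Moving detection onto Processing's <code>thread()</code> helper is the other common route, handing the result back through a field.</p>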
]]></description>
   </item>
   <item>
      <title>How to save multiple gif files without overwriting it onto one?</title>
      <link>https://forum.processing.org/two/discussion/21052/how-to-save-multiple-gif-files-without-overwriting-it-onto-one</link>
      <pubDate>Tue, 28 Feb 2017 10:58:49 +0000</pubDate>
      <dc:creator>nasiham</dc:creator>
      <guid isPermaLink="false">21052@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>So I created a code where a person can take a gif animation of themselves with an effect applied on the webcam. The problem is that the gif keeps overwriting the file and keeps replacing the same file. I tried so many things to try and get it to save into a new file, but nothing is working. Your help is much appreciated (and as soon as possible, our submission is soon!)</p>

<p>This is the code:</p>

<pre><code> import gifAnimation.*;
GifMaker gifExport;
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.io.FilenameFilter;

int FRAME_RATE=30;
boolean record=false;
int frames=0;
int totalFrames = 120;
int frameLimit = 30;
int FRAMES_DURATION = 10;
int nbGif;
// Size of each cell in the grid
int cellSize = 8;
// Number of columns and rows in our system
int cols, rows;
// Variable for capture device
Capture video;
OpenCV opencv;
// Variable for capture device
int numPixels;
int[] previousFrame;



void setup() {
  size(640, 480);
  frameRate(24);
  cols = width / cellSize;
  rows = height / cellSize;
  colorMode(RGB, 128,128,128);

  // This the default video input, see the GettingStartedCapture 
  // example if it creates an error
  video = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  // Start capturing the images from the camera
  video.start();  

   numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();

  background(0);
}


void draw() { 
    nbGif = howManyGif();
  if (record) {
    recordGif();
  } else {
    effect();
  }

  /////////////////////FACE DETECTION////////////////////
  opencv.loadImage(video);
 noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
 //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
   effect();
  }

  }



void captureEvent(Capture c) {
  c.read();
}

  /////////////////////////EFFECT STARTS HERE///////////////////////////


  void effect() {

  //if (video.available()) {
    video.read();
    video.loadPixels();

    // Begin loop for columns
    for (int i = 0; i &lt; cols; i++) {
      // Begin loop for rows
      for (int j = 0; j &lt; rows; j++) {

        // Where are we, pixel-wise?
        int x = i*cellSize;
        int y = j*cellSize;
        int loc = (video.width - x - 1) + y*video.width; // Reversing x to mirror the image

        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);
        // Make a new color with an alpha component
        color c = color(r, g, b, 128);

        // Code for drawing a single rect
        // Using translate in order for rotation to work properly
        pushMatrix();
        translate(x+cellSize/2, y+cellSize/2);
        // Rotation formula based on brightness
        rotate((2 * PI * brightness(128) / 255.0));
        rectMode(CENTER);
        fill(c);
        stroke(0);
        // Rects are larger than the cell for some overlap
        rect(0, 0, cellSize+6, cellSize+6);
        popMatrix();
      }
    }
  }
  void recordGif() {

    int x = 0;
  if (record == true) {
    // CHECK NUMBER OF FILES IN 'IMG' DIRECTORY &amp; CREATE NEW FILE
    gifExport = new GifMaker(this, "img/image"+(nbGif+1)+".gif");
    gifExport.setRepeat(0); // make it an "endless" animation

    // RECORD FRAMES UNTIL FRAMELIMIT
    for (frames=0; frames&lt;frameLimit; frames++) {
      effect();
      gifExport.setDelay(FRAMES_DURATION);
      gifExport.addFrame();
      println("saving frame");
    } // end loop

    // STOP RECORDING AND SAVE FILE
    if (frames==frameLimit) {
      gifExport.finish();
      println("img/file"+(nbGif+1)+".gif WAS SAVED - RE INITIALIZING");
      noLoop();
    } // end if frameLimit
  } // end if launchRecording

  // RE INIT
  frames=0;
  record = false;
  loop();
  println("end record");
} // END RECORD GIF

////////////////////////////////////////// RETURN ALL FILES AS STRING ARRAY  
String[] listFileNames(String dir) {
  File file = new File(dir);
  if (file.isDirectory()) {
    String names[] = file.list();
    return names;
  } else {
    // If it's not a directory
    return null;
  }
}


  static final FilenameFilter FILTER = new FilenameFilter() {
  static final String NAME = "file", EXT = ".gif";

  @Override public boolean accept(File path, String name) {
    return name.startsWith(NAME) &amp;&amp; name.endsWith(EXT);
  }
};
//////////////////////////////////////////////////////// HOW MANY GIFS
int howManyGif() {
  File dataFolder = dataFile("");
  String[] theList = dataFolder.list(FILTER);
  int fileCount = theList.length;
  return(fileCount);



  }
  void export() {
  if(frames &lt; totalFrames) {
    gifExport.setDelay(20);
    gifExport.addFrame();
    frames++;
  } else {
    gifExport.finish();
    frames++;
    println("gif saved");
    exit();}
  }

void mouseReleased() {
    record = true;
}
</code></pre>
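<p>The overwriting usually comes from deriving the new name from a file count that doesn't advance as expected. Scanning the existing names for the highest index and going one past it is more robust (plain Java; the <code>imageN.gif</code> pattern mirrors the sketch's naming, the class name is mine):</p>

```java
// Build the next unused "imageN.gif" name by scanning existing file
// names for the highest N, rather than counting files (counting
// breaks when other files sit in the folder or one was deleted).
public class NextGifName {
    public static String next(String[] existing) {
        int max = 0;
        for (String name : existing) {
            if (name.matches("image\\d+\\.gif")) {
                int n = Integer.parseInt(
                    name.substring(5, name.length() - 4));
                if (n > max) max = n;
            }
        }
        return "image" + (max + 1) + ".gif";
    }

    public static void main(String[] args) {
        String[] files = { "image1.gif", "image3.gif", "notes.txt" };
        System.out.println(next(files));         // image4.gif
        System.out.println(next(new String[0])); // image1.gif
    }
}
```

<p>In <code>recordGif()</code> this would replace <code>nbGif+1</code>, using the array from <code>listFileNames()</code>.</p>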
]]></description>
   </item>
   <item>
      <title>How to track an image moving?</title>
      <link>https://forum.processing.org/two/discussion/20130/how-to-track-an-image-moving</link>
      <pubDate>Sat, 07 Jan 2017 15:45:23 +0000</pubDate>
      <dc:creator>Markx</dc:creator>
      <guid isPermaLink="false">20130@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I'm new to Processing and I'm working on a little game. But I have a problem:</p>

<p>I don't know how to get the position (x, y) of my image "train" (in the draw section) in order to make my background ("fond") move.</p>

<p>Thanks in advance.</p>

<pre><code>import processing.sound.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
int score = 5;
int temps; // countdown timer value

boolean depart; // start the countdown (on click)

float cx, cy;
boolean alive = true;
float trainX;
float trainY;


Rectangle[] faces;
Capture cam;

float x;
float y;
float easing = 1;


PImage fond;
PImage train;
PImage smiley;

void setup() {
  size(1000, 1000);

 video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  video.start();
  faces = opencv.detect();

  fond = loadImage("Map.bmp");
  train = loadImage("train.gif");  
  smiley = loadImage("train.gif");

}

void draw() {
  scale (0.5);
  image(fond, x, y);
  scale(0.5);
  opencv.loadImage(video);
    imageMode(CENTER);

  // display the webcam image

  Rectangle[] faces = opencv.detect();
  for (int i = 0; i &lt; faces.length; i++) {
   float x = faces[i].x + faces[i].width / 2;
    float y = faces[i].y + faces[i].height / 2;
    image(smiley, x, y, 300, 300);

}

  float targetX = trainX;
  float dx = targetX - x;
  x += dx * easing;

  float targetY = trainY;
  float dy = targetY - y;
  y += dy * easing;


  // countdown text

  fill(#585858);
  text("TEMPS RESTANT :", 1170, 30);
  fill(#FF0000);
  text(temps, 1310, 30);
}



void captureEvent(Capture c) {
  c.read();
}

// countdown function
void compte_rebours() {
  if (depart ==true) {
    if (temps == 0 ) {
      temps = 0;
    }
    else {
      temps = 60 - millis()/1000; // count down from 60
    }
  }
}  
</code></pre>
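<p>Once <code>trainX</code>/<code>trainY</code> hold the train's position, the background can ease toward it: each frame the offset covers a fraction of the remaining distance. The easing step in isolation (plain Java; class and method names are mine):</p>

```java
// One easing step: move a fraction k of the remaining distance toward
// the target each frame (k = 1 snaps straight to the target).
public class Easing {
    public static float step(float current, float target, float k) {
        return current + (target - current) * k;
    }

    public static void main(String[] args) {
        float x = Easing.step(0, 100, 0.5f);
        System.out.println(x);                      // 50.0
        x = Easing.step(x, 100, 0.5f);
        System.out.println(x);                      // 75.0
        System.out.println(Easing.step(x, 100, 1)); // 100.0
    }
}
```

<p>With <code>easing = 1</code>, as in the sketch, the background jumps straight to the target every frame; a smaller value (e.g. 0.1) gives the smooth trailing motion.</p>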
]]></description>
   </item>
   <item>
      <title>How can I let my face grow?</title>
      <link>https://forum.processing.org/two/discussion/19452/how-can-i-let-my-face-grow</link>
      <pubDate>Thu, 01 Dec 2016 17:17:30 +0000</pubDate>
      <dc:creator>Adinda123</dc:creator>
      <guid isPermaLink="false">19452@/two/discussions</guid>
      <description><![CDATA[<p><strong>I am doing a project about obesity.</strong></p>

<p>I am using my webcam with the face tracker, and I copied my face into a rectangle.
What I want is for my face (the rectangle) to get bigger and bigger over 2-3 minutes.</p>

<p><em>I hope somebody can help me with that! :)</em></p>

<p>Below is my code:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PImage cam;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);


  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    copy(cam, faces[i].x, faces[i].y, faces[i].width, faces[i].height, faces[i].x - faces[i].width/2, faces[i].y, faces[i].width * 2,             
faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
  cam = c.get();
}
</code></pre>
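<p>For the growing face, a scale factor derived from elapsed time can drive the destination size passed to <code>copy()</code>. A plain-Java sketch (growing from 1x to 2x, and the 2-minute duration, are illustrative choices):</p>

```java
// Linear growth factor: 1.0 at start, 2.0 once durationMs has
// elapsed, clamped so the face stops growing afterwards.
public class Growth {
    public static float scaleAt(long elapsedMs, long durationMs) {
        float t = Math.min(1f, (float) elapsedMs / durationMs);
        return 1f + t;
    }

    public static void main(String[] args) {
        long twoMinutes = 120_000;
        System.out.println(scaleAt(0, twoMinutes));       // 1.0
        System.out.println(scaleAt(60_000, twoMinutes));  // 1.5
        System.out.println(scaleAt(300_000, twoMinutes)); // 2.0 (clamped)
    }
}
```

<p>In the sketch, something like <code>float s = Growth.scaleAt(millis(), 120000);</code> would scale the destination width and height in the <code>copy()</code> call.</p>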
]]></description>
   </item>
   <item>
      <title>How to use ultrasonic sensor to trigger a video ? (Arduino+processing)</title>
      <link>https://forum.processing.org/two/discussion/19293/how-to-use-ultrasonic-sensor-to-trigger-a-video-arduino-processing</link>
      <pubDate>Fri, 25 Nov 2016 18:17:22 +0000</pubDate>
      <dc:creator>eliseber</dc:creator>
      <guid isPermaLink="false">19293@/two/discussions</guid>
      <description><![CDATA[<p>Hey!
This is the first time I'm using Arduino and Processing at the same time, and I'm completely lost!
I'm designing an interactive installation, and I need to drive the position in a video with distance: as you get closer to the sensor, the video moves.
I've set up an Arduino sketch that gives the distance.
For Processing, I found a sketch that drives the position with the webcam (using the size of the spectator's head: as he gets closer, his head gets bigger and the position moves forward). Do you have any ideas/thoughts about how I can modify this code? It's the same idea, with the Arduino instead of the webcam, but I don't know where to start...</p>

<p>Thank you very much!</p>

<p>Arduino :</p>

<pre><code>#include &lt;NewPing.h&gt;
#include &lt;Servo.h&gt;


#define TRIGGER_PIN  12  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     11  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

int LEDpin = 13;
Servo myservo;
int val;

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.

void setup() {
  Serial.begin(9600); // Open serial monitor at 9600 baud to see ping results.
  pinMode(LEDpin, OUTPUT);
  myservo.attach(9);// attaches servo to pin 9

}

void loop() {
  delay(400);                      // Wait 400ms between pings. 29ms should be the shortest delay between pings.
  unsigned int uS = sonar.ping(); // Send ping, get ping time in microseconds (uS).
  //Serial.print("Ping: ");
  Serial.println(uS / US_ROUNDTRIP_CM); // Convert ping time to distance in cm and print result (0 = outside set distance range)
  //Serial.println("cm");
  //if(uS / US_ROUNDTRIP_CM &lt; 100) {digitalWrite(LEDpin, HIGH);}
  //else if (uS / US_ROUNDTRIP_CM &gt; 100) {digitalWrite(LEDpin, LOW);}
 // else if (uS / US_ROUNDTRIP_CM &gt; 100) {digitalWrite(LEDpin, LOW);}
  //delay (200);

  val = (uS / US_ROUNDTRIP_CM);
  val = map(val, 0, 172, 15, 180);
  myservo.write(val);
  delay (150);


}
</code></pre>

<hr />

<p>PROCESSING</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;
//-------------------------------------------------
/*


*/
//-------------------------------------------------
Capture video;
OpenCV opencv;
//---
Movie monSuperFilm; // movie declaration
float positionDuFilmEnSecondes; // position in the movie, in seconds
Integer surfaceVisages,surfaceMin,surfaceMax;
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void setup() {
  //----------------
  size(1080, 720); // your movie's dimensions in pixels; use your movie's resolution
  //-----------------
  // webcam capture section
  //-----------------
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  video.start();
  //-----------------
  // movie section
  //-----------------
    monSuperFilm = new Movie(this,"pattern1080-6low-1.mov"); // load the movie; use the name of your movie, placed in the data folder
    monSuperFilm.loop();
  //-----------------
  // faces section
  //-----------------
  // the larger the face area in the image returned by the Mac's camera, the closer you are
  // the program sums the areas of all faces the camera captures; try it with several people in front of the Mac!
    surfaceMin=400;// min 0 // face area on camera that corresponds to the start of the movie
    surfaceMax=1500;// max 640x480=307200 // face area on camera that corresponds to the end of the movie
}
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void draw() {
  //background(255);
  //scale(2);
  opencv.loadImage(video);

  //image(video, 0, 0 );

  //noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);
//---------------------------------------------
// compute the area covered by faces
//---------------------------------------------
surfaceVisages=0;
  for (int i = 0; i &lt; faces.length; i++) {
    //println(faces[i].x + "," + faces[i].y);
    //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    surfaceVisages=surfaceVisages+faces[i].width*faces[i].height;
  }
  fill(0);
  text(str(surfaceMin)+" / "+str(surfaceVisages)+" / "+str(surfaceMax),10,50);
   monSuperFilm.read(); // read the current frame of the movie
  positionDuFilmEnSecondes=map(surfaceVisages,surfaceMin,surfaceMax,0,monSuperFilm.duration()); 
  monSuperFilm.jump(positionDuFilmEnSecondes); // jump to that point in the movie
  image(monSuperFilm, 0, 0); // display the current frame of the movie

}
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void captureEvent(Capture c) {
  c.read();
}
</code></pre>
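<p>Swapping the webcam for the Arduino mostly means replacing <code>surfaceVisages</code> with the distance read from serial, then mapping it onto the movie's duration. Processing's <code>map()</code> does not clamp, so out-of-range sensor readings can produce positions outside the movie; a clamped variant (plain Java; class name is mine):</p>

```java
// map() with clamping: sensor readings outside [inLo, inHi] no longer
// produce positions outside the movie's duration.
public class ClampedMap {
    public static float map(float v, float inLo, float inHi,
                            float outLo, float outHi) {
        if (inLo > v) v = inLo;
        if (v > inHi) v = inHi;
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        // 0-200 cm of sensor range mapped onto a 30 s movie
        System.out.println(map(100, 0, 200, 0, 30)); // 15.0
        System.out.println(map(999, 0, 200, 0, 30)); // 30.0 (clamped)
    }
}
```

<p>On the Processing side, the distance would arrive via the <code>processing.serial</code> library, e.g. reading lines sent by the Arduino's <code>Serial.println</code>.</p>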
]]></description>
   </item>
   <item>
      <title>Creating Pointillism effect on an image</title>
      <link>https://forum.processing.org/two/discussion/19181/creating-pointillism-effect-on-an-image</link>
      <pubDate>Sun, 20 Nov 2016 20:21:11 +0000</pubDate>
      <dc:creator>huimc</dc:creator>
      <guid isPermaLink="false">19181@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I am currently working on a project in Processing to develop an interface that uses face detection to view and interact with an image. When the user enters modify mode, a pointillism effect is added to the image within the face region. The small circles should be retained when the position indicator moves away. I am currently testing the function using mouse movement. However, the small circles don't persist; it seems they are covered by the original image.</p>

<p>Any suggestions? I can't see what went wrong, since I don't draw the image after drawing the circles.</p>
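<p>The usual cause: the photo is redrawn every frame, painting over circles drawn in earlier frames. Writing each dot into a stored buffer (a <code>PGraphics</code> layer in Processing, drawn after the photo) makes them persist. The index arithmetic in miniature (plain Java; the array stands in for the layer's pixels):</p>

```java
// Paint dots into a persistent pixel buffer instead of over a frame
// that is cleared each draw; the 1-D index math matches x + width*y.
public class DotLayer {
    public static void setPixel(int[] pixels, int w, int x, int y, int c) {
        pixels[x + w * y] = c;
    }

    public static void main(String[] args) {
        int w = 4, h = 3;
        int[] layer = new int[w * h];          // starts all zero
        setPixel(layer, w, 2, 1, 0xFFFF0000);  // one red dot at (2, 1)
        System.out.println(layer[6]);          // -65536 (0xFFFF0000): it stays
    }
}
```

<p>In the sketch, the <code>ellipse()</code> in mode 2 would be drawn into a <code>PGraphics</code> created with <code>createGraphics()</code>, and that layer composited with <code>image()</code> after the photo each frame.</p>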

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

PImage img; 
PImage imgBlur, imgBefore, imgPoint, imgPointAfter;
Capture cam;
OpenCV opencv;
OpenCV imgOpenCV;

int camWidth = 320;
int camHeight = 240;

int mode = 0; //0: default; 1: view; 2: modify; 3: replace;

void setup() {

  //hint(DISABLE_DEPTH_SORT);
  size(1000, 647);
  cam = new Capture(this, 320, 240);  
  cam.start();
  opencv = new OpenCV(this,320,240);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  imageMode(CENTER);
  img = loadImage("selfies.jpg");  // Load the image into the program
  imgPoint = img.get(0,0,width,height);
  imgBlur = loadImage("selfies.jpg");
  imgBlur.filter(BLUR, 5);


}

void draw() {

  if (cam.available()) {

    image(img, img.width/2, img.height/2);  // displays the image at its actual size, centred on (img.width/2, img.height/2) because imageMode is CENTER
    cam.read();  
    opencv.loadImage(cam);  
    image(cam, width - cam.width/2, height - cam.height/2);

    Rectangle[] faces = opencv.detect();
    noFill();
    stroke(0,255,0);
    strokeWeight(3);

    if (faces.length&gt;0) {

        if (mode == 1) {

         image(imgBlur, 0, 0);
         imgBefore = img.get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
         image(imgBefore,faces[0].x,faces[0].y);

        } else if ( mode == 2) {

          noStroke();
          loadPixels();
          imgPointAfter = imgPoint.get(mouseX,mouseY,100,100);
          int x = (int)random(mouseX,mouseX+imgPointAfter.width);
          int y = (int)random(mouseY,mouseY+imgPointAfter.height);
          int i = x + imgPoint.width*y;
          color c = pixels[i];
          fill(c);
          ellipse(x, y, 17,17);
        }

        for (int i = 0; i &lt; faces.length; i++) {
          rect(width - camWidth + faces[i].x, height - camHeight + faces[i].y, faces[i].width, faces[i].height); // cam face detect
          rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // position indicator
        }

      } else if (mode == 1) {

        image(imgBlur, 0, 0);
      }
  }
}

void keyPressed(){
  switch (key) {
    case 'v':
      mode = 1;
      break;
    case 'm':
      mode = 2;
      break;
    case 'r':
      mode = 3;
      break;
    case 'e':
      mode = 0;
      break;
  }
}
</code></pre>
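<p>The circles vanish because image(img, img.width/2, img.height/2) repaints the whole photo at the start of every frame, covering whatever was drawn before. A common fix is to accumulate the dots in an offscreen PGraphics buffer and draw that buffer on top of the photo each frame. A minimal sketch of the idea (dotLayer is a hypothetical name; since the sketch uses imageMode(CENTER), the buffer is drawn at the window centre):</p>

<pre><code>PGraphics dotLayer;  // persistent layer that is never cleared

void setup() {
  size(1000, 647);
  dotLayer = createGraphics(width, height);
}

void draw() {
  image(img, img.width/2, img.height/2);     // repaint the photo as before
  if (mode == 2) {
    color c = imgPoint.get(mouseX, mouseY);  // sample the source colour
    dotLayer.beginDraw();
    dotLayer.noStroke();
    dotLayer.fill(c);
    dotLayer.ellipse(mouseX, mouseY, 17, 17);
    dotLayer.endDraw();
  }
  image(dotLayer, width/2, height/2);        // dots persist across frames
}
</code></pre>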
]]></description>
   </item>
   <item>
      <title>why is my program running on the wrong side of screen?</title>
      <link>https://forum.processing.org/two/discussion/18926/why-is-my-program-running-on-the-wrong-side-of-screen</link>
      <pubDate>Mon, 07 Nov 2016 19:02:20 +0000</pubDate>
      <dc:creator>Jai</dc:creator>
      <guid isPermaLink="false">18926@/two/discussions</guid>
<description><![CDATA[<p>What I'm looking to do is run face detection on the right side of the window only, rather than displaying the cam on the right side while the detection runs on the left side, which is what's currently happening.</p>

<pre><code>void setup() {
  size(1280, 480, P3D);

  video2 = new Capture(this, 1280/2, 480, "webcam");

  opencv = new OpenCV(this, 1280/2, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video2.start();
}

void R()
{
  scale(1);
  opencv.loadImage(video2);
  image(video2, 1280/2, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();

  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
}
}
</code></pre>
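<p>opencv.detect() returns rectangles in the coordinate space of the video frame, not the window, so when the frame is drawn at x = 1280/2 the rectangles need the same horizontal offset. A sketch of the likely fix inside R():</p>

<pre><code>  // offset each rectangle by the x position at which the video is drawn
  for (int i = 0; i &lt; faces.length; i++) {
    rect(1280/2 + faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
</code></pre>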
]]></description>
   </item>
   <item>
      <title>Multiple songs + multiple beatlisteners</title>
      <link>https://forum.processing.org/two/discussion/18910/multiple-songs-multiple-beatlisteners</link>
      <pubDate>Sun, 06 Nov 2016 13:57:49 +0000</pubDate>
      <dc:creator>DBeyond</dc:creator>
      <guid isPermaLink="false">18910@/two/discussions</guid>
<description><![CDATA[<p>Hi everyone!
I'm learning Processing for the first time for one of my modules at university.
The task is essentially to create something with Processing, and I made a small equalizer reminiscent of the old Windows Media Player with its weird effects.
I keep getting a NullPointerException at line 38 after I added multiple beat listeners, since they are part of what randomizes the graphics.</p>

<p>I managed to get the multiple files playing at once, but with only one beat listener working, so when I changed to the other equalizer screen it didn't beat.</p>

<p>I've tried multiple things, but can't find the solution.
Can anyone spot where the problem lies?</p>

<p>(The initial problem is fixed; I was missing the .mp3 in the song filename.)</p>

<p>I am now back to my other problem after managing to get multiple songs to load yesterday: the beat listener only "listens and beats" to one of the songs; in the current code it only reacts to playlist[2]. When I choose the other songs, the colour stops changing and the graphics stop animating.</p>

<p>Is there a way to make the beat listener respond to all three songs?</p>

<p>Thank you!</p>

<p>Main window</p>

<pre><code>                                   import ddf.minim.*;
                    import ddf.minim.analysis.*;

                    Minim minim;
                    AudioPlayer[] playlist;
                    AudioPlayer player;
                    AudioInput input;
                    BeatDetect beat;
                    BeatListener bl;

                    float unit, theta;
                    float kickSize, snareSize, hatSize;
                    float r = random(0, 500);
                    int pageNumber = 1;
                    int num = 50, frames=180;
                    int radius = 40; 
                    int sides = 10;
                    int depth = 0; 
                    PWindow win;

                    public void settings() {
                      size(500, 500);
                    }

                    void setup() {
                      win = new PWindow();
                      minim = new Minim(this);
                      unit = width/num; 
                      noStroke();


                      playlist = new AudioPlayer [3];
                      playlist[0] = minim.loadFile("the_trees.mp3");
                      playlist[1] = minim.loadFile("marcus_kellis_theme.mp3");
                      playlist[2] = minim.loadFile("eternal_snowflake.mp3");


                      beat = new BeatDetect(playlist[1].bufferSize(), playlist[1].sampleRate());
                      beat.setSensitivity(50);  
                      kickSize = snareSize = hatSize = 1600;
                      bl = new BeatListener(beat, playlist[1]);

                        beat = new BeatDetect(playlist[0].bufferSize(), playlist[0].sampleRate());
                      beat.setSensitivity(50);  
                      kickSize = snareSize = hatSize = 1600;
                      bl = new BeatListener(beat, playlist[0]);

                      beat = new BeatDetect(playlist[2].bufferSize(), playlist[2].sampleRate());
                      beat.setSensitivity(50);  
                      kickSize = snareSize = hatSize = 1600;
                      bl = new BeatListener(beat, playlist[2]);



                    }
                    void draw() {

                      if (keyPressed) {
                        if (key == 'j')
                          playlist[0].play();
                        else
                          playlist[0].pause();

                        if (keyPressed) 
                          if (key == 'k')
                            playlist[1].play();
                          else
                            playlist[1].pause();

                        if (keyPressed) 
                          if (key == 'l')
                            playlist[2].play();
                          else
                            playlist[2].pause();
                      }
                      if (pageNumber == 1) {
                        background(0);
                        for (int y=0; y&lt;=num; y++) {
                          for (int x=0; x&lt;=num; x++) {


                            if (keyPressed) {
                              if (key == 'r')
                                fill(255, 0, 0); //sphere colour
                            }

                            if (keyPressed) {
                              if (key == 'g')
                                fill(0, 255, 0);
                            }  

                            if (keyPressed) {
                              if (key == 'b')
                                fill(0, 0, 255);
                            }
                            if (beat.isHat()) {
                              fill(random(0, 255), random(0, 255), random(0, 255));
                              radius = int(random(1, 100)); // randomly choose radius for sphere
                              depth = int(random(1, 100)); // randomly set forward/backward translation distance of sphere
                              // test if beat is snare

                              if (beat.isSnare()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(10, 200)); // randomly choose radius for sphere
                                depth = int(random(10, 100)); // randomly set forward/backward translation distance of sphere
                              }  
                              // test if beat is kick
                              if (beat.isKick()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(10, 500)); // randomly choose radius for sphere
                                depth = int(random(25, 220)); // randomly set forward/backward translation distance of sphere
                              }
                            }

                            pushMatrix();
                            float distance = dist(width/2, height/2, x*unit, y*unit);
                            float offSet = map(distance, 56, sqrt(sq(width/2)+sq(height/17)), 0, TWO_PI);
                            float sz = map(sin(theta+distance), 1, 10, unit*.2, unit*.1);
                            float angle = atan2(y*unit-height/11, x*unit-width/2);
                            float px = map(sin(angle+offSet+theta), 110, 56, 18, 159);
                            translate(251, -115);
                            rotate(random(271)); //rotates to a random angle
                            ellipse(random(-209, 730), random(-767, 105), -2, random(-59, 35)); //godcode
                            popMatrix();
                          }


                          theta -= TWO_PI/frames;
                        }
                      }


                      if (pageNumber == 2) {
                        background(0);
                        for (int y=50; y&lt;=num; y++) {
                          for (int x=41; x&lt;=num; x++) {

                            if (keyPressed) {
                              if (key == 'r')
                                fill(255, 0, 0); //sphere colour
                            }

                            if (keyPressed) {
                              if (key == 'g')
                                fill(0, 255, 0);
                            }  

                            if (keyPressed) {
                              if (key == 'b')
                                fill(0, 0, 255);
                            }
                            if (beat.isHat()) {
                              fill(random(0, 255), random(0, 255), random(0, 255));
                              radius = int(random(1, 50)); // randomly choose radius for sphere
                              depth = int(random(1, 505)); // randomly set forward/backward translation distance of sphere
                              // test if beat is snare

                              if (beat.isSnare()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(0, 500)); // randomly choose radius for sphere
                                depth = int(random(0, 100)); // randomly set forward/backward translation distance of sphere
                              }  
                              // test if beat is kick
                              if (beat.isKick()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(1, 500)); // randomly choose radius for sphere
                                depth = int(random(23, 220)); // randomly set forward/backward translation distance of sphere
                              }
                            }

                            pushMatrix();
                            float distance = dist(width/2, height/2, x*unit, y*unit);
                            float offSet = map(distance, 53, sqrt(sq(width/2)+sq(height/17)), 0, TWO_PI);
                            float sz = map(sin(theta+distance), 1, 10, unit*.2, unit*.1);
                            float angle = atan2(y*unit-height/2, x*unit-width/2);
                            float px = map(sin(angle+offSet+theta), 2, 29, 50, 152);
                            translate(245, 245);
                            rotate(random(59)); //rotates to a random angle
                            ellipse(random(102, 74), random(31, 20), 4, random(599, 234)); //godcode
                            popMatrix();
                          }


                          theta -= TWO_PI/frames;
                        }
                      }

                      if (pageNumber == 3) {
                        background(0);
                        for (int y=0; y&lt;=num; y++) {
                          for (int x=0; x&lt;=num; x++) {

                            if (keyPressed) {
                              if (key == 'r')
                                fill(255, 0, 0); //sphere colour
                            }

                            if (keyPressed) {
                              if (key == 'g')
                                fill(0, 255, 0);
                            }  

                            if (keyPressed) {
                              if (key == 'b')
                                fill(0, 0, 255);
                            }
                            if (beat.isHat()) {
                              fill(random(0, 255), random(0, 255), random(0, 255));
                              radius = int(random(1, 500)); // randomly choose radius for sphere
                              depth = int(random(1, 500)); // randomly set forward/backward translation distance of sphere
                              // test if beat is snare

                              if (beat.isSnare()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(1, 500)); // randomly choose radius for sphere
                                depth = int(random(1, 500)); // randomly set forward/backward translation distance of sphere
                              }  
                              // test if beat is kick
                              if (beat.isKick()) {
                                fill(random(0, 255), random(0, 255), random(0, 255));
                                radius = int(random(1, 500)); // randomly choose radius for sphere
                                depth = int(random(1, 500)); // randomly set forward/backward translation distance of sphere
                              }
                            }

                            pushMatrix();
                            float distance = dist(width/2, height/2, x*unit, y*unit);
                            float offSet = map(distance, 10, sqrt(sq(width/2)+sq(height/2)), 0, TWO_PI);
                            float sz = map(sin(theta+distance), 1, 10, unit*.2, unit*.1);
                            float angle = atan2(y*unit-height/2, x*unit-width/2);
                            float px = map(sin(angle+offSet+theta), -1, 10, 0, 100);
                            translate(x*unit, y*unit);
                            rotate(r*angle); //rotates to a random angle
                            ellipse(px, 0, sz, sz); //godcode
                            popMatrix();
                          }


                          theta -= TWO_PI/frames;
                        }
                      }
                    }


                    void keyPressed() {
                      if (key == '1') {
                        redraw();
                        pageNumber = 1;
                      }
                      if (key == '2') {
                        redraw();
                        pageNumber = 2;
                      }
                      if (key == '3') {
                        redraw();
                        pageNumber = 3;
                      }
                    }
                    void mousePressed() {
                      println("mousePressed in primary window");
                    }  
</code></pre>

<p>BeatListener window</p>

<pre><code>            class BeatListener implements AudioListener
            {
              private BeatDetect beat;
              private AudioPlayer source;

              BeatListener(BeatDetect beat, AudioPlayer source)
              {
                this.source = source;
                this.source.addListener(this);
                this.beat = beat;
              }

              void samples(float[] samps)
              {
                beat.detect(source.mix);
              }

              void samples(float[] sampsL, float[] sampsR)
              {
                beat.detect(source.mix);

              }
            }
</code></pre>
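<p>In setup() the three BeatDetect/BeatListener pairs are each assigned to the same beat and bl variables, so only the last pair (the one attached to playlist[2]) survives, and draw() only ever queries that detector. Keeping one pair per song in arrays avoids the overwriting; a sketch of the idea (beats and listeners are hypothetical names):</p>

<pre><code>BeatDetect[] beats = new BeatDetect[3];
BeatListener[] listeners = new BeatListener[3];

// in setup(), after loading the playlist:
for (int i = 0; i &lt; playlist.length; i++) {
  beats[i] = new BeatDetect(playlist[i].bufferSize(), playlist[i].sampleRate());
  beats[i].setSensitivity(50);
  listeners[i] = new BeatListener(beats[i], playlist[i]);
}

// in draw(), react if any of the songs has a hat beat, e.g.:
boolean hat = beats[0].isHat() || beats[1].isHat() || beats[2].isHat();
</code></pre>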
]]></description>
   </item>
   <item>
      <title>opencv problem with bootcamp</title>
      <link>https://forum.processing.org/two/discussion/18536/opencv-problem-with-bootcamp</link>
      <pubDate>Thu, 13 Oct 2016 22:27:38 +0000</pubDate>
      <dc:creator>jayson</dc:creator>
      <guid isPermaLink="false">18536@/two/discussions</guid>
<description><![CDATA[<p>I'm trying to do face detection on a video with OpenCV under Boot Camp (Windows 10) and Processing 2.2.1.</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Movie video;
OpenCV opencv;
Rectangle[] faces;

void setup()
{
  size(640, 360);

  video = new Movie(this, "people.mp4");
  video.play();
  video.loop();

  opencv = new OpenCV(this, video.width, video.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw()
{
  background(0);
  fill(255);

  image(video, 0, 0);

  opencv.loadImage(video);
  faces = opencv.detect();
}

void movieEvent(Movie m)
{
  m.read();
}
</code></pre>

<p>It crashes at the line: 
  opencv.loadImage(video);</p>

<p>With the message: IndexOutOfBoundsException: Index 3: Size 0</p>

<p>Any ideas? This works fine on Mac OS X with the same version of Processing.</p>
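<p>One likely cause, offered as a guess: on some platforms a Movie reports width and height of 0 until its first frame has been decoded, so new OpenCV(this, video.width, video.height) creates a zero-sized buffer and the later loadImage() fails with exactly this kind of out-of-bounds error. Passing the known movie dimensions sidesteps it:</p>

<pre><code>void setup()
{
  size(640, 360);

  video = new Movie(this, "people.mp4");
  video.play();
  video.loop();

  // use the known dimensions rather than video.width/video.height,
  // which may still be 0 before the first frame is decoded
  opencv = new OpenCV(this, 640, 360);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}
</code></pre>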
]]></description>
   </item>
   <item>
      <title>OpenCV - How can I add a different random image on every face?</title>
      <link>https://forum.processing.org/two/discussion/18457/opencv-how-can-i-ad-a-different-random-image-on-every-face</link>
      <pubDate>Sat, 08 Oct 2016 10:06:13 +0000</pubDate>
      <dc:creator>kiwiwi</dc:creator>
      <guid isPermaLink="false">18457@/two/discussions</guid>
<description><![CDATA[<p>I want to write code where every face gets a different random image out of the 10 images I have.
I'm struggling to separate the different faces: every time I try, the same image appears on all faces.
It would be great if someone could give me a hint on how to separate the faces in the code.</p>

<p>You will see I was a bit desperate. I see the mistake, and it's a big one, I know. But I can't see the solution.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

int num = 3;
PImage[] myImageArray = new PImage[num];
Capture video;
OpenCV opencv;

void setup() {
  size(800, 600);

  for (int i = 0; i &lt; myImageArray.length; i++) {
    myImageArray[i] = loadImage(str(i) + ".png");
  }

  video = new Capture(this, 800/2, 600/2);
  opencv = new OpenCV(this, 800/2, 600/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();
  println(faces.length);                    // print number of faces
  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y); // print position (x/y) of each face
    image(myImageArray[(int)random(num)], faces[i].x-70, faces[i].y-60, faces[i].width+80, faces[i].height+80);
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
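<p>The same image appears on every face because (int)random(num) is re-rolled for every face on every frame, and nothing remembers previous choices. Since detect() gives no face identity between frames, a simple approach is to keep one chosen index per detected face and only roll new ones when extra faces appear. A sketch (faceImages is a hypothetical name; assignments stay stable only while the face count is steady):</p>

<pre><code>ArrayList&lt;Integer&gt; faceImages = new ArrayList&lt;Integer&gt;(); // one image index per face

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();
  // roll a random image only for newly appeared faces; reuse the rest
  while (faceImages.size() &lt; faces.length) faceImages.add((int)random(num));
  while (faceImages.size() &gt; faces.length) faceImages.remove(faceImages.size()-1);

  for (int i = 0; i &lt; faces.length; i++) {
    image(myImageArray[faceImages.get(i)], faces[i].x-70, faces[i].y-60,
          faces[i].width+80, faces[i].height+80);
  }
}
</code></pre>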
]]></description>
   </item>
   <item>
      <title>how to make the image appear for a few moments?</title>
      <link>https://forum.processing.org/two/discussion/17308/how-to-make-the-image-appear-for-a-few-moment</link>
      <pubDate>Sun, 26 Jun 2016 10:14:58 +0000</pubDate>
      <dc:creator>so0ofy</dc:creator>
      <guid isPermaLink="false">17308@/two/discussions</guid>
<description><![CDATA[<p>I have this code, but I can't figure out how to make the image stay on screen longer, so that it doesn't disappear so quickly. Can you help, please?</p>

<pre><code>for (int i = 0; i &lt; fullbody.length; i++) {
  println(fullbody[i].x + "," + fullbody[i].y);
  image(myImageArray[(int) random(10)],fullbody[i].x, fullbody[i].y, fullbody[i].width, fullbody[i].height);
  smooth();
  delay(10);
} 
</code></pre>
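<p>delay() freezes the whole sketch rather than keeping the image visible. The usual pattern is to remember when the detection happened and keep drawing the image until a timeout measured with millis(). A sketch of the idea (lastSeen, lastBody and lastImg are hypothetical names, and the 2000 ms timeout is arbitrary):</p>

<pre><code>int lastSeen = 0;    // millis() timestamp of the last detection
Rectangle lastBody;  // where the body was
PImage lastImg;      // which random image was chosen for it

// in draw(), after running the detector:
if (fullbody.length &gt; 0) {
  if (millis() - lastSeen &gt; 2000) {
    lastImg = myImageArray[(int)random(10)]; // new appearance: pick a fresh image
  }
  lastBody = fullbody[0];
  lastSeen = millis();
}
// keep showing the image for 2 seconds after the last detection
if (lastBody != null &amp;&amp; millis() - lastSeen &lt; 2000) {
  image(lastImg, lastBody.x, lastBody.y, lastBody.width, lastBody.height);
}
</code></pre>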
]]></description>
   </item>
   <item>
      <title>Integrate these two codes?</title>
      <link>https://forum.processing.org/two/discussion/16589/integrate-these-two-codes</link>
      <pubDate>Sat, 14 May 2016 01:30:19 +0000</pubDate>
      <dc:creator>ellewin</dc:creator>
      <guid isPermaLink="false">16589@/two/discussions</guid>
      <description><![CDATA[<p>Hello Processing Forum folks,</p>

<p>I am working on a project that uses 3 webcams that when looked at, will play a video. I am thinking of the screens as entities that need to be acknowledged before they communicate with someone.</p>

<p>Everything was going dandy until I ran into two hiccups.</p>

<p>One is that opencv.loadImage(camright); seems to be the culprit of the error "width(0) and height (0) cannot be &lt;=0", which doesn't make any sense to me because opencv.loadImage(camleft); and opencv.loadImage(camcenter); come before it and don't seem to raise the same issue.</p>

<p>The second hiccup is that I am trying to use Keystone so that I can projection-map these videos onto hanging plexi... I just can't seem to figure out how to get the videos onto the Keystone 'mesh'. Did that make sense?</p>

<p>Anyways, I am very green (taking my first class) and any help would be appreciated immensely. Especially since this project is due next week...</p>

<p>Below is my code... (I really couldn't get the code-formatting thing to work; I tried, I am sorry. If you want to help me with that too, that'd be nice.)</p>

<pre><code>//LIBRARIES 
import deadpixel.keystone.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*; 
import java.awt.Rectangle;

//keystone stuff 
Keystone ks;
CornerPinSurface surfaceleft;
CornerPinSurface surfacecenter;
CornerPinSurface surfaceright;
PGraphics offscreenleft;
PGraphics offscreencenter;
PGraphics offscreenright;

// movie object to play and pause later
//there will be three videos playing... 
Movie myMovieleft;
Movie myMoviecenter;
Movie myMovieright;

OpenCV opencv;

//<a href="https://processing.org/discourse/beta/num_1221233526.html" target="_blank" rel="nofollow">https://processing.org/discourse/beta/num_1221233526.html</a>
//<a href="https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams" target="_blank" rel="nofollow">https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams</a>
Capture camleft;
Capture camcenter;
Capture camright;

String[] captureDevices;  

void setup() {
  //this will println a list of the webcams; put the number shown on the left
  //into the [] indexes below to make them work
  printArray(Capture.list());
  size(2640, 1080, P3D); //this should be large enough to house all the videos; size() must come before any drawing
  background(0);
  opencv = new OpenCV(this, 160, 120);

  //keystone stuff 
  ks = new Keystone(this);
  //this can change 
  surfaceleft = ks.createCornerPinSurface(400, 300, 20);
  surfacecenter = ks.createCornerPinSurface(400, 300, 20);
  surfaceright = ks.createCornerPinSurface(400, 300, 20);
  // We need an offscreen buffer to draw the surface we
  // want projected
  // note that we're matching the resolution of the
  // CornerPinSurface.
  // (The offscreen buffer can be P2D or P3D)
  //P3D tells Processing to use 3D rendering
  //the 400, 300 values must match the CornerPinSurface resolution above
  offscreenleft = createGraphics(400, 300, P3D);
  offscreencenter = createGraphics(400, 300, P3D);
  offscreenright = createGraphics(400, 300, P3D);

  //webcam stuff 
  //this is turning the webcam on and to run
  //the numbers correlate to the println list 
  camleft = new Capture(this, Capture.list()[3] ); //LEFT CAM IS LOGITECH HD 
  //WEBCAM C310 
  camleft.start();

  camcenter = new Capture(this, Capture.list()[79] ); //CENTER CAM IS LOGITECH HD
  //WEBCAM C310-1 
  camcenter.start();

  camright = new Capture(this, Capture.list()[155] ); //RIGHT CAM IS LOGITECH HD 
  //WEBCAM C310-2
  camright.start();

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  // movie stuff 
  // load video
  myMovieleft = new Movie(this, "testvideo.mp4");

  // need to play, pause, loop 
  myMovieleft.play();
  myMovieleft.pause();
  myMovieleft.loop();

  myMoviecenter = new Movie(this, "testvideo-1.mp4");

  // need to play, pause, loop 
  myMoviecenter.play();
  myMoviecenter.pause();
  myMoviecenter.loop();

  myMovieright = new Movie(this, "testvideo-2.mp4");

  // need to play, pause, loop 
  myMovieright.play();
  myMovieright.pause();
  myMovieright.loop();
}
void captureEvent(Capture cam) {
  cam.read();
}

void draw() {

  // open cv detect faces
  opencv.loadImage(camleft);

  // load in faces as rectangles
  Rectangle[] facesleft = opencv.detect();

  // are there faces?
  if (facesleft.length &gt; 0) {
    // sees a face!
    myMovieleft.play();
  } else {
    // no face
    myMovieleft.pause();
  }

  // play video
  if (myMovieleft.available()) {
    myMovieleft.read();
    //offscreen.image
    image(myMovieleft, 0, 540);
  }

  opencv.loadImage(camcenter);

  // load in faces as rectangles
  Rectangle[] facescenter = opencv.detect();

  // are there faces?
  if (facescenter.length &gt; 0) {
    // sees a face!
    myMoviecenter.play();
  } else {
    // no face
    myMoviecenter.pause();
  }

  // play video
  if (myMoviecenter.available()) {
    myMoviecenter.read();
    image(myMoviecenter, 780, height/2);
  }

// THERE IS SOMETHING WRONG WITH THIS opencv.loadImage(camright);

//RIGHT CAM
  // load in faces as rectangles
  Rectangle[] facesright = opencv.detect();

  // are there faces?
  if (facesright.length &gt; 0) {
    // sees a face!
    myMovieright.play();
  } else {
    // no face
    myMovieright.pause();
  }

  // play video
  if (myMovieright.available()) {
    myMovieright.read();
    image(myMovieright, 1560, height/2);

    //keystone stuff
    surfaceleft.render(offscreenleft);
    surfacecenter.render(offscreencenter);
    surfaceright.render(offscreenright);
  }
}
//the save and load function for keystone 
void keyPressed() {
  switch(key) {
  case 'c':
    // enter/leave calibration mode, where surfaces can be warped 
    // and moved
    ks.toggleCalibration();
    break;

  case 'l':
    // loads the saved layout
    ks.load();
    break;

  case 's':
    // saves the layout
    ks.save();
    break;
  }
}
</code></pre>
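<p>A guess at the "width(0) and height (0)" error: a Capture reports width 0 until its first frame arrives, and three cameras rarely come up at the same speed, so camright may simply not be ready when draw() first runs. Guarding each loadImage() call until that camera has delivered a frame is a common workaround; a sketch for the right camera:</p>

<pre><code>// only hand the camera to OpenCV once it has delivered at least one frame
if (camright.width &gt; 0) {
  opencv.loadImage(camright);
  Rectangle[] facesright = opencv.detect();

  if (facesright.length &gt; 0) {
    myMovieright.play();   // sees a face
  } else {
    myMovieright.pause();  // no face
  }
}
</code></pre>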
]]></description>
   </item>
   <item>
      <title>looking to simulate a mouse click</title>
      <link>https://forum.processing.org/two/discussion/16034/looking-to-simulate-a-mouse-click</link>
      <pubDate>Fri, 15 Apr 2016 19:26:56 +0000</pubDate>
      <dc:creator>Jai</dc:creator>
      <guid isPermaLink="false">16034@/two/discussions</guid>
<description><![CDATA[<p>I'm looking to click on the draw window when it is open and click on a face, but without using the mouse to do so. It would be better if Processing had a voice-recognition library, but since I'm yet to come across such a sketch, I'd like to simulate a mouse click at a given screen x&amp;y coordinate.</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;

    Capture video;
    OpenCV opencv;


    void setup() {
      size(640, 480);

      video = new Capture(this, 640/2, 480/2, "360HD Hologen");
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

      video.start();
    }

    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0);
      strokeWeight(1);
      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i &lt; faces.length; i++) {
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }

    }
</code></pre>
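<p>Java's built-in java.awt.Robot class can synthesize real mouse clicks at screen coordinates, and it works from a Processing sketch. A minimal sketch (clickAt is a hypothetical helper; the coordinates are screen coordinates, so add the sketch window's position on screen if you want to click inside it):</p>

<pre><code>import java.awt.Robot;
import java.awt.AWTException;
import java.awt.event.InputEvent;

void clickAt(int screenX, int screenY) {
  try {
    Robot robot = new Robot();
    robot.mouseMove(screenX, screenY);                 // move the pointer
    robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);    // press the left button
    robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);  // and release it
  }
  catch (AWTException e) {
    e.printStackTrace();
  }
}
</code></pre>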
]]></description>
   </item>
   <item>
      <title>how to  combine two different codes together. webcam and ellipses</title>
      <link>https://forum.processing.org/two/discussion/1786/how-to-combine-two-different-codes-together-webcam-and-ellipses</link>
      <pubDate>Tue, 03 Dec 2013 00:23:08 +0000</pubDate>
      <dc:creator>smitty575s</dc:creator>
      <guid isPermaLink="false">1786@/two/discussions</guid>
<description><![CDATA[<p>I am trying to make the wave that I have follow my face using the OpenCV face detector, but I can't seem to rearrange the code so it works. Right now the wave follows the mouse; how do I rearrange it so it follows my face on my webcam instead? Any help would be very much appreciated!
This is the code I have right now:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
// creating the wave -&gt; (ellipses transform into wave as it grows and dies
class Wave {

  PVector loc;
  int farOut;

  Wave() {
    loc = new PVector();
    loc.x = mouseX;
    loc.y = mouseY;


    farOut = 1;

    //colour stroke on wave set on random so it flashes.
    stroke(random(255), random(255), random(255));
  }

  void update() {   
    farOut += 1;
  } 
  void display() {   
    ellipse(loc.x, loc.y, farOut, farOut);
  }

  boolean dead() {

    if(farOut &gt; 90) {      
      return true;
    }   
    return false;
  }
}
//array list of waves
ArrayList&lt;Wave&gt; waves = new ArrayList&lt;Wave&gt;();

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  //detecting just the face
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
 ellipseMode(CENTER);
  video.start();

}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0 );

  fill(255, 255, 255, 50);
  noStroke();
  //face detector code with openCV
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    ellipse(faces[i].x, faces[i].y, 10, 10);

 Wave w = new Wave();
    //array list -&gt; creating new waves
    waves.add(w);
  }

  //run through each of the waves
  //wave size will grow
  for(int i = 0; i &lt; waves.size(); i ++) {
    //show waves - &gt; each wave updated and displayed
    waves.get(i).update();
    waves.get(i).display();

   //kill the wave 
    if(waves.get(i).dead()) {
      waves.remove(i);
    }
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
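<p>The Wave constructor reads mouseX/mouseY, which is why the waves follow the mouse. Passing the detected face position into the constructor instead makes them follow the face; a sketch of the two changes:</p>

<pre><code>// give Wave a constructor that takes a position...
Wave(float x, float y) {
  loc = new PVector(x, y);
  farOut = 1;
  stroke(random(255), random(255), random(255));
}

// ...and in draw(), spawn each wave at the detected face's centre:
for (int i = 0; i &lt; faces.length; i++) {
  Wave w = new Wave(faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2);
  waves.add(w);
}
</code></pre>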
]]></description>
   </item>
   </channel>
</rss>