<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
      <title>Tagged with loadcascade() - Processing 2.x and 3.x Forum</title>
      <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=loadcascade%28%29</link>
      <pubDate>Sun, 08 Aug 2021 19:07:35 +0000</pubDate>
      <description>Tagged with loadcascade() - Processing 2.x and 3.x Forum</description>
      <language>en-CA</language>
      <atom:link href="https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=loadcascade%28%29" rel="self" type="application/rss+xml" />
   <item>
      <title>Using IP-Capture library with IP-Webcam app on android</title>
      <link>https://forum.processing.org/two/discussion/17702/using-ip-capture-library-with-ip-webcam-app-on-android</link>
      <pubDate>Sat, 30 Jul 2016 10:26:59 +0000</pubDate>
      <dc:creator>Rami94</dc:creator>
      <guid isPermaLink="false">17702@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>I'm trying to do some image processing with the OpenCV library, and I'm trying to connect my Android phone's camera to Processing through an app called IP-Webcam, but I can't get the IP-Capture library to connect to my phone. Here's the code I'm using:</p>

<pre><code>import gab.opencv.*;
import java.awt.*;
import ipcapture.*;

IPCapture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new IPCapture(this, "192.168.1.9:8080", "", ""); // local IP for IP-Webcam
  cam.start();
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  scale(2);
  //println(cam.isAvailable());
  if (cam.isAvailable()) {
    cam.read();
    image(cam, 0, 0);
  }

  opencv.loadImage(cam);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>

<p>The program doesn't find the camera; cam.isAvailable() always returns false.</p>

<p>Thank you.</p>
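<p>A detail worth checking (an assumption from how MJPEG viewers usually work, not verified against this setup): the IPCapture constructor typically expects a full URL to the MJPEG stream, and IP-Webcam serves its stream under a /video path, so the address string would look like this:</p>

```java
public class IpWebcamUrl {
    // Builds the MJPEG stream URL that IP-Webcam exposes (assumed path: /video).
    static String streamUrl(String hostAndPort) {
        return "http://" + hostAndPort + "/video";
    }

    public static void main(String[] args) {
        // The sketch above passed "192.168.1.9:8080" with no scheme or path.
        System.out.println(streamUrl("192.168.1.9:8080"));
    }
}
```

<p>In the sketch that would mean <code>new IPCapture(this, "http://192.168.1.9:8080/video", "", "")</code>, again assuming the default IP-Webcam stream path.</p>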
]]></description>
   </item>
   <item>
      <title>Get biggest face from openCV</title>
      <link>https://forum.processing.org/two/discussion/25492/get-biggest-face-from-opencv</link>
      <pubDate>Sun, 10 Dec 2017 11:05:56 +0000</pubDate>
      <dc:creator>martinusbar</dc:creator>
      <guid isPermaLink="false">25492@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>This is my first post on this forum, and I'm in need of your help. I'm working on a project with an android that has a set of animatronic eyes and a face-tracking feature. I'm using the OpenCV library and Processing 3.3.6 for the tracking, and an Arduino to control the eyes. At the moment I've created a script where the eyes follow only ONE face, but sometimes the eyes 'jump' when a new face enters the webcam view. I would like to avoid this, so my reasoning was to always take the detected face with the biggest width and send its x and y coordinates to the Arduino. I found similar questions on the forum about getting the largest element from an array, and although I understand the logic, my sketch keeps outputting all sets of x and y coordinates detected. As a supplementary note, I work heuristically with code and have very basic knowledge of programming languages. Any push in the right direction is highly appreciated. Below is the Processing code:</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    import processing.serial.*;

    Capture video;
    OpenCV opencv;
    Serial myPort;  // Create object from Serial class

    int newXpos, newYpos;
    //These variables hold the x and y location for the middle of the detected face
    int midFaceX = 0;
    int midFaceY = 0;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

     //println(Serial.list()); // List COM ports (use this to figure out which port the Arduino is connected to)
      String portName = Serial.list()[1];
      //selects the second COM port from the list (change the number in the [] if your sketch fails to connect to the Arduino)
      myPort = new Serial(this, portName, 19200);   //Baud rate is set to 19200 to match the Arduino baud rate.

      video.start();
    }


    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0, 40);
      strokeWeight(3);
      Rectangle[] faces = opencv.detect();

      int maxValueFace = 0;
      int maxIndex = -1;

      for (int i = 0; i &lt; faces.length; i++ ) {

        if (faces[i].width &gt; maxValueFace) {
          maxIndex = i;
          maxValueFace = faces[i].width;
          //println(maxValueFace);

        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); //
        midFaceX = faces[i].x + (faces[i].width/2); // middle of the face
        midFaceY = faces[i].y + (faces[i].height/2); // middle of the face
        float xpos = map(midFaceX, 0, width, 90, 120); //maps range of servos L-&gt;R
        float ypos = map(midFaceY, 0, height, 90, 120); //maps range of servos U-&gt;D
        int newXpos = (int)xpos; //converts position X float into integer
        int newYpos = (int)ypos; //converts position Y float into integer
        myPort.write(newXpos+"x"); // send X coordinate to Arduino
        myPort.write(newYpos+"y"); // send Y coordinate to Arduino
        println(midFaceX + "," + midFaceY);
        }
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }
</code></pre>
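<p>The serial writes in the sketch above sit inside the max-width check, so they fire once for every new running maximum found during the scan, which is why several coordinate pairs come out. One way to act on only the biggest face (a sketch using java.awt.Rectangle, as in the code above) is to finish the scan first and use the winner afterwards:</p>

```java
import java.awt.Rectangle;

public class BiggestFace {
    // Returns the index of the widest rectangle, or -1 for an empty array.
    static int widest(Rectangle[] faces) {
        int maxIndex = -1;
        int maxWidth = 0;
        for (int i = 0; i < faces.length; i++) {
            if (faces[i].width > maxWidth) {
                maxWidth = faces[i].width;
                maxIndex = i;
            }
        }
        return maxIndex;
    }

    public static void main(String[] args) {
        Rectangle[] faces = {
            new Rectangle(10, 10, 40, 40),
            new Rectangle(50, 60, 90, 90),   // widest
            new Rectangle(200, 20, 30, 30)
        };
        int i = widest(faces);
        // Only now, after the whole loop, compute the centre and send it once.
        int midFaceX = faces[i].x + faces[i].width / 2;
        int midFaceY = faces[i].y + faces[i].height / 2;
        System.out.println(i + ": " + midFaceX + "," + midFaceY);
    }
}
```

<p>In the sketch that means moving the rect/map/myPort.write block out of the for loop and running it once on <code>faces[maxIndex]</code> when <code>maxIndex &gt;= 0</code>.</p>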
]]></description>
   </item>
   <item>
      <title>How to trigger an action with face detection?</title>
      <link>https://forum.processing.org/two/discussion/25482/how-to-trigger-an-action-with-face-detection</link>
      <pubDate>Sat, 09 Dec 2017 23:44:10 +0000</pubDate>
      <dc:creator>arnolds112</dc:creator>
      <guid isPermaLink="false">25482@/two/discussions</guid>
      <description><![CDATA[<p>Hello,
Is it possible to use face detection to trigger a video to play?
If the webcam detects a face, the video plays.
I'm having trouble achieving this.</p>
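<p>The usual pattern is to keep a boolean state and start the movie only on the transition from "no face" to "face", so it doesn't restart every frame. A minimal sketch of that edge-detection logic (plain Java; the face count would come from opencv.detect().length, and the play flag stands in for a Movie.play() call):</p>

```java
public class FaceTrigger {
    boolean wasPresent = false;
    boolean playing = false;

    // Call once per frame with the current face count from the detector.
    void update(int faceCount) {
        boolean present = faceCount > 0;
        if (present && !wasPresent) {
            playing = true;   // here you would call video.play()
        }
        wasPresent = present;
    }

    public static void main(String[] args) {
        FaceTrigger t = new FaceTrigger();
        t.update(0);  // no face yet, nothing happens
        t.update(2);  // face appears: playback starts exactly once
        System.out.println(t.playing);
    }
}
```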
]]></description>
   </item>
   <item>
      <title>Interactive image using Face detection (OpenCV)</title>
      <link>https://forum.processing.org/two/discussion/25470/interactive-image-using-face-detection-opencv</link>
      <pubDate>Sat, 09 Dec 2017 11:05:54 +0000</pubDate>
      <dc:creator>Pharaonn</dc:creator>
      <guid isPermaLink="false">25470@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>There is a program I would like to make with a face detector using OpenCV. Basically, you would have an image A (it could be a video) that, when a face is close enough to the camera, slowly morphs into an image B.
So far, I got the program to change the images depending on whether or not a face is recognized, but now I would like to:
1) tell the face detector to detect the face only when it's around 3 feet (1 meter) away, so the image changes only at that moment;
2) make the image change really smooth and progressive (maybe the two images could merge together using opacity or something?)</p>

<p>I am new to Processing and even more so to OpenCV, which is why I would be so glad if someone has the solution or can help me!</p>

<p>I quoted my code in case it helps. The images can be replaced (by the way, they are scaled up when the program runs and I don't understand why; that's another problem, but not the most important one).</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

PImage image;
PImage flou;
PImage nette;
Capture cam;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  fullScreen();
  background(0, 0, 0);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  image = loadImage("image.jpg");
  nette = loadImage("nette.jpg");
  flou = loadImage("flou.jpg");
}

void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);

  if (faces != null) {
    for (int i = 0; i &lt; faces.length; i++) {
      image(nette, 0, 0);
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces.length &lt;= 0) {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    println("no faces");
    image(flou, 0, 0);
    text("UNDETECTED", 200, 100);
  }
}

void captureEvent(Capture cam) {
  cam.read();
}
</code></pre>

<p>Thank you so much !!</p>
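<p>For the smooth morph in point 2, a common approach is to ease an opacity value toward a target and draw image B with tint(255, alpha) over image A each frame. The easing itself is just arithmetic (a sketch; the 0.05 easing rate is an arbitrary choice, and "close enough" can be approximated by a face-width threshold, since the rectangle grows as the face approaches the camera):</p>

```java
public class CrossFade {
    float alpha = 0f;            // current opacity of image B (0..255)
    static final float EASING = 0.05f;

    // Ease toward 255 while a face is close enough, toward 0 otherwise.
    void update(boolean faceClose) {
        float target = faceClose ? 255f : 0f;
        alpha += (target - alpha) * EASING;
    }

    public static void main(String[] args) {
        CrossFade f = new CrossFade();
        for (int i = 0; i < 100; i++) f.update(true);
        System.out.println(f.alpha);  // approaches 255 smoothly, never jumps
    }
}
```

<p>In the sketch: <code>image(flou, 0, 0); tint(255, fade.alpha); image(nette, 0, 0); noTint();</code> gives the progressive merge. The exact face width that corresponds to 1 meter depends on the camera and would need to be measured.</p>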
]]></description>
   </item>
   <item>
      <title>Processing+OpenCV - FaceDetection on a Movie</title>
      <link>https://forum.processing.org/two/discussion/24824/processing-opencv-facedetection-on-a-movie</link>
      <pubDate>Tue, 31 Oct 2017 20:41:47 +0000</pubDate>
      <dc:creator>Jawah</dc:creator>
      <guid isPermaLink="false">24824@/two/discussions</guid>
      <description><![CDATA[<p>I am using Processing with the OpenCV library and wanted to rewrite the example code from the creators' GitHub so that, instead of doing face detection on a camera capture, I load a video (.mp4).</p>

<p>Link to the Git and the example Code (which is working): <a rel="nofollow" href="https://github.com/atduskgreg/opencv-processing/blob/master/examples/LiveCamTest/LiveCamTest.pde">Link</a></p>

<p>Here is my Sketch:</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
Movie myMovie;
Rectangle[] faces;

void setup() {
  size(480, 270);

  myMovie = new Movie(this, "people3.mp4");
  myMovie.loop();
  opencv = new OpenCV(this, myMovie.width, myMovie.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void movieEvent(Movie myMovie) {
  myMovie.read();
}

void draw() {

  background(0);
  if (myMovie.available()) {    

    opencv.loadImage(myMovie);
    faces = opencv.detect();
    image(myMovie, 0, 0);

    if (faces != null) {
      for (int i = 0; i &lt; faces.length; i++) {
        strokeWeight(2);
        stroke(255, 0, 0);
        noFill();
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }
  }
}
</code></pre>

<p>What I'm getting is an
<strong>IndexOutOfBoundsException: Index: 3, Size: 0</strong>
at opencv.loadImage(myMovie), and I don't know why.</p>

<p>Appreciating any help! :-)</p>
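<p>One likely cause (an assumption from the symptoms, not verified against this exact setup): myMovie.width and myMovie.height are still 0 in setup() because no frame has been decoded yet, so the OpenCV buffer is created with a zero size. A guard pattern that defers initialization until real dimensions are available looks like this in plain Java:</p>

```java
public class LazyInit {
    Integer bufWidth = null, bufHeight = null;

    // Call every frame; initialises the buffer once valid dimensions arrive.
    boolean ready(int frameWidth, int frameHeight) {
        if (bufWidth == null) {
            if (frameWidth <= 0 || frameHeight <= 0) return false; // not yet
            bufWidth = frameWidth;   // here you would do: opencv = new OpenCV(this, w, h)
            bufHeight = frameHeight;
        }
        return true;
    }

    public static void main(String[] args) {
        LazyInit li = new LazyInit();
        System.out.println(li.ready(0, 0));     // early frames may report 0x0
        System.out.println(li.ready(480, 270)); // now it is safe to detect
    }
}
```

<p>Alternatively, since size(480, 270) matches the video, constructing OpenCV with those literal numbers instead of myMovie.width/height would sidestep the problem.</p>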
]]></description>
   </item>
   <item>
      <title>OpenCV kill old Faces loop</title>
      <link>https://forum.processing.org/two/discussion/24820/opencv-kill-old-faces-loop</link>
      <pubDate>Tue, 31 Oct 2017 16:40:17 +0000</pubDate>
      <dc:creator>corbinyo</dc:creator>
      <guid isPermaLink="false">24820@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,
I am using openCV and face detection to control the transparency of images. The position of the detected face on the x axis controls transparency. What I would like to do is be able to ignore the other faces that get picked up by the webcam. Is there a way to get the value of face 1 and ignore face 2, face 3, face 4 etc., but upon face 1 being killed, make face 2 = face 1, face 3 = face 2 etc., and constantly only use the data from face 1 as the controlling parameter?</p>

<p>I have looked into the OpenCV example (Example: WhichFace) and I can't seem to wrangle it into what I need it for.
Here is my existing code:</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;

    Capture video;
    OpenCV opencv;

    PImage ed;
    PImage genie;
    int offset = 0;

    float easing = 0.05;

    void setup() {
      size(724, 960);

      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
      video.start();
      ed = loadImage("ed.jpg");
      genie = loadImage("genie.jpg");
    }

    void draw() {
      scale(1);
      opencv.loadImage(video);

      noFill();
      stroke(0, 255, 0);
      strokeWeight(3);

      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i &lt; faces.length; i++) {
        tint(255, 230);         // genie at near-full opacity
        image(genie, 0, 0);

        int dx = (faces[i].x - genie.width/2) - offset;
        offset += dx * easing;
        tint(255, faces[i].x);  // ed's opacity follows the face's x position
        image(ed, 0, 0);
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }
</code></pre>
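<p>One way to get the "face 1 until it disappears" behaviour is to remember the last tracked centre and, each frame, pick the detection nearest to it; only when nothing is near do you promote another face to be face 1. A sketch of that matching step (plain Java, squared-distance comparison over the rectangles that opencv.detect() would return):</p>

```java
import java.awt.Rectangle;

public class NearestFace {
    // Index of the rectangle whose centre is closest to (px, py); -1 if empty.
    static int nearest(Rectangle[] faces, int px, int py) {
        int best = -1;
        long bestDist = Long.MAX_VALUE;
        for (int i = 0; i < faces.length; i++) {
            long dx = faces[i].x + faces[i].width / 2 - px;
            long dy = faces[i].y + faces[i].height / 2 - py;
            long d = dx * dx + dy * dy;
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        Rectangle[] faces = { new Rectangle(0, 0, 50, 50),       // centre (25, 25)
                              new Rectangle(200, 200, 50, 50) }; // centre (225, 225)
        // Last frame we tracked a face near (30, 30), so face 0 stays "face 1"
        // even though a second face has entered the frame.
        System.out.println(nearest(faces, 30, 30));
    }
}
```

<p>In the sketch you would feed only <code>faces[nearest(...)]</code> into the tint calculation, updating the remembered centre each frame, and reset it when the array comes back empty.</p>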
]]></description>
   </item>
   <item>
      <title>Face detection - OpenCV: If clauses to act when no face is detected - HELP PLEASE!</title>
      <link>https://forum.processing.org/two/discussion/24052/face-detection-opencv-if-clauses-to-act-when-no-face-is-detected-help-please</link>
      <pubDate>Thu, 07 Sep 2017 09:52:28 +0000</pubDate>
      <dc:creator>EmRod</dc:creator>
      <guid isPermaLink="false">24052@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,</p>

<p>I'm working on a face detection sketch that will put a square around your face when it's detected, but will display text or an image when no face is detected on screen. Ideally I'd like it to respond after a couple of seconds of no detection, but anything is better than nothing!</p>

<p>I currently have the face detection code working. But when I try to add an if clause to act when no face is detected, it doesn't work when I run the code. Any help is greatly appreciated.</p>

<p>Here is the current code:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  size(640, 480, P2D);
  background(0, 0, 0);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);
  if (faces != null) {
    for (int i = 0; i &lt; faces.length; i++) {
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
  if (faces == null) {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", 100, 100);
  }
}

void captureEvent(Capture cam) {
  cam.read();
}
</code></pre>
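<p>The detect() call in this library returns an empty array rather than null when nothing is found (hedged: this matches the other sketches on this page, which read faces.length without a null check), so the <code>faces == null</code> branch never fires. The test you want is on the length:</p>

```java
import java.awt.Rectangle;

public class DetectionCheck {
    // "No face" must be tested via the array length, because the detector
    // hands back an empty array, not null, when nothing is found.
    static boolean faceDetected(Rectangle[] faces) {
        return faces != null && faces.length > 0;
    }

    public static void main(String[] args) {
        Rectangle[] none = new Rectangle[0];  // what a no-face frame yields
        Rectangle[] one  = { new Rectangle(10, 10, 80, 80) };
        System.out.println(faceDetected(none) + " " + faceDetected(one));
    }
}
```

<p>So in the sketch, replace <code>if (faces == null)</code> with <code>if (faces.length == 0)</code>. For the couple-of-seconds delay, count consecutive no-face frames and only show "UNDETECTED" once the count exceeds roughly 2 * frameRate.</p>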
]]></description>
   </item>
   <item>
      <title>Flip Webcam opencv</title>
      <link>https://forum.processing.org/two/discussion/23090/flip-webcam-opencv</link>
      <pubDate>Fri, 16 Jun 2017 14:51:25 +0000</pubDate>
      <dc:creator>Apoel_95</dc:creator>
      <guid isPermaLink="false">23090@/two/discussions</guid>
      <description><![CDATA[<p>Hi, as you can see from the code, I was able to mirror the webcam, but how do I make OpenCV work with the flipped webcam image?
I would like to do face detection on the flipped webcam.</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);

  cam = new Capture(this, width, height, 30);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }

  pushMatrix();
  scale(-1, 1);
  translate(-cam.width, 0);
  image(cam, 0, 0);
  popMatrix();

  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>
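<p>The scale(-1, 1) in the sketch only mirrors the drawing; OpenCV still sees the original frame. One option is to flip the pixel buffer itself before handing it to the detector. The row reversal is straightforward (a sketch on a plain int[] pixel array, as a PImage's pixels would be laid out):</p>

```java
public class MirrorPixels {
    // Reverses each row of a width*height pixel buffer in place.
    static void flipHorizontal(int[] pixels, int width, int height) {
        for (int y = 0; y < height; y++) {
            int rowStart = y * width;
            for (int x = 0; x < width / 2; x++) {
                int tmp = pixels[rowStart + x];
                pixels[rowStart + x] = pixels[rowStart + width - 1 - x];
                pixels[rowStart + width - 1 - x] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] px = { 1, 2, 3,
                     4, 5, 6 };   // a 3x2 "image"
        flipHorizontal(px, 3, 2);
        System.out.println(px[0] + " " + px[3]);  // 3 6
    }
}
```

<p>In Processing you could copy cam into a PImage, flip its pixels like this, call updatePixels(), and pass that flipped copy to opencv.loadImage(); then the detection rectangles line up with the mirrored view without any coordinate math.</p>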
]]></description>
   </item>
   <item>
      <title>Speeding up Open CV detect face</title>
      <link>https://forum.processing.org/two/discussion/21971/speeding-up-open-cv-detect-face</link>
      <pubDate>Thu, 13 Apr 2017 09:12:24 +0000</pubDate>
      <dc:creator>CharlesDesign</dc:creator>
      <guid isPermaLink="false">21971@/two/discussions</guid>
      <description><![CDATA[<p>Hi All,</p>

<p>I'm trying to implement real-time face detection using OpenCV on the IR image of the Kinect; unfortunately, it takes my sketch from 60 fps down to 6 fps. I am aware KinectPV2 does face detection, but it's nowhere near as good as OpenCV's.
Can someone suggest a solution? I've tried the multithreaded "you are einstein" sketch, but I couldn't get it to run.</p>

<pre><code>import KinectPV2.*;
import gab.opencv.*;
import java.awt.Rectangle;

KinectPV2 kinect;

FaceData [] faceData;

OpenCV opencv;
Rectangle[] faces;

PImage img;

void setup() {
  size(1000, 500, P2D);

  opencv = new OpenCV(this, 512, 424);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  kinect = new KinectPV2(this);

  //for face detection base on the infrared Img
  kinect.enableInfraredImg(true);

  //enable face detection
  kinect.enableFaceDetection(true);

  kinect.enableDepthImg(true);

  kinect.init();
}

void draw() {
  background(0);
  img = kinect.getInfraredImage(); //512 424


  opencv.loadImage(img);
  faces = opencv.detect();



  image(img, 0, 0);
  image(kinect.getDepthImage(), img.width, 0);

  fill(255);
  text("frameRate "+frameRate, 50, 50);

  noFill();
  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>
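<p>A common speedup (a general technique, not specific to this library) is to run the cascade on a half- or quarter-size copy of the IR image and scale the resulting rectangles back up; detection time grows with pixel count, so a 2x downscale is roughly a 4x saving. Scaling the rectangles back is simple:</p>

```java
import java.awt.Rectangle;

public class RescaleFaces {
    // Maps rectangles found on a downscaled image back to full resolution.
    static Rectangle[] upscale(Rectangle[] faces, int factor) {
        Rectangle[] out = new Rectangle[faces.length];
        for (int i = 0; i < faces.length; i++) {
            out[i] = new Rectangle(faces[i].x * factor, faces[i].y * factor,
                                   faces[i].width * factor, faces[i].height * factor);
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. detected at 256x212 (the 512x424 IR image halved), drawn at full size.
        Rectangle[] small = { new Rectangle(10, 20, 30, 40) };
        Rectangle r = upscale(small, 2)[0];
        System.out.println(r.x + "," + r.y + " " + r.width + "x" + r.height);
    }
}
```

<p>Two other cheap options that combine well with this: run detect() only every Nth frame while drawing the last known rectangles in between, and shrink the cascade's search range if the faces are always a similar size.</p>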
]]></description>
   </item>
   <item>
      <title>Create an Image Mask On Top of a Live Feed</title>
      <link>https://forum.processing.org/two/discussion/18966/create-an-image-mask-on-top-of-a-live-feed</link>
      <pubDate>Thu, 10 Nov 2016 01:06:07 +0000</pubDate>
      <dc:creator>lbrez16</dc:creator>
      <guid isPermaLink="false">18966@/two/discussions</guid>
      <description><![CDATA[<p>Hey all!</p>

<p>So for this project I'm trying to create a sketch that will use a live webcam feed and face tracking to put an image on top of the tracked face. After this point, I want to press a key ('n' in the code I posted below) and have it switch to a different picture. Right now, as a base, I have Processing's "LiveFaceTracking" example in my code. Any help you guys could give me would be greatly appreciated!</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    PImage WD;
    PImage GJ;

    Capture video;
    OpenCV opencv;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      video.start();

      WD = loadImage("WD");
      GJ = loadImage("GJ");
    }

    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0);
      strokeWeight(3);
      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i &lt; faces.length; i++) {
        println(faces[i].x + "," + faces[i].y);
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }

    void keyPressed(){
      if (key == 'n') {   // '==' compares; a single '=' assigns and won't compile
        // swap which PImage gets drawn on the face here (e.g. between WD and GJ)
      }
    }
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to track an image moving?</title>
      <link>https://forum.processing.org/two/discussion/20130/how-to-track-an-image-moving</link>
      <pubDate>Sat, 07 Jan 2017 15:45:23 +0000</pubDate>
      <dc:creator>Markx</dc:creator>
      <guid isPermaLink="false">20130@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I'm new to Processing and I'm working on a little game. But I've got a problem:</p>

<p>I don't know how to get the position (x, y) of my image "train" (in the draw section) in order to make my background ("fond") move.</p>

<p>Thanks in advance.</p>

<pre><code>import processing.sound.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
int score = 5;
int temps; // countdown value

boolean depart; // start the countdown (on click)

float cx, cy;
boolean alive = true;
float trainX;
float trainY;


Rectangle[] faces;
Capture cam;

float x;
float y;
float easing = 1;


PImage fond;
PImage train;
PImage smiley;

void setup() {
  size(1000, 1000);

 video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  video.start();
  faces = opencv.detect();

  fond = loadImage("Map.bmp");
  train = loadImage("train.gif");  
  smiley = loadImage("train.gif");

}

void draw() {
  scale(0.5);
  image(fond, x, y);
  scale(0.5);
  opencv.loadImage(video);
  imageMode(CENTER);

  // display the webcam image

  Rectangle[] faces = opencv.detect();
  for (int i = 0; i &lt; faces.length; i++) {
    float x = faces[i].x + faces[i].width / 2;
    float y = faces[i].y + faces[i].height / 2;
    image(smiley, x, y, 300, 300);
  }

  float targetX = trainX; // was "xtrain", which is never declared
  float dx = targetX - x;
  x += dx * easing;

  float targetY = trainY; // was "ytrain", which is never declared
  float dy = targetY - y;
  y += dy * easing;


  // countdown text

  fill(#585858);
  text("TEMPS RESTANT :", 1170, 30);
  fill(#FF0000);
  text(temps, 1310, 30);
}



void captureEvent(Capture c) {
  c.read();
}

// countdown function
void compte_rebours() {
  if (depart == true) {
    if (temps == 0) {
      temps = 0;
    }
    else {
      temps = 60 - millis()/1000; // count down from 60
    }
  }
}  
</code></pre>
]]></description>
   </item>
   <item>
      <title>How can I let my face grow?</title>
      <link>https://forum.processing.org/two/discussion/19452/how-can-i-let-my-face-grow</link>
      <pubDate>Thu, 01 Dec 2016 17:17:30 +0000</pubDate>
      <dc:creator>Adinda123</dc:creator>
      <guid isPermaLink="false">19452@/two/discussions</guid>
      <description><![CDATA[<p><strong>I am doing a project about obesity.</strong></p>

<p>I am using my webcam with the face tracker, and I copy my face into a rectangle.
What I want is for my face (the rectangle) to get bigger and bigger over two to three minutes.</p>

<p><em>I hope somebody can help me with that! :)</em></p>

<p>Below is my code:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
PImage cam;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);


  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    copy(cam, faces[i].x, faces[i].y, faces[i].width, faces[i].height, faces[i].x - faces[i].width/2, faces[i].y, faces[i].width * 2,             
faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
  cam = c.get();
}
</code></pre>
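<p>For growth over two to three minutes, the usual trick is to derive a scale factor from elapsed time rather than incrementing something per frame, so the effect is frame-rate independent. The arithmetic is just a clamped linear ramp (a sketch; the 150 000 ms duration and 3x final size are arbitrary choices):</p>

```java
public class GrowthFactor {
    static final float DURATION_MS = 150000f;  // ~2.5 minutes
    static final float MAX_GROWTH = 3.0f;      // final size = 3x the face rect

    // Scale factor for the copied face region at a given elapsed time.
    static float factor(float elapsedMs) {
        float t = Math.min(elapsedMs / DURATION_MS, 1.0f); // clamp at the end
        return 1.0f + t * (MAX_GROWTH - 1.0f);
    }

    public static void main(String[] args) {
        // In the sketch, elapsedMs would be millis() since the sketch started
        // (or since a face first appeared).
        System.out.println(factor(0f) + " " + factor(75000f) + " " + factor(999999f));
    }
}
```

<p>In the copy() call above, multiply the destination width and height by this factor (and offset x/y by half the extra size) so the pasted face swells gradually.</p>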
]]></description>
   </item>
   <item>
      <title>How to use ultrasonic sensor to trigger a video ? (Arduino+processing)</title>
      <link>https://forum.processing.org/two/discussion/19293/how-to-use-ultrasonic-sensor-to-trigger-a-video-arduino-processing</link>
      <pubDate>Fri, 25 Nov 2016 18:17:22 +0000</pubDate>
      <dc:creator>eliseber</dc:creator>
      <guid isPermaLink="false">19293@/two/discussions</guid>
      <description><![CDATA[<p>Hey!
It's the first time I'm using Arduino and Processing at the same time, and I'm completely lost!
I'm designing an interactive installation, and I need to control the position in a video with distance: as we get closer to the sensor, the video will move.
I've set up an Arduino sketch which gives the distance.
For Processing, I found a sketch which controls the position with the webcam (using the size of the spectator's head: as they get closer, their head becomes bigger and the position moves forward). Do you have any ideas or thoughts about how I can modify this code? It's the same idea, with the Arduino instead of the webcam, but I don't know where to start...</p>

<p>Thank you very much!</p>

<p>Arduino :</p>

<pre><code>#include &lt;NewPing.h&gt;
#include &lt;Servo.h&gt;


#define TRIGGER_PIN  12  // Arduino pin tied to trigger pin on the ultrasonic sensor.
#define ECHO_PIN     11  // Arduino pin tied to echo pin on the ultrasonic sensor.
#define MAX_DISTANCE 200 // Maximum distance we want to ping for (in centimeters). Maximum sensor distance is rated at 400-500cm.

int LEDpin = 13;
Servo myservo;
int val;

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE); // NewPing setup of pins and maximum distance.

void setup() {
  Serial.begin(9600); // Open serial monitor at 9600 baud to see ping results.
  pinMode(LEDpin, OUTPUT);
  myservo.attach(9);// attaches servo to pin 9

}

void loop() {
  delay(400);                      // Wait 400ms between pings. 29ms should be the shortest delay between pings.
  unsigned int uS = sonar.ping(); // Send ping, get ping time in microseconds (uS).
  //Serial.print("Ping: ");
  Serial.println(uS / US_ROUNDTRIP_CM); // Convert ping time to distance in cm and print result (0 = outside set distance range)
  //Serial.println("cm");
  //if(uS / US_ROUNDTRIP_CM &lt; 100) {digitalWrite(LEDpin, HIGH);}
  //else if (uS / US_ROUNDTRIP_CM &gt; 100) {digitalWrite(LEDpin, LOW);}
 // else if (uS / US_ROUNDTRIP_CM &gt; 100) {digitalWrite(LEDpin, LOW);}
  //delay (200);

  val = (uS / US_ROUNDTRIP_CM);
  val = map(val, 0, 172, 15, 180);
  myservo.write(val);
  delay (150);


}
</code></pre>

<hr />

<p>PROCESSING</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;
//-------------------------------------------------
/*


*/
//-------------------------------------------------
Capture video;
OpenCV opencv;
//---
Movie monSuperFilm; // the movie
float positionDuFilmEnSecondes; // position within the movie
Integer surfaceVisages,surfaceMin,surfaceMax;
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void setup() {
  //----------------
  size(1080, 720); // dimensions of your movie in pixels; use your movie's resolution
  //-----------------
  // webcam part
  //-----------------
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  video.start();
  //-----------------
  // movie part
  //-----------------
  monSuperFilm = new Movie(this,"pattern1080-6low-1.mov"); // load the movie; use the name of your movie in the data folder
  monSuperFilm.loop();
  //-----------------
  // faces part
  //-----------------
  // the larger the face area in the image returned by the camera, the closer we are
  // the program sums the areas of all faces captured by the camera; try it with several people in front of the mac!
  surfaceMin=400;  // min 0 // face area on the camera corresponding to the start of the movie
  surfaceMax=1500; // max 640x480=307200 // face area on the camera corresponding to the end of the movie
}
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void draw() {
  //background(255);
  //scale(2);
  opencv.loadImage(video);

  //image(video, 0, 0 );

  //noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);
//---------------------------------------------
// compute the area occupied by the faces
//---------------------------------------------
surfaceVisages=0;
  for (int i = 0; i &lt; faces.length; i++) {
    //println(faces[i].x + "," + faces[i].y);
    //rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    surfaceVisages=surfaceVisages+faces[i].width*faces[i].height;
  }
  fill(0);
  text(str(surfaceMin)+" / "+str(surfaceVisages)+" / "+str(surfaceMax),10,50);
  monSuperFilm.read(); // read the movie frame
  positionDuFilmEnSecondes=map(surfaceVisages,surfaceMin,surfaceMax,0,monSuperFilm.duration()); 
  monSuperFilm.jump(positionDuFilmEnSecondes); // seek within the movie
  image(monSuperFilm, 0, 0); // display the current movie frame

}
//-------------------------------------------------
/*


*/
//-------------------------------------------------
void captureEvent(Capture c) {
  c.read();
}
</code></pre>
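<p>Swapping the webcam for the Arduino mostly means replacing surfaceVisages with the centimetre value read from serial, then mapping it onto the movie duration exactly as the sketch above already does with map(). The mapping itself is the same linear interpolation Processing's map() performs (reimplemented here so the numbers can be checked; the 120-second duration and 180 cm range are example values):</p>

```java
public class DistanceToFilm {
    // Same formula as Processing's map(value, inMin, inMax, outMin, outMax).
    static float map(float value, float inMin, float inMax,
                     float outMin, float outMax) {
        return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        float duration = 120f;  // movie length in seconds (example value)
        // Halfway through a 0..180 cm sensor range lands halfway into the film.
        float pos = map(90f, 0f, 180f, 0f, duration);
        System.out.println(pos);
    }
}
```

<p>On the Processing side, read the line the Arduino prints with <code>myPort.readStringUntil('\n')</code>, trim and parse it to a float, and feed that into the map/jump pair in place of the face-area value.</p>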
]]></description>
   </item>
   <item>
      <title>Creating Pointillism effect on an image</title>
      <link>https://forum.processing.org/two/discussion/19181/creating-pointillism-effect-on-an-image</link>
      <pubDate>Sun, 20 Nov 2016 20:21:11 +0000</pubDate>
      <dc:creator>huimc</dc:creator>
      <guid isPermaLink="false">19181@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I am currently working on a project in Processing to develop an interface that uses face detection to view and interact with an image. When the user enters modify mode, the program adds a pointillism effect to the image within the face region. The small circles should be retained when the position indicator moves away. I am currently testing the function using mouse movement. However, the small circles don't persist; it seems they are covered by the original image.</p>

<p>Any suggestions? I can't see what went wrong, since I didn't place the image after drawing the circles.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

PImage img; 
PImage imgBlur, imgBefore, imgPoint, imgPointAfter;
Capture cam;
OpenCV opencv;
OpenCV imgOpenCV;

int camWidth = 320;
int camHeight = 240;

int mode = 0; //0: default; 1: view; 2: modify; 3: replace;

void setup() {

  //hint(DISABLE_DEPTH_SORT);
  size(1000, 647);
  cam = new Capture(this, 320, 240);  
  cam.start();
  opencv = new OpenCV(this,320,240);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  imageMode(CENTER);
  img = loadImage("selfies.jpg");  // Load the image into the program
  imgPoint = img.get(0,0,width,height);
  imgBlur = loadImage("selfies.jpg");
  imgBlur.filter(BLUR, 5);


}

void draw() {

  if (cam.available()) {

    image(img, img.width/2, img.height/2);  // Displays the image at its actual size at point (0,0)
    cam.read();  
    opencv.loadImage(cam);  
    image(cam, width - cam.width/2, height - cam.height/2);

    Rectangle[] faces = opencv.detect();
    noFill();
    stroke(0,255,0);
    strokeWeight(3);

    if (faces.length&gt;0) {

        if (mode == 1) {

         image(imgBlur, 0, 0);
         imgBefore = img.get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
         image(imgBefore,faces[0].x,faces[0].y);

        } else if ( mode == 2) {

          noStroke();
          loadPixels();
          imgPointAfter = imgPoint.get(mouseX,mouseY,100,100);
          int x = (int)random(mouseX,mouseX+imgPointAfter.width);
          int y = (int)random(mouseY,mouseY+imgPointAfter.height);
          int i = x + imgPoint.width*y;
          color c = pixels[i];
          fill(c);
          ellipse(x, y, 17,17);
        }

        for (int i=0; i&lt;faces.length; i++) {
          rect(width - camWidth + faces[i].x, height - camHeight + faces[i].y, faces[i].width, faces[i].height); // cam face detect
          rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // position indicator
        }

    } else if (mode == 1) {

      image(imgBlur, 0, 0);
    }
  }
}

void keyPressed(){
  switch (key) {
    case 'v':
      mode = 1;
      break;
    case 'm':
      mode = 2;
      break;
    case 'r':
      mode = 3;
      break;
    case 'e':
      mode = 0;
      break;
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>why is my program running on the wrong side of screen?</title>
      <link>https://forum.processing.org/two/discussion/18926/why-is-my-program-running-on-the-wrong-side-of-screen</link>
      <pubDate>Mon, 07 Nov 2016 19:02:20 +0000</pubDate>
      <dc:creator>Jai</dc:creator>
      <guid isPermaLink="false">18926@/two/discussions</guid>
      <description><![CDATA[<p>What I'm looking to do is run face detection on the right side of the window only, not display the cam on the right side while the detection happens on the left side, which is what's happening.</p>

<pre><code>void setup() {
  size(1280, 480, P3D);

  video2 = new Capture(this, 1280/2, 480, "webcam");

  opencv = new OpenCV(this, 1280/2, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video2.start();
}

void R()
{
  scale(1);
  opencv.loadImage(video2);
  image(video2, 1280/2, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();

  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
</code></pre>
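<p>The rectangles land on the left because <code>opencv.detect()</code> returns them in the camera's own coordinate space, while the frame is drawn at an x offset of 1280/2; the same offset has to be applied to each rectangle before drawing. A minimal sketch of that mapping in plain Java (the class and helper names are illustrative, not from the post):</p>

```java
import java.awt.Rectangle;

public class FaceOffset {
    // Shift a cam-space detection rectangle by the position where the frame is drawn.
    static Rectangle toScreen(Rectangle face, int drawX, int drawY) {
        return new Rectangle(drawX + face.x, drawY + face.y, face.width, face.height);
    }

    public static void main(String[] args) {
        // A face detected at (40, 60) in the cam image, frame drawn at x = 1280/2:
        Rectangle onScreen = toScreen(new Rectangle(40, 60, 80, 80), 1280 / 2, 0);
        System.out.println(onScreen.x + "," + onScreen.y); // 680,60
    }
}
```

<p>In the sketch above this would amount to <code>rect(1280/2 + faces[i].x, faces[i].y, faces[i].width, faces[i].height);</code> inside the loop.</p>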
]]></description>
   </item>
   <item>
      <title>opencv problem with bootcamp</title>
      <link>https://forum.processing.org/two/discussion/18536/opencv-problem-with-bootcamp</link>
      <pubDate>Thu, 13 Oct 2016 22:27:38 +0000</pubDate>
      <dc:creator>jayson</dc:creator>
      <guid isPermaLink="false">18536@/two/discussions</guid>
      <description><![CDATA[<p>I'm trying to do face detection on a video with OpenCV on Boot Camp (Windows 10) and Processing 2.2.1.</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Movie video;
OpenCV opencv;
Rectangle[] faces;

void setup()
{
  size(640,360);

  video = new Movie(this, "people.mp4");
  video.play();
  video.loop();

  opencv = new OpenCV(this, video.width, video.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw()
{
  background(0);
  fill(255);

  image(video,0,0);

  opencv.loadImage(video);
  faces = opencv.detect();
}

void movieEvent(Movie m)
{
  m.read();
}
</code></pre>

<p>It crashes at the line: 
  opencv.loadImage(video);</p>

<p>With the message: IndexOutOfBoundsException: Index 3: Size 0</p>

<p>Any ideas? This works fine on mac osx with the same version of processing.</p>
]]></description>
   </item>
   <item>
      <title>OpenCV - How can I add a different random image on every face?</title>
      <link>https://forum.processing.org/two/discussion/18457/opencv-how-can-i-ad-a-different-random-image-on-every-face</link>
      <pubDate>Sat, 08 Oct 2016 10:06:13 +0000</pubDate>
      <dc:creator>kiwiwi</dc:creator>
      <guid isPermaLink="false">18457@/two/discussions</guid>
      <description><![CDATA[<p>I want to write a piece of code where every face gets a different random image out of the 10 images I have.
I'm struggling to separate the different faces. Every time I try, the same image appears on all faces.
It would be great if someone could give me a hint on how to separate the faces in the code.</p>

<p>You will see, I was a bit desperate. I see the mistake, it's a big one I know. But I can't see the solution.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

int num = 3;
PImage[] myImageArray = new PImage[num];
Capture video;
OpenCV opencv;

void setup() {

  for (int i=0; i&lt;myImageArray.length; i++) {
    myImageArray[i] = loadImage( str(i) + ".png");
  }

  size(800, 600);
  video = new Capture(this, 800/2, 600/2);
  opencv = new OpenCV(this, 800/2, 600/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0 );

  Rectangle[] faces = opencv.detect();
  println(faces.length);                    //print number of faces
  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y); //print position(x/y) of faces
    image(myImageArray[(int)random(num)], faces[i].x-70, faces[i].y-60, faces[i].width+80, faces[i].height+80);
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
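<p>Part of the trouble is that <code>random(num)</code> is re-rolled for every face on every frame, so the choice never sticks. One way to keep the assignment stable is to pick the indices once, outside the draw loop, with a repeatable random source, and reuse them by face position. A hedged sketch of that idea in plain Java (class and helper names are illustrative):</p>

```java
import java.util.Random;

public class FaceImages {
    // Pick one image index per face slot. A seeded Random makes the "random"
    // choice repeatable, so face i gets the same image on every frame as long
    // as the same seed is used.
    static int[] assignImages(int faceCount, int numImages, long seed) {
        Random rng = new Random(seed);
        int[] picks = new int[faceCount];
        for (int i = 0; i < faceCount; i++) {
            picks[i] = rng.nextInt(numImages);
        }
        return picks;
    }

    public static void main(String[] args) {
        int[] a = assignImages(4, 10, 42L);
        int[] b = assignImages(4, 10, 42L);
        // Same seed -> identical assignment each time, but still varied across faces.
        System.out.println(java.util.Arrays.equals(a, b)); // true
    }
}
```

<p>In the sketch this would replace <code>myImageArray[(int)random(num)]</code> with <code>myImageArray[picks[i]]</code>, where <code>picks</code> was filled once in <code>setup()</code>.</p>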
]]></description>
   </item>
   <item>
      <title>Integrate these two codes?</title>
      <link>https://forum.processing.org/two/discussion/16589/integrate-these-two-codes</link>
      <pubDate>Sat, 14 May 2016 01:30:19 +0000</pubDate>
      <dc:creator>ellewin</dc:creator>
      <guid isPermaLink="false">16589@/two/discussions</guid>
      <description><![CDATA[<p>Hello Processing Forum folks,</p>

<p>I am working on a project that uses 3 webcams that when looked at, will play a video. I am thinking of the screens as entities that need to be acknowledged before they communicate with someone.</p>

<p>Everything was going dandy until I ran into two hiccups.</p>

<p>One is that opencv.loadImage(camright); seems to be the culprit of this error, "width(0) and height (0) cannot be &lt;=0", which doesn't make sense to me because opencv.loadImage(camleft); and opencv.loadImage(camcenter); come before it and they don't seem to be causing the same issue.</p>

<p>The second hiccup is that I am trying to use keystone so that I can projection map these videos to hanging plexi... I just can't seem to figure out how to get the videos onto the keystone 'mesh'.  Did that make sense?</p>

<p>Anyways, I am very green (taking my first class) and any help would be appreciated immensely. Especially since this project is due next week...</p>

<p>Below is my code... (I really couldn't get the whole code thing to work- I tried- I am sorry-if you want to help me with that too that'd be nice)</p>

<pre><code>//LIBRARIES 
import deadpixel.keystone.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*; 
import java.awt.Rectangle;

//keystone stuff 
Keystone ks;
CornerPinSurface surfaceleft;
CornerPinSurface surfacecenter;
CornerPinSurface surfaceright;
PGraphics offscreenleft;
PGraphics offscreencenter;
PGraphics offscreenright;

// movie object to play and pause later
//there will be three videos playing... 
Movie myMovieleft;
Movie myMoviecenter;
Movie myMovieright;

OpenCV opencv;

//<a href="https://processing.org/discourse/beta/num_1221233526.html" target="_blank" rel="nofollow">https://processing.org/discourse/beta/num_1221233526.html</a>
//<a href="https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams" target="_blank" rel="nofollow">https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams</a>
Capture camleft;
Capture camcenter;
Capture camright;

String[] captureDevices;  

void setup() {
  //this will println a list of the webcams; put the number on the left in the []
  //to make them work
  printArray(Capture.list());
  background(0);
  size(2640, 1080, P3D); //this should be large enough to house all the videos 
  opencv = new OpenCV(this, 160, 120);

  //keystone stuff 
  ks = new Keystone(this);
  //this can change 
  surfaceleft = ks.createCornerPinSurface(400, 300, 20);
  surfacecenter = ks.createCornerPinSurface(400, 300, 20);
  surfaceright = ks.createCornerPinSurface(400, 300, 20);
  // We need an offscreen buffer to draw the surface we
  // want projected
  // note that we're matching the resolution of the
  // CornerPinSurface.
  // (The offscreen buffer can be P2D or P3D)
  //P3D is telling processing to be in 3D mode
  //the 400, 300 here and the CornerPinSurface size are related to each other; they must be the same
  offscreenleft = createGraphics(400, 300, P3D);
  offscreencenter = createGraphics(400, 300, P3D);
  offscreenright = createGraphics(400, 300, P3D);

  //webcam stuff 
  //this is turning the webcam on and to run
  //the numbers correlate to the println list 
  camleft = new Capture(this, Capture.list()[3] ); //LEFT CAM IS LOGITECH HD 
  //WEBCAM C310 
  camleft.start();

  camcenter = new Capture(this, Capture.list()[79] ); //CENTER CAM IS LOGITECH HD
  //WEBCAM C310-1 
  camcenter.start();

  camright = new Capture(this, Capture.list()[155] ); //RIGHT CAM IS LOGITECH HD 
  //WEBCAM C310-2
  camright.start();

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  // movie stuff 
  // load video
  myMovieleft = new Movie(this, "testvideo.mp4");

  // need to play, pause, loop 
  myMovieleft.play();
  myMovieleft.pause();
  myMovieleft.loop();

  myMoviecenter = new Movie(this, "testvideo-1.mp4");

  // need to play, pause, loop 
  myMoviecenter.play();
  myMoviecenter.pause();
  myMoviecenter.loop();

  myMovieright = new Movie(this, "testvideo-2.mp4");

  // need to play, pause, loop 
  myMovieright.play();
  myMovieright.pause();
  myMovieright.loop();
}
void captureEvent(Capture cam) {
  cam.read();
}

void draw() {

  // open cv detect faces
  opencv.loadImage(camleft);

  // load in faces as rectangles
  Rectangle[] facesleft = opencv.detect();

  // are there faces?
  if (facesleft.length &gt; 0) {
    // sees a face!
    myMovieleft.play();
  } else {
    // no face
    myMovieleft.pause();
  }

  // play video
  if (myMovieleft.available()) {
    myMovieleft.read();
    //offscreen.image
    image(myMovieleft, 0, 540);
  }

  opencv.loadImage(camcenter);

  // load in faces as rectangles
  Rectangle[] facescenter = opencv.detect();

  // are there faces?
  if (facescenter.length &gt; 0) {
    // sees a face!
    myMoviecenter.play();
  } else {
    // no face
    myMoviecenter.pause();
  }

  // play video
  if (myMoviecenter.available()) {
    myMoviecenter.read();
    image(myMoviecenter, 780, height/2);
  }

// THERE IS SOMETHING WRONG WITH THIS opencv.loadImage(camright);

//RIGHT CAM
  // load in faces as rectangles
  Rectangle[] facesright = opencv.detect();

  // are there faces?
  if (facesright.length &gt; 0) {
    // sees a face!
    myMovieright.play();
  } else {
    // no face
    myMovieright.pause();
  }

  // play video
  if (myMovieright.available()) {
    myMovieright.read();
    image(myMovieright, 1560, height/2);

    //keystone stuff
    surfaceleft.render(offscreenleft);
    surfacecenter.render(offscreencenter);
    surfaceright.render(offscreenright);
  }
}
//the save and load function for keystone 
void keyPressed() {
  switch(key) {
  case 'c':
    // enter/leave calibration mode, where surfaces can be warped 
    // and moved
    ks.toggleCalibration();
    break;

  case 'l':
    // loads the saved layout
    ks.load();
    break;

  case 's':
    // saves the layout
    ks.save();
    break;
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>looking to simulate a mouse click</title>
      <link>https://forum.processing.org/two/discussion/16034/looking-to-simulate-a-mouse-click</link>
      <pubDate>Fri, 15 Apr 2016 19:26:56 +0000</pubDate>
      <dc:creator>Jai</dc:creator>
      <guid isPermaLink="false">16034@/two/discussions</guid>
      <description><![CDATA[<p>I'm looking to click on the draw window when it is open and click on a face, but without using the mouse to do so. It would be better if Processing had a voice recognition library, but since I'm yet to come across such a sketch, I'd like to simulate a mouse click at a given screen x&amp;y coordinate.</p>

<pre><code>    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;

    Capture video;
    OpenCV opencv;


    void setup() {
      size(640, 480);

      video = new Capture(this, 640/2, 480/2, "360HD Hologen");
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

      video.start();
    }

    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0);
      strokeWeight(1);
      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i &lt; faces.length; i++) {
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }

    }
</code></pre>
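<p>The JDK can synthesize real mouse events through <code>java.awt.Robot</code>. A hedged sketch, assuming the sketch window's on-screen position is known, and remembering that the sketch draws at <code>scale(2)</code>, so cam-space face coordinates must be doubled (the class and helper names are illustrative):</p>

```java
import java.awt.AWTException;
import java.awt.GraphicsEnvironment;
import java.awt.Point;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;

public class FaceClicker {
    // Screen position of a face's center: window origin + scaled cam-space center.
    static Point faceCenterOnScreen(Rectangle face, int winX, int winY, int scale) {
        int cx = face.x + face.width / 2;
        int cy = face.y + face.height / 2;
        return new Point(winX + cx * scale, winY + cy * scale);
    }

    public static void main(String[] args) throws AWTException {
        Point p = faceCenterOnScreen(new Rectangle(10, 20, 30, 40), 100, 100, 2);
        System.out.println(p.x + "," + p.y); // 150,180
        // The actual click; skipped when no display is attached.
        if (!GraphicsEnvironment.isHeadless()) {
            Robot robot = new Robot();
            robot.mouseMove(p.x, p.y);
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
        }
    }
}
```

<p>Robot clicks go to whatever window is under the pointer, so the sketch window needs focus (or at least to be frontmost) for the click to land on it.</p>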
]]></description>
   </item>
   <item>
      <title>how to  combine two different codes together. webcam and ellipses</title>
      <link>https://forum.processing.org/two/discussion/1786/how-to-combine-two-different-codes-together-webcam-and-ellipses</link>
      <pubDate>Tue, 03 Dec 2013 00:23:08 +0000</pubDate>
      <dc:creator>smitty575s</dc:creator>
      <guid isPermaLink="false">1786@/two/discussions</guid>
      <description><![CDATA[<p>I am trying to make the wave that I have follow my face with the openCV face detector, but I can't seem to rearrange it so it works. Right now it works with the wave following the mouse, but how do I make it follow my face on my webcam instead of the mouse? How do I rearrange it so it works? Any help would be very much appreciated!
This is the code I have right now:</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
// creating the wave -&gt; ellipses transform into a wave as it grows and dies
class Wave {

  PVector loc;
  int farOut;

  Wave() {
    loc = new PVector();
    loc.x = mouseX;
    loc.y = mouseY;


    farOut = 1;

    //colour stroke on wave set on random so it flashes.
    stroke(random(255), random(255), random(255));
  }

  void update() {   
    farOut += 1;
  } 
  void display() {   
    ellipse(loc.x, loc.y, farOut, farOut);
  }

  boolean dead() {

    if(farOut &gt; 90) {      
      return true;
    }   
    return false;
  }
}
//array list of waves
ArrayList&lt;Wave&gt; waves = new ArrayList&lt;Wave&gt;();

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  //detecting just the face
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  ellipseMode(CENTER);
  video.start();

}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0 );

  fill(255, 255, 255, 50);
  noStroke();
  //face detector code with openCV
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    ellipse(faces[i].x, faces[i].y, 10, 10);

 Wave w = new Wave();
    //array list -&gt; creating new waves
    waves.add(w);
  }

  //run through each of the waves
  //wave size will grow
  for(int i = 0; i &lt; waves.size(); i ++) {
    //show waves - &gt; each wave updated and displayed
    waves.get(i).update();
    waves.get(i).display();

   //kill the wave 
    if(waves.get(i).dead()) {
      waves.remove(i);
    }
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>looking for a way to read if a box/rectangle is small or big on the screen</title>
      <link>https://forum.processing.org/two/discussion/16012/looking-for-a-way-to-read-if-a-box-rectangle-is-small-or-big-on-the-screen</link>
      <pubDate>Thu, 14 Apr 2016 01:43:40 +0000</pubDate>
      <dc:creator>Jai</dc:creator>
      <guid isPermaLink="false">16012@/two/discussions</guid>
      <description><![CDATA[<p>Hello there, I have this sketch that tells me if there are faces on the screen, and depending on how close or far you are to the cam the rectangle changes its size. My question is: how can I tell if the box is getting bigger or smaller WITHOUT ME actually being able to look at the screen?</p>

<p>What I really want to do is write a string to the port so that on the other side I can have a midi SD card read wav/midi files pertaining to the size of the rectangle.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import processing.serial.*;           

Capture video;
OpenCV opencv;

Serial port;

void setup() {
  size(640, 480);
  port = new Serial(this, Serial.list()[0], 19200);

  video = new Capture(this, 640/2, 480/2, "360HD Hologen");
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
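<p>Since the face rectangle's width grows as you approach the camera, one approach is to quantize <code>faces[i].width</code> into a few named bands and write those to the serial port; the other side then maps each band to a wav/midi file. A hedged plain-Java sketch (the thresholds 60 and 120 are made-up values to calibrate against your own camera by printing <code>faces[i].width</code>):</p>

```java
public class FaceBand {
    // Quantize a face-rectangle width (in cam pixels) into a coarse distance band.
    // The 60/120 thresholds are placeholders; tune them for your setup.
    static String sizeBand(int width) {
        if (width < 60) return "far";
        if (width < 120) return "mid";
        return "near";
    }

    public static void main(String[] args) {
        System.out.println(sizeBand(40));  // far
        System.out.println(sizeBand(90));  // mid
        System.out.println(sizeBand(150)); // near
    }
}
```

<p>In the sketch this would be something like <code>port.write(sizeBand(faces[i].width) + "\n");</code> inside the detection loop, so the change in band is audible without looking at the screen.</p>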
]]></description>
   </item>
   <item>
      <title>How to scale webcamera view when face is detected?</title>
      <link>https://forum.processing.org/two/discussion/15964/how-to-scale-webcamera-view-when-face-is-detected</link>
      <pubDate>Mon, 11 Apr 2016 15:06:08 +0000</pubDate>
      <dc:creator>Chris78</dc:creator>
      <guid isPermaLink="false">15964@/two/discussions</guid>
      <description><![CDATA[<p>The webcamera, through Processing, detects the face when the Arduino proximity sensor is activated and puts a rectangle around the face. What I would like to do is make the whole view, or just what is inside the rectangle, bigger (scale it twice), or rotate it, or change the colour, when the face is detected. How should I amend my code? Many thanks.
This is my Processing code:</p>

<pre><code>import processing.serial.*;
Serial myPort;

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

boolean playVid = false;

void setup() {
  size(640, 480);

  myPort = new Serial(this, Serial.list()[1], 9600);
  myPort.bufferUntil('\n');

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); 

  video.start();
}

void draw() {
  if (myPort.available() &gt; 0) {
    String unoMessage = myPort.readStringUntil('@');

    if (unoMessage != null) { 
      if (unoMessage.equals("StartCamera@")) {
        playVid = true;
        println(unoMessage);
      }
    }
  }

  if (playVid) {
    scale(2);
    opencv.loadImage(video);

    image(video, 0, 0 );

    noFill();
    stroke(0, 255, 0);
    strokeWeight(3);
    Rectangle[] faces = opencv.detect();
    println(faces.length);

    for (int i = 0; i &lt; faces.length; i++) {
      println(faces[i].x + "," + faces[i].y);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
}

void captureEvent(Capture c) {
  c.read();
}
</code></pre>
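<p>For zooming on just what is inside the rectangle, the usual trick is <code>video.get(x, y, w, h)</code> to crop the face region and then <code>image(crop, x, y, w*2, h*2)</code> to draw it doubled; the crop rectangle must first be clamped to the frame so <code>get()</code> never reads outside the image. A hedged sketch of the clamping arithmetic in plain Java (the class and helper names are illustrative):</p>

```java
import java.awt.Rectangle;

public class FaceZoom {
    // Clamp a face rectangle to the frame bounds so a crop never leaves the image.
    static Rectangle clampToFrame(Rectangle face, int frameW, int frameH) {
        int x = Math.max(0, face.x);
        int y = Math.max(0, face.y);
        int w = Math.min(face.width, frameW - x);
        int h = Math.min(face.height, frameH - y);
        return new Rectangle(x, y, w, h);
    }

    public static void main(String[] args) {
        // A detection hanging off the right edge of a 320x240 frame:
        Rectangle r = clampToFrame(new Rectangle(300, 10, 50, 50), 320, 240);
        System.out.println(r.width + "x" + r.height + " at " + r.x + "," + r.y); // 20x50 at 300,10
    }
}
```

<p>The clamped rectangle then feeds both the <code>get()</code> crop and the enlarged <code>image()</code> call.</p>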
]]></description>
   </item>
   <item>
      <title>Opening saved frames from camera to appear next to live camera on same screen</title>
      <link>https://forum.processing.org/two/discussion/14322/opening-saved-frames-from-camera-to-appear-next-to-live-camera-on-same-screen</link>
      <pubDate>Thu, 07 Jan 2016 15:58:46 +0000</pubDate>
      <dc:creator>smith</dc:creator>
      <guid isPermaLink="false">14322@/two/discussions</guid>
      <description><![CDATA[<p>Hi, I've been working on a sketch which uses OpenCV for processing and face recognition. I've tried to set it up so that when it recognises a face (and draws a rectangle) it also saves the frame to the data folder. I have the code set up to have the camera on the left of the screen, but is there a way I can load this image so that it appears to the right of the camera image each time a frame is saved?</p>

<p>I've tried the approach below (lines 46 to 48), but each time I get this error: "The file "data/Capture-####.png" is missing or inaccessible, make sure the URL is valid or that the file has been added to your sketch and is readable." The code runs until it recognises a face and then freezes, and I'm not sure how I can edit the code so it works.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

PImage savedframe;

void setup() {
  size(1280, 480);
  background(0);
  video = new Capture(this, width/4, height/2);
  opencv = new OpenCV(this, width/4, height/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  

  video.start();

}


//ANIMATION
void draw() 
{

    scale(2);
    opencv.loadImage(video);
    video.loadPixels();
    image(video,0,0,width/4,height/2);


      noFill();
      stroke(255, 0, 0);
      strokeWeight(3);
      line(width/4,0,width/4,height);
      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i &lt; faces.length; i++) 
      {
        println(faces[i].x + "," + faces[i].y);
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);

          if (faces.length &gt; 0)
         {
          saveFrame("data/Capture-####.png");                           //Saves frame to data file
          savedframe=loadImage("data/Capture-####.png");       //load saved frame from data file
          image(savedframe,width/4,0,width/2,height/2);            //display saved frame next to camera
         }

      }          
}

void captureEvent(Capture c) 
{
 c.read();
}
</code></pre>
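<p>The <code>####</code> in <code>saveFrame("data/Capture-####.png")</code> is a pattern that Processing expands to the zero-padded current frame number, so <code>loadImage("data/Capture-####.png")</code> looks for a file whose name literally contains <code>####</code>, which never exists. The fix is to build the same zero-padded name yourself (in a sketch, <code>nf(frameCount, 4)</code> does this); the padding logic in plain Java:</p>

```java
public class FrameNames {
    // Mirror what Processing's saveFrame("data/Capture-####.png") produces:
    // the #### run becomes the zero-padded frame count.
    static String frameName(int frameCount) {
        return String.format("data/Capture-%04d.png", frameCount);
    }

    public static void main(String[] args) {
        System.out.println(frameName(7));    // data/Capture-0007.png
        System.out.println(frameName(1234)); // data/Capture-1234.png
    }
}
```

<p>So the load call would be <code>savedframe = loadImage(frameName(frameCount));</code> using the same frame number as the matching <code>saveFrame</code>; note the file may not be written to disk until the end of the frame, so loading the previous frame's capture is safer.</p>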

<p>Thanks.</p>
]]></description>
   </item>
   <item>
      <title>Capture with Macbook Air internal Cam</title>
      <link>https://forum.processing.org/two/discussion/13766/capture-with-macbook-air-internal-cam</link>
      <pubDate>Thu, 03 Dec 2015 20:37:53 +0000</pubDate>
      <dc:creator>StellaYu</dc:creator>
      <guid isPermaLink="false">13766@/two/discussions</guid>
      <description><![CDATA[<p>Dear all.</p>

<p>I am trying to resize my cam size to 640x480 with the MacBook Air internal cam. My cam size and OpenCV size are 320x240 now, and when I try to change to 640x480 I keep having trouble.</p>

<p>here's my cam list..
<img src="https://forum.processing.org/two/uploads/imageupload/018/SPRFQR6H3HD3.png" alt="스크린샷 2015-12-04 오전 5.27.00" title="스크린샷 2015-12-04 오전 5.27.00" /></p>

<p>When I am using 'Capture.list()[3]' and size 320x240 it works fine, but when I change to 'Capture.list()[0]' and size 640x480 it still works but is really slow. I guess it is because of the Rectangle detection, but I still don't know why.</p>

<p>here's my code..</p>

<pre><code>import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV openCV; 
Capture cam;
Rectangle[] faces;

void setup() {
  size(640, 480);

  cam = new Capture(this, Capture.list()[0]);
  cam.start();

  openCV = new OpenCV(this, 640, 480);
  openCV.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  noFill();
  stroke(0,255,0);
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam,0,0);

    openCV.loadImage(cam);
    faces = openCV.detect();
    for (int i=0; i&lt;faces.length; i++) {
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }
}
</code></pre>

<p>Thank you so much.
Stella.</p>
]]></description>
   </item>
   <item>
      <title>Batch Loading Images and Cropping</title>
      <link>https://forum.processing.org/two/discussion/9385/batch-loading-images-and-cropping</link>
      <pubDate>Wed, 11 Feb 2015 22:01:43 +0000</pubDate>
      <dc:creator>blkrush</dc:creator>
      <guid isPermaLink="false">9385@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I recently started working with processing and I have two questions that I really need help with.</p>

<p>The two things I need to accomplish are this:</p>

<ol>
<li>Load a folder of 1000 jpgs into processing. </li>
<li>Spit out cropped images of their faces. </li>
</ol>

<p>Currently I'm having trouble saving just what I copy; I only seem to save the entire window. <img src="http://s11.postimg.org/w7eypnsar/crop.jpg" alt="" /></p>

<pre><code>import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
Rectangle[] faces;
int filenumber = 0;
int croppedimage = 0;

void setup() {
  opencv = new OpenCV(this, "test2.jpg");

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  
  faces = opencv.detect();

  size(opencv.width, opencv.height);


}

void draw() {

  for (int i = 0; i &lt; faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    copy(opencv.getInput(),faces[i].x, faces[i].y, faces[i].width, faces[i].height, 0,0, faces[i].width, faces[i].height);

    if (filenumber &lt; 3) {
      line(filenumber, 0, filenumber, 100);
      filenumber = filenumber + 1;
    } else {
      noLoop();
    }
    //croppedimage = get(opencv.width*2,opencv.width*2, opencv.height, opencv.height);
    //image(croppedimage,0,0);
    saveFrame("cropped" + filenumber); 
  }
}
</code></pre>
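<p>For the first task, loading a folder of 1000 jpgs, the standard route is <code>java.io.File.listFiles()</code> with a filename filter, then looping <code>loadImage()</code> (or <code>new OpenCV(this, path)</code>) over the result. The listing part in plain Java (the folder path in the demo is illustrative):</p>

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class JpgLister {
    // Return the .jpg files in a folder, or an empty array if there are none
    // (listFiles() returns null for a missing or unreadable directory).
    static File[] listJpgs(File dir) {
        File[] jpgs = dir.listFiles((d, name) -> name.toLowerCase().endsWith(".jpg"));
        return jpgs == null ? new File[0] : jpgs;
    }

    public static void main(String[] args) throws IOException {
        // Demo against a throwaway folder with two jpgs and one stray file.
        File dir = Files.createTempDirectory("faces").toFile();
        new File(dir, "a.jpg").createNewFile();
        new File(dir, "b.JPG").createNewFile();
        new File(dir, "notes.txt").createNewFile();
        System.out.println(listJpgs(dir).length); // 2
    }
}
```

<p>In a sketch, the equivalent would be <code>File dir = new File(sketchPath("data"));</code> and then iterating <code>listJpgs(dir)</code>, loading each file by its <code>getName()</code>.</p>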
]]></description>
   </item>
   <item>
      <title>How do I get an image to persist for a given time in a loop?</title>
      <link>https://forum.processing.org/two/discussion/9375/how-do-i-get-an-image-to-persist-for-a-given-time-in-a-loop</link>
      <pubDate>Wed, 11 Feb 2015 11:44:25 +0000</pubDate>
      <dc:creator>samuset</dc:creator>
      <guid isPermaLink="false">9375@/two/discussions</guid>
      <description><![CDATA[<p>Hi, what I'm trying to do here is have the image(faceImage, 0, 0) stay on for longer than a fraction of a second. I know I need to use some kind of timer, but all my previous attempts with an if timer or a while() have failed so far. Thanks in advance.</p>

<pre><code>import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV faceCV;
PImage faceImage;

int faceCounter = -1;
int divider = 2;  // assumed value: the sketch divides the capture size by this but never declared it

int offset = 40;

void setup() {
  size(640, 480);
  video = new Capture(this, width/divider, height/divider);
  faceCV = new OpenCV(this, width/divider, height/divider);
  faceCV.loadCascade("haarcascade_frontalface_default.xml");
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
    faceCV.loadImage(video);
  }

  pushMatrix();
  image(video, 0, 0);
  popMatrix();

  Rectangle[] faces = faceCV.detect();

  for (int i = 0; i &lt; faces.length; i++) {
    if (faces[i].width &gt; 120) {
      faceCounter++;
      video.save("data/memory"+faceCounter+".png");
      if (faceCounter &gt; offset-1 &amp;&amp; faceCounter%offset  == 0) {
        faceImage = loadImage("memory"+(faceCounter-offset)+".png");
        pushMatrix();
        image(faceImage, 0, 0);
        popMatrix();
      }
    }
  }
}
</code></pre>
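<p>A common pattern for "keep showing this for N ms" is to record <code>millis()</code> when the image should start showing and keep drawing it every frame while <code>millis() - shownAt &lt; holdTime</code>, instead of drawing it only in the frame where the face was detected. The timing logic, separated into plain Java so the clock can be faked (the class and method names are illustrative):</p>

```java
public class HoldTimer {
    // Start far enough in the past that active() is false before the first trigger.
    private long shownAt = Long.MIN_VALUE / 2;
    private final long holdMillis;

    HoldTimer(long holdMillis) { this.holdMillis = holdMillis; }

    // Call when the image should (re)start showing; pass millis().
    void trigger(long now) { shownAt = now; }

    // True while the image should still be drawn this frame.
    boolean active(long now) { return now - shownAt < holdMillis; }

    public static void main(String[] args) {
        HoldTimer t = new HoldTimer(2000);
        t.trigger(1000);
        System.out.println(t.active(1500)); // true: 500 ms into the hold
        System.out.println(t.active(3500)); // false: the 2000 ms hold expired
    }
}
```

<p>In the sketch, <code>trigger(millis())</code> would go next to the <code>loadImage</code> call, and <code>if (t.active(millis())) image(faceImage, 0, 0);</code> would run unconditionally in <code>draw()</code>, so the image persists across frames.</p>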
]]></description>
   </item>
   </channel>
</rss>