Hey guys, I've just started experimenting with OpenCV and Processing and came up with a project that I want to explore. Basically it's a polygonal representation of a blob. Right now everything is working, but the blob I'm tracking is way too finicky to form a consistent mass. Is there a way I can smooth it out with frame averaging, or is there something in the way I'm identifying blobs that needs to change? (I've put a rough frame-averaging sketch after the code below to show what I mean.) Here's the code and a pic of the footage I'm using to track. Sorry, but my iSight is dead so I have to use pre-captured footage.
http://www.flickr.com/photos/31834080@N05/4580433777/

Code:
import processing.opengl.*;
import hypermedia.video.*;

OpenCV opencv;
PImage fade;

void setup() {
  size(640, 480, OPENGL);
  background(255);
  noStroke();
  frameRate(25);

  opencv = new OpenCV(this);
  opencv.movie("hand.mov", width, height);  // pre-captured clip, since my iSight is dead
}

void draw() {
  background(255);

  // grab the next frame, convert it to grayscale, then threshold it for blob detection
  opencv.read();
  opencv.convert(OpenCV.GRAY);
  opencv.threshold(180);

  // min area 10, max area half the frame, up to 100 blobs, find holes, max 50 vertices
  Blob[] blobs = opencv.blobs(10, width*height/2, 100, true, 50);

  for (int i = 0; i < blobs.length; i++) {
    // connect this blob's centroid to every later blob's centroid
    fill(220, 10);
    stroke(100, 50);
    beginShape(TRIANGLE_STRIP);
    for (int j = i+1; j < blobs.length; j++) {
      vertex(blobs[i].centroid.x, blobs[i].centroid.y);
      vertex(blobs[j].centroid.x, blobs[j].centroid.y);
    }
    endShape(CLOSE);

    // mark the centroid itself
    ellipseMode(CENTER);
    noStroke();
    fill(100, 50);
    ellipse(blobs[i].centroid.x, blobs[i].centroid.y, 40, 40);
  }

  // fake motion trails: grab the frame and redraw it slightly transparent
  tint(255, 255, 255, 210);
  fade = get(0, 0, width, height);
  image(fade, 0, 0);
  noTint();
}