Hi All,
First things first: I am a very inexperienced programmer, so apologies if any of this is a stupid question! :)
I have been trying, unsuccessfully, to get OpenCV itself to work in Processing on Windows 7. Instead, I found the Blobscanner library and a blob-tracking example. I added live capture to it without much trouble, and it works, sort of.
What I need: a program that tracks one person (the closest to the camera) in a space and follows where they go and how fast they move. I guess face detection might be hard to do without OpenCV? I am not sure, but that would be ideal. Is there a way to make it work with Blobscanner?
Otherwise, how would I modify the following code to track people entering the space and narrow it down to one person?
/*
 * Blobscanner Processing library
 * by Antonio Molinaro - 02/01/2011.
 * For each blob in the image computes
 * and prints the center of mass
 * coordinates x y to the console
 * and to the screen.
 */

import Blobscanner.*;
import processing.video.*;

Capture cam;
Detector bd;
PFont f;

void setup() {
  size(320, 240);  // match the capture resolution below

  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    cam = new Capture(this, 320, 240, cameras[0]);
    cam.start();

    // You can get the list of resolutions (width x height x fps)
    // supported by the capture device by calling the resolutions()
    // method. It must be called after creating the capture object.
    Resolution[] res = cam.resolutions();
    println("Supported resolutions:");
    for (int i = 0; i < res.length; i++) {
      println(res[i].width + "x" + res[i].height + ", " +
              res[i].fps + " fps (" + res[i].fpsString + ")");
    }
  }

  // createFont() must be called inside setup(), after size(),
  // not at field-initialization time.
  f = createFont("", 10);
  textFont(f, 10);
  bd = new Detector(this, 0, 0, cam.width, cam.height, 255);
}

void draw() {
  if (cam.available() == true) {
    cam.read();
    // Threshold each new frame so Blobscanner sees a binary image
    // (doing this once in setup() has no effect on later frames).
    cam.filter(THRESHOLD);
    image(cam, 0, 0);
  }

  bd.imageFindBlobs(cam);
  // This call is indispensable for nearly all the other methods
  // (please check the javadoc).
  bd.loadBlobsFeatures();
  // This method must be called before calling the
  // findCentroids(boolean, boolean) method.
  bd.weightBlobs(false);
  // Computes the blob center of mass. If the first argument is true,
  // prints the center of mass coordinates x y to the console. If the
  // second argument is true, draws a point at the center of mass.
  bd.findCentroids(false, false);

  // For each blob in the image...
  for (int i = 0; i < bd.getBlobsNumber(); i++) {
    stroke(0, 255, 0);
    strokeWeight(5);
    // ...print the centroid coordinates x y to the console...
    println("BLOB " + (i+1) + " CENTROID X COORDINATE IS " + bd.getCentroidX(i));
    println("BLOB " + (i+1) + " CENTROID Y COORDINATE IS " + bd.getCentroidY(i));
    println("\n");
    // ...draw a point at its location...
    point(bd.getCentroidX(i), bd.getCentroidY(i));
    // ...and write the coordinates to the screen.
    fill(255, 0, 0);
    text("x-> " + bd.getCentroidX(i) + "\n" + "y-> " + bd.getCentroidY(i),
         bd.getCentroidX(i), bd.getCentroidY(i) - 7);
  }
}
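Blobscanner has no face detection, but for narrowing to one person a common approximation is that, with a fixed camera, the closest body produces the largest blob, so you track only the blob with the biggest area and estimate speed from how far its centroid moves between frames. Here is a minimal sketch of just that logic in plain Java, with the Blobscanner calls replaced by plain arrays; the method names are illustrative, not part of any library:

```java
public class SingleBlobTracker {
    // Pick the blob most likely to be the closest person: with a fixed
    // camera, the nearest body usually covers the largest area.
    public static int largestBlobIndex(double[] areas) {
        int best = -1;
        double bestArea = 0;
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > bestArea) {
                bestArea = areas[i];
                best = i;
            }
        }
        return best; // -1 when no blobs were found
    }

    // Estimate speed in pixels per frame from two successive centroids.
    public static double speed(double prevX, double prevY, double x, double y) {
        double dx = x - prevX;
        double dy = y - prevY;
        return Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Areas as a blob-detection pass might report them.
        double[] areas = { 350.0, 4200.0, 900.0 };
        int who = largestBlobIndex(areas);
        System.out.println("tracking blob " + who);
        System.out.println("speed: " + speed(10, 10, 13, 14));
    }
}
```

In the sketch above, you would apply the same idea inside draw(): loop over the blobs once to find the largest, keep its centroid in a pair of global variables, and compare against the previous frame's centroid to get speed.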
In theory, Processing will interpret the user's movements and send them to an Arduino, which will execute some code.
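For that last step, Processing's Serial library (processing.serial.*) can write the tracked position and speed to the Arduino each frame, and the Arduino parses what arrives. A simple line-based text protocol is easy to debug on both ends; the "x,y,speed" field order and comma delimiter below are my own choice, not a standard:

```java
import java.util.Locale;

public class TrackMessage {
    // Format one frame's data as "x,y,speed\n". On the Processing side
    // you would send this with myPort.write(...); on the Arduino side,
    // read until '\n' and split on commas.
    public static String encode(int x, int y, double speed) {
        return String.format(Locale.US, "%d,%d,%.1f\n", x, y, speed);
    }

    // Parse the same line back into { x, y, speed }, as the receiving
    // sketch would.
    public static double[] decode(String line) {
        String[] parts = line.trim().split(",");
        return new double[] {
            Double.parseDouble(parts[0]),
            Double.parseDouble(parts[1]),
            Double.parseDouble(parts[2])
        };
    }

    public static void main(String[] args) {
        // Centre of a 320x240 frame, moving at 5 px/frame.
        System.out.print(encode(160, 120, 5.0));
    }
}
```

Ending every message with '\n' means the Arduino can use Serial.readStringUntil('\n') and never has to guess where one reading stops and the next begins.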
Thanks in advance!!!