Hi,
I'm currently trying to combine two Processing sketches (BackgroundRemove and BlobDetection) to do the following with a webcam aimed at a street:
1. Show only the changed pixels (so the canvas is fully green except for moving objects).
2. Use blob detection to draw rectangles around each moving object.
For this I'm trying to combine the two sketches listed at the end of this post. BackgroundRemove comes from http://learningprocessing.com/examples/chp16/example-16-12-BackgroundRemove and the blob detection comes from http://www.v3ga.net/processing/BlobDetection/index-page-home.html
Both sketches work fine separately, but I've been working for days to combine them and can't get it working. I think the solution would be to replace the video input in the blob detection with the canvas input, but I can't figure out how to do that (or how to fix this otherwise).
Would anyone be able to help me here? Thanks!
Sorry, the code tags don't seem to work correctly. For a better view, here are the code Gists: https://gist.github.com/Thomvdm/fd4c412b6bb66a264728b1816c9b5a10 https://gist.github.com/Thomvdm/21d6e9d3df86e288cb8e36fd41bcaf29
// Example 16-12: Simple background removal
// Click the mouse to memorize a current background image
import processing.video.*;

// Variable for capture device
Capture video;
// Saved background
PImage backgroundImage;
// How different must a pixel be to be a foreground pixel
float threshold = 30;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  // Create an empty image the same size as the video
  backgroundImage = createImage(video.width, video.height, RGB);
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  // We are looking at the video's pixels, the memorized backgroundImage's pixels,
  // as well as accessing the display pixels. So we must loadPixels() for all!
  loadPixels();
  video.loadPixels();
  backgroundImage.loadPixels();
  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int loc = x + y * video.width;     // Step 1, what is the 1D pixel location
      color fgColor = video.pixels[loc]; // Step 2, what is the foreground color
      // Step 3, what is the background color
      color bgColor = backgroundImage.pixels[loc];
      // Step 4, compare the foreground and background color
      float r1 = red(fgColor);
      float g1 = green(fgColor);
      float b1 = blue(fgColor);
      float r2 = red(bgColor);
      float g2 = green(bgColor);
      float b2 = blue(bgColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);
      // Step 5, is the foreground color different from the background color?
      if (diff > threshold) {
        // If so, display the foreground color
        pixels[loc] = fgColor;
      } else {
        // If not, display green
        // (We could replace the background pixels with something other than green!)
        pixels[loc] = color(0, 255, 0);
      }
    }
  }
  updatePixels();
}

void mousePressed() {
  // Copy the current frame of video into the backgroundImage object.
  // Note copy() takes 9 arguments:
  //   the source image,
  //   x, y, width, and height of the region to be copied from the source,
  //   x, y, width, and height of the copy destination
  backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  backgroundImage.updatePixels();
}
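The per-pixel test in the sketch above (Steps 1-5) can be reproduced in plain Java for experimenting outside Processing. This is my own stand-in, with Processing's red()/green()/blue() replaced by bit shifts and dist() by Math.sqrt; the pixel values and threshold are made-up examples:

```java
// Plain-Java stand-in for the per-pixel foreground test in the sketch.
// A pixel (x, y) maps to index x + y*width, exactly as in the sketch.
public class ForegroundTest {
    // Euclidean distance between two ARGB colors in RGB space,
    // equivalent to dist(red(c1), green(c1), blue(c1), red(c2), ...).
    static double colorDist(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xff) - ((c2 >> 16) & 0xff);
        int dg = ((c1 >> 8) & 0xff) - ((c2 >> 8) & 0xff);
        int db = (c1 & 0xff) - (c2 & 0xff);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // Step 5 of the sketch: is this pixel "different enough" to be foreground?
    static boolean isForeground(int fg, int bg, float threshold) {
        return colorDist(fg, bg) > threshold;
    }

    public static void main(String[] args) {
        int bg    = 0xFF202020; // memorized dark-gray background pixel
        int same  = 0xFF232323; // camera noise: nearly identical -> background
        int moved = 0xFF90B060; // a passing object: clearly different -> foreground
        System.out.println(isForeground(same, bg, 30));  // prints false
        System.out.println(isForeground(moved, bg, 30)); // prints true
    }
}
```

With threshold = 30, small camera noise stays green while a genuinely different pixel is kept, which is why the threshold value matters so much for a street scene.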
// - Super Fast Blur v1.1 by Mario Klingemann <http://incubator.quasimondo.com>
// - BlobDetection library
import processing.video.*;
import blobDetection.*;

Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame = false;

// ==================================================
// setup()
// ==================================================
void setup()
{
  // Size of applet
  size(640, 480);
  // Capture
  cam = new Capture(this, 640, 480, 30);
  // Comment out the following line if you use Processing 1.5
  cam.start();
  // BlobDetection
  // img will be sent to detection (a smaller copy of the cam frame)
  img = new PImage(80, 60);
  theBlobDetection = new BlobDetection(img.width, img.height);
  theBlobDetection.setPosDiscrimination(true);
  theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f
}

// ==================================================
// captureEvent()
// ==================================================
void captureEvent(Capture cam)
{
  cam.read();
  newFrame = true;
}

// ==================================================
// draw()
// ==================================================
void draw()
{
  if (newFrame)
  {
    newFrame = false;
    image(cam, 0, 0, width, height);
    img.copy(cam, 0, 0, cam.width, cam.height,
             0, 0, img.width, img.height);
    fastblur(img, 2);
    theBlobDetection.computeBlobs(img.pixels);
    drawBlobsAndEdges(true, true);
  }
}
// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
  noFill();
  Blob b;
  EdgeVertex eA, eB;
  for (int n = 0; n < theBlobDetection.getBlobNb(); n++)
  {
    b = theBlobDetection.getBlob(n);
    if (b != null)
    {
      print("Blob #: ");
      println(n);
      print("X coord of center: ");
      println(b.x);
      print("Y coord of center: ");
      println(b.y);
      // Edges
      if (drawEdges)
      {
        strokeWeight(3);
        stroke(0, 255, 0);
        for (int m = 0; m < b.getEdgeNb(); m++)
        {
          eA = b.getEdgeVertexA(m);
          eB = b.getEdgeVertexB(m);
          if (eA != null && eB != null)
            line(eA.x * width, eA.y * height,
                 eB.x * width, eB.y * height);
        }
      }
      // Blobs
      if (drawBlobs)
      {
        strokeWeight(1);
        stroke(255, 0, 0);
        rect(b.xMin * width, b.yMin * height,
             b.w * width, b.h * height);
      }
    }
  }
}
// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann
// <http://incubator.quasimondo.com>
// ==================================================
void fastblur(PImage img, int radius)
{
  if (radius < 1) {
    return;
  }
  int w = img.width;
  int h = img.height;
  int wm = w - 1;
  int hm = h - 1;
  int wh = w * h;
  int div = radius + radius + 1;
  int r[] = new int[wh];
  int g[] = new int[wh];
  int b[] = new int[wh];
  int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
  int vmin[] = new int[max(w, h)];
  int vmax[] = new int[max(w, h)];
  int[] pix = img.pixels;
  int dv[] = new int[256 * div];
  for (i = 0; i < 256 * div; i++) {
    dv[i] = i / div;
  }

  yw = yi = 0;
  // Horizontal pass
  for (y = 0; y < h; y++) {
    rsum = gsum = bsum = 0;
    for (i = -radius; i <= radius; i++) {
      p = pix[yi + min(wm, max(i, 0))];
      rsum += (p & 0xff0000) >> 16;
      gsum += (p & 0x00ff00) >> 8;
      bsum +=  p & 0x0000ff;
    }
    for (x = 0; x < w; x++) {
      r[yi] = dv[rsum];
      g[yi] = dv[gsum];
      b[yi] = dv[bsum];
      if (y == 0) {
        vmin[x] = min(x + radius + 1, wm);
        vmax[x] = max(x - radius, 0);
      }
      p1 = pix[yw + vmin[x]];
      p2 = pix[yw + vmax[x]];
      rsum += ((p1 & 0xff0000) - (p2 & 0xff0000)) >> 16;
      gsum += ((p1 & 0x00ff00) - (p2 & 0x00ff00)) >> 8;
      bsum +=  (p1 & 0x0000ff) - (p2 & 0x0000ff);
      yi++;
    }
    yw += w;
  }

  // Vertical pass
  for (x = 0; x < w; x++) {
    rsum = gsum = bsum = 0;
    yp = -radius * w;
    for (i = -radius; i <= radius; i++) {
      yi = max(0, yp) + x;
      rsum += r[yi];
      gsum += g[yi];
      bsum += b[yi];
      yp += w;
    }
    yi = x;
    for (y = 0; y < h; y++) {
      pix[yi] = 0xff000000 | (dv[rsum] << 16) | (dv[gsum] << 8) | dv[bsum];
      if (x == 0) {
        vmin[y] = min(y + radius + 1, hm) * w;
        vmax[y] = max(y - radius, 0) * w;
      }
      p1 = x + vmin[y];
      p2 = x + vmax[y];
      rsum += r[p1] - r[p2];
      gsum += g[p1] - g[p2];
      bsum += b[p1] - b[p2];
      yi += w;
    }
  }
}
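The fastblur above is fast because it never re-sums the whole window: it keeps a running sum per row (and then per column), adding the sample entering the window and subtracting the one leaving it, clamping at the edges. A minimal 1D illustration of that sliding-window trick in plain Java (my own example, not Klingemann's code):

```java
// Minimal 1D version of the sliding-window box blur used by fastblur:
// maintain a running sum over a window of 2*radius+1 samples, then for
// each step add the entering sample and subtract the leaving one.
// Out-of-range indices are clamped to the edges, as in the 2D version.
public class BoxBlur1D {
    static int[] boxBlur(int[] src, int radius) {
        int n = src.length, div = 2 * radius + 1;
        int[] out = new int[n];
        int sum = 0;
        // Prime the window centered on index 0, clamping at the edges
        // (mirrors the inner "for (i = -radius; i <= radius; ...)" loop).
        for (int i = -radius; i <= radius; i++) {
            sum += src[Math.min(n - 1, Math.max(i, 0))];
        }
        for (int x = 0; x < n; x++) {
            out[x] = sum / div; // fastblur uses the dv[] lookup table for this divide
            int entering = src[Math.min(x + radius + 1, n - 1)]; // like vmin[]
            int leaving  = src[Math.max(x - radius, 0)];         // like vmax[]
            sum += entering - leaving;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] signal = {0, 0, 9, 0, 0};
        System.out.println(java.util.Arrays.toString(boxBlur(signal, 1)));
        // prints [0, 3, 3, 3, 0]
    }
}
```

fastblur applies this once horizontally and once vertically, per color channel, which is why it precomputes the dv[] division table and the vmin[]/vmax[] clamped indices.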
Answers
Please edit your post and format your code with highlight + CTRL-o
First -- did you run the two sketches separately and confirm that they worked for you before attempting to combine them? Never mind -- you said they did.
Don't draw the output of background remove directly to the screen. Draw it to a PImage or PGraphics. Then pass that to your BlobDetection to get a list of blobs.
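A minimal plain-Java sketch of that pipeline, with an int[] buffer standing in for the PImage/PGraphics and a trivial pixel counter standing in for BlobDetection (the color values and threshold here are made-up examples):

```java
// Stand-in for the suggested pipeline: write the background-removed frame
// into an offscreen buffer instead of the screen, then analyze the buffer.
// In the real sketch the "analysis" would be theBlobDetection.computeBlobs()
// reading the offscreen PImage's pixels[].
public class OffscreenPipeline {
    static final int GREEN = 0xFF00FF00;

    // Same color-distance test as the BackgroundRemove example.
    static double colorDist(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xff) - ((c2 >> 16) & 0xff);
        int dg = ((c1 >> 8) & 0xff) - ((c2 >> 8) & 0xff);
        int db = (c1 & 0xff) - (c2 & 0xff);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // Step 1: background removal rendered into an offscreen buffer.
    static int[] removeBackground(int[] frame, int[] background, float threshold) {
        int[] out = new int[frame.length];
        for (int i = 0; i < frame.length; i++) {
            out[i] = colorDist(frame[i], background[i]) > threshold ? frame[i] : GREEN;
        }
        return out;
    }

    // Step 2: analyze the buffer, not the screen.
    static int countForeground(int[] buffer) {
        int n = 0;
        for (int p : buffer) if (p != GREEN) n++;
        return n;
    }

    public static void main(String[] args) {
        int[] background = {0xFF101010, 0xFF101010, 0xFF101010, 0xFF101010};
        int[] frame      = {0xFF101010, 0xFFE0E0E0, 0xFF121212, 0xFFE0E0E0};
        int[] buffer = removeBackground(frame, background, 30);
        System.out.println(countForeground(buffer)); // prints 2
    }
}
```

In the actual sketch the same structure would mean: create an offscreen PImage with createImage(), write into its pixels[] inside draw() (instead of the sketch's own pixels[]), then pass that image's pixels to computeBlobs() the way the blob example passes img.pixels.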
Thank you Jeremy. How would I go about drawing the output of the background removal? I'm looking for something that lets me copy/output the current canvas, but I can't seem to find anything that does that.
I think I could do it with something like
imgBackground = getCurrentCanvas();
blobImg.copy(cam, 0, 0, backgroundImage.width, backgroundImage.height, 0, 0, blobImg.width, blobImg.height);
theBlobDetection.computeBlobs(blobImg.pixels);
However, I don't know how I could get the current Canvas (I'm new to Processing, so excuse me if this is something really simple)
The undocumented getGraphics() gets the sketch's canvas, which is a PGraphics, BTW:
https://Processing.org/reference/PGraphics.html
If you need to take a PImage screenshot of it, just call get():
https://Processing.org/reference/get_.html
If you want to copy an array, like pixels[] for example, to another one, use arrayCopy():
https://Processing.org/reference/arrayCopy_.html
For pixels[], don't forget loadPixels() before and updatePixels() after.
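Processing's arrayCopy() is a thin wrapper around Java's System.arraycopy(), so the pixels[]-to-pixels[] copy can be illustrated in plain Java (the pixel values are made-up examples; in the sketch, loadPixels()/updatePixels() would bracket the copy):

```java
// Plain-Java illustration of copying one pixels[] array into another,
// the operation arrayCopy() performs in Processing.
public class PixelCopy {
    public static void main(String[] args) {
        int[] srcPixels = {0xFF112233, 0xFF445566, 0xFF778899, 0xFFAABBCC};
        int[] dstPixels = new int[srcPixels.length];
        // Equivalent of arrayCopy(srcPixels, dstPixels) in Processing.
        System.arraycopy(srcPixels, 0, dstPixels, 0, srcPixels.length);
        System.out.println(dstPixels[2] == 0xFF778899); // prints true
    }
}
```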
GoToLoop, that getGraphics() function was exactly what I was looking for, thanks a bunch!