Hello people. First of all, I'd like to apologize in case I've posted this question in the wrong section. I'm kind of new to this forum, and I swear I did my best to find a suitable place for my little issue. I've also researched different background subtraction techniques on the internet, but had no luck: I couldn't find many implementations. So here we go.

I'm currently involved in a little project aimed at developing an installation based on displays and cameras that track what's going on. The part I have to develop myself is the background subtraction: when someone is staring at the display, her image is mirrored back, but the background is replaced with a different one. Pretty similar to what Photo Booth does on OS X, so to say.
The thing is, I would like my app to separate the background from the foreground while keeping a rather high level of detail at the edges between the two, i.e. not leaving a bunch of senseless lost pixels and dead areas where the original background shows through.
I've been trying out the background subtraction example from the Processing examples folder, and its output is pretty similar to what I expect from my app. It obviously works fine as long as it displays the raw difference colors, but whenever I try to paint the background black and keep the foreground as it is, it loses a lot of detail. Below is the little mod I made to the original example. If it weren't for some weird black lines appearing in the foreground, I'd be happy with it! Any suggestions on how to fix that?
Code:
/**
 * Background Subtraction
 * by Golan Levin.
 *
 * Detect the presence of people and objects in the frame using a simple
 * background-subtraction technique. To initialize the background, press a key.
 */

import processing.video.*;

int numPixels;
int[] backgroundPixels;
Capture video;

void setup() {
  // Change size to 320 x 240 if too slow at 640 x 480
  size(640, 480, P2D);
  video = new Capture(this, width, height, 24);
  numPixels = video.width * video.height;
  fill(255);
  noStroke();
  smooth();
  // Create array to store the background image
  backgroundPixels = new int[numPixels];
  // Make the pixels[] array available for direct manipulation
  loadPixels();
}

void draw() {
  if (video.available()) {
    video.read();       // Read a new video frame
    video.loadPixels(); // Make the pixels of video available
    // Difference between the current frame and the stored background
    int presenceSum = 0;
    for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
      // Fetch the current color in that location, and also the color
      // of the background in that spot
      color currColor = video.pixels[i];
      color bkgdColor = backgroundPixels[i];
      // Extract the red, green, and blue components of the current pixel’s color
      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      // Extract the red, green, and blue components of the background pixel’s color
      int bkgdR = (bkgdColor >> 16) & 0xFF;
      int bkgdG = (bkgdColor >> 8) & 0xFF;
      int bkgdB = bkgdColor & 0xFF;
      // Compute the difference of the red, green, and blue values
      int diffR = abs(currR - bkgdR);
      int diffG = abs(currG - bkgdG);
      int diffB = abs(currB - bkgdB);
      // Add these differences to the running tally
      presenceSum += diffR + diffG + diffB;
      // MY CODE GOES FROM HERE...
      int absoluteDifference = diffR + diffG + diffB;
      int threshold = 50;
      if (absoluteDifference < threshold) {
        pixels[i] = 0xFF000000; // background: paint it black
      } else {
        pixels[i] = currColor;  // foreground: keep the camera pixel
      }
      // TO HERE!
      // Render the difference image to the screen
      //pixels[i] = color(diffR, diffG, diffB);
      // The following line does the same thing much faster, but is more technical
      //pixels[i] = 0xFF000000 | (diffR << 16) | (diffG << 8) | diffB;
    }
    updatePixels(); // Notify that the pixels[] array has changed
    //println(presenceSum); // Print out the total amount of movement
  }
}

// When a key is pressed, capture the background image into the backgroundPixels
// buffer, by copying each of the current frame’s pixels into it.
void keyPressed() {
  video.loadPixels();
  arraycopy(video.pixels, backgroundPixels);
}
I took some screenshots to document the behaviour:
1 - Before capturing BG

2 - After capturing BG

3 - Me on scene

4 - Random movement that clears up the background


So that's what I've been able to test so far. Any suggestions on how to improve the foreground image quality? I gotta admit that this level of detail at the edges would be sufficient for me, so I guess it's all about improving the FG. Correct me if I'm wrong.
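One idea I've been considering for cleaning up the FG, though I haven't tried it yet, is a simple 3×3 majority vote over the binary foreground mask, so isolated misclassified pixels get flipped to match their neighbours. A rough plain-Java sketch of the idea (everything here is my own naming, not from the Processing example):

```java
public class SpeckleFilter {
    // Untested idea: for each pixel of a binary foreground mask,
    // take a strict majority vote over its 3x3 neighbourhood
    // (including itself). Lone "holes" inside a solid blob, and
    // lone foreground specks in the background, get flipped.
    static boolean[][] majority3x3(boolean[][] mask) {
        int h = mask.length, w = mask[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int votes = 0, total = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                        total++;
                        if (mask[ny][nx]) votes++;
                    }
                }
                out[y][x] = votes * 2 > total; // strict majority wins
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A lone background ("false") pixel inside a solid foreground blob
        boolean[][] mask = {
            {true, true,  true},
            {true, false, true},
            {true, true,  true},
        };
        boolean[][] cleaned = majority3x3(mask);
        System.out.println(cleaned[1][1]); // hole filled: true
    }
}
```

No clue yet whether this would be fast enough at 640×480 inside draw(), or whether it would also eat the fine edge detail I'm trying to keep, so take it as a sketch only.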
On the other hand, I'm also open to hardware alternatives. I've considered using stereo cameras, but I'm not very experienced in this field. Do any of you have advice about them? Do you think using them would ease this task?
My apologies if I'm asking for an impossible thing! Still, any hint would be much appreciated. Thanks for everything in advance!