First of all, I am completely new to this (2 days of experience). I have been working with Arduino (basics), so I am somewhat familiar with the interface and syntax of Processing.
Now for my problem:
I want to create an image that consists of a vertical line of pixels captured each frame, with each new line added to the right of the previous one. I already managed to do this with the code below.
import processing.video.*;
Capture myCapture;
int a=0;
int i=0;
int hlf = 1200;  // height of the camera image
int blf = 1600;  // width of the camera image
int bsn = 10;    // width of the pixel slice
int hsn = hlf;   // height of the screen
int fpss = 100;  // every fpss milliseconds a slice is taken
int xpl = blf - bsn;            // x location of the slice
int fpscherm = ((xpl) / bsn);   // slices per screen
int tijd = ((blf / bsn) * fpss) / 1000; // time in seconds to fill one screen
void setup()
{
  size(blf, hlf);
  //print(myCapture.list());
  myCapture = new Capture(this, width, height, 30);
  print(tijd);
}

void captureEvent(Capture myCapture) {
  myCapture.read();
}
void draw() {
  i = 0;
  a = 0;
  for (i = 0; i < fpscherm; i = i + 1)
  {
    myCapture.crop(800, 0, bsn, hsn); // (x position of the slice, y position of the slice, slice width, slice height)
    image(myCapture, xpl, 0, bsn, hsn);
    PImage myImage = get(xpl, 0, bsn, hsn);
    image(myImage, a, 0);
    a = a + bsn;
    delay(fpss);
  }
  saveFrame("####.jpg");
}
This creates an odd-looking image that works on the same principle as a photo-finish camera.
If you run the code, you will see what I am getting at. I need this to study crowd movements from a top view.
Now the hard part: within each 10-pixel-wide slice, I want the pixels whose colour stays the same as in the previous slice (within some tolerance) to be coloured white. This is a kind of background subtraction.
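
Something like the untested sketch below is roughly what I have in mind. The function name maskStaticPixels, the per-channel comparison and the tolerance value of 60 are just placeholders I made up, not a finished solution:

PImage previousSlice;  // slice from the previous pass, kept for comparison

// Compare the current slice against the previous one and whiten pixels
// whose colour barely changed (a crude background subtraction).
PImage maskStaticPixels(PImage current, PImage previous, float tolerance) {
  PImage result = current.get();  // work on a copy so the original slice stays intact
  result.loadPixels();
  previous.loadPixels();
  for (int p = 0; p < result.pixels.length; p++) {
    color c = result.pixels[p];
    color q = previous.pixels[p];
    // sum of per-channel differences; a small value means the pixel did not change
    float diff = abs(red(c) - red(q)) + abs(green(c) - green(q)) + abs(blue(c) - blue(q));
    if (diff < tolerance) {
      result.pixels[p] = color(255);  // unchanged compared to the previous slice -> white
    }
  }
  result.updatePixels();
  return result;
}

Inside the for loop in draw() it would then be used like this, instead of drawing myImage directly:

PImage mySlice = get(xpl, 0, bsn, hsn);
if (previousSlice != null) {
  image(maskStaticPixels(mySlice, previousSlice, 60), a, 0);
} else {
  image(mySlice, a, 0);  // first slice: nothing to compare against yet
}
previousSlice = mySlice;

Is this a sensible approach, or is there a better way to do this kind of background subtraction in Processing?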