How to get a better frameRate in motion detection

Hi, I'm using the following example for motion detection: https://github.com/shiffman/LearningProcessing/blob/master/chp16_video/example_16_13_MotionPixels/example_16_13_MotionPixels.pde

That example was written for a (320, 240) camera and it works nicely, but my canvas size is (1280, 720), so the frameRate drops very low, around 8 to 10 fps. And if I use the camera at (640, 480), the feed is displayed twice, as if it were duplicated.

I want to display my live feed at full resolution, which is (1280, 720). How can I scale my (640, 480) camera feed up to (1280, 720)? (And can the frameRate be better at that resolution?)



// Learning Processing
// Daniel Shiffman
// http://www.learningprocessing.com

// Example 16-13: Simple motion detection

import processing.video.*;
// Variable for capture device
Capture video;
// Previous Frame
PImage prevFrame;
// How different must a pixel be to be a "motion" pixel
float threshold = 50;

void setup() {
  size(1280, 720, P3D);
  video = new Capture(this, 640,480, 30);
  video.start();
  // Create an empty image the same size as the video
  prevFrame = createImage(video.width, video.height, RGB);
}

void captureEvent(Capture video) {
  // Save previous frame for motion detection!!
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); // Before we read the new frame, we always save the previous frame for comparison!
  prevFrame.updatePixels();
  // Read the new frame from the camera
  video.read();
}

void draw() {
  background(255);
  //pushMatrix();
  //scale(2.0);

  loadPixels();
  video.loadPixels();
  prevFrame.loadPixels();

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {

      // for (int x = 440; x < 840; x++) {
      //   for (int y = 0; y < 400; y++) {

      int loc = x + y*video.width;            // Step 1, what is the 1D pixel location
      color current = video.pixels[loc];      // Step 2, what is the current color
      color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

      // Step 4, compare colors (previous vs. current)
      float r1 = red(current); 
      float g1 = green(current); 
      float b1 = blue(current);
      float r2 = red(previous); 
      float g2 = green(previous); 
      float b2 = blue(previous);
      float diff = dist(r1, g1, b1, r2, g2, b2);

      // Step 5, How different are the colors?
      // If the color at that pixel has changed, then there is motion at that pixel.
      if (diff > threshold) { 
        // If motion, display black 
        
        pixels[loc] = color(0);
      } else {
        // If not, display white
        pixels[loc] = color(255);
      }
    }
  }
  updatePixels();
  
//popMatrix();

  stroke(255, 0, 0);
  rect(440, 0, 400, 400);
  surface.setTitle((int) frameRate + " motion");
}

Answers

  • Don't loop over every pixel.

  • edited May 2017

    @geekself108 -- as @clankill3r points out, you can sample your motion checks. If you have a 1280x720 video feed and you want to check for motion on the equivalent of a 320x240 video, then:

    1280 / 320 = 4
    720 / 240 = 3

    ... so sample every 4th pixel horizontally and every 3rd pixel vertically. That's 320x240 pixels:

    // Begin loop to walk through selected pixels
    for (int x = 0; x < video.width; x += 4) {
      for (int y = 0; y < video.height; y += 3) {
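
    As a quick sanity check on the arithmetic, here is a plain-Java sketch (illustrative names, not part of the original Processing sketch) that counts the pixels a strided scan visits:

```java
// Count the pixels visited by a strided scan of a w x h frame.
// With strides of 4 (x) and 3 (y) on a 1280x720 frame, the scan does the
// same amount of work as a full scan of a 320x240 frame.
public class StrideCount {
    static int countSamples(int w, int h, int strideX, int strideY) {
        int n = 0;
        for (int x = 0; x < w; x += strideX) {
            for (int y = 0; y < h; y += strideY) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countSamples(1280, 720, 4, 3)); // 76800 = 320 * 240
        System.out.println(countSamples(1280, 720, 1, 1)); // 921600 = 1280 * 720
    }
}
```

    That is roughly a 12x reduction in per-frame work, which is why the frameRate recovers.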
    
  • edited May 2017

    @clankill3r and @jeremydouglass thanks for your response.

    As per your suggestion I gave it a try, and now the frameRate is normal, but I still have an issue with the live feed duplicating, and it doesn't fill the whole screen; I mean it doesn't match the canvas size.

    What I'm looking for is for my live feed to be visible at (1280, 720), and I want to detect motion only in particular regions. For that I use the code below:

    // Begin loop to walk through selected pixels
    for (int x = 440; x < 840; x += 4) {
      for (int y = 0; y < 400; y += 3) {
    

    Can you help me scale these pixels up to (1280, 720)? It will be hard to debug if the feed is not the same size as the canvas.

    [screenshot: duplicated motion feed]

  • Let's say you are scaling all your dimensions by a factor of one half, and you do it in the following way:

    final float FACTOR = 0.5;
    PImage img = loadImage(...);
    PImage copy = img.get();   // e.g. 400 x 600 px
    copy.resize((int)(img.width*FACTOR), (int)(img.height*FACTOR));

    // NOW apply edge detection.
    // For any pixel of interest, scale it back up (the reverse of the initial operation):
    PVector[] features = detectFeatures(copy);

    for (int i = 0; i < features.length; i++) {
      features[i].mult(1/FACTOR);
    }

    image(img, 0, 0);  // Draw the image at its original size

    fill(random(100, 200));
    noStroke();
    for (PVector pv : features) {
      ellipse(pv.x, pv.y, 15, 15);   // Draw markers on top at original size
    }
    

    Notice this is partial, untested code. It takes the original image, scales it by half in both dimensions, extracts features (for example, edges), and then scales them back up. The last step is to draw the features on top of the initial high-resolution image.

    Kf
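
    The scale-back step in the last loop can be isolated as plain Java (FACTOR and the coordinates here are illustrative, not from the post above): a feature found at (x, y) in the downscaled copy maps back to (x / FACTOR, y / FACTOR) in the original image.

```java
// Map a coordinate found in a downscaled analysis image back to the
// coordinate system of the full-resolution original.
public class ScaleBack {
    static final float FACTOR = 0.5f; // analysis copy is half size in each dimension

    static int[] toFullRes(int x, int y) {
        return new int[] { Math.round(x / FACTOR), Math.round(y / FACTOR) };
    }

    public static void main(String[] args) {
        int[] p = toFullRes(100, 150);
        System.out.println(p[0] + "," + p[1]); // 200,300
    }
}
```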

  • edited May 2017

    @kfrajer Thanks for your response.

    As you guided me, I tried to resize my live camera feed from (320, 240) to (640, 480), but I'm not able to resize it. It gives me the same size of camera feed; I don't know why.

    
    import processing.video.*;
    Capture video;
    PImage frameResize;
    
    void setup() {
      size(640, 480);
      video = new Capture(this, 320, 240, 30);
      video.start();
    
      frameResize = createImage(width, height, RGB);
    }
    
    void captureEvent(Capture video) {
    
      video.read();
    
      frameResize.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); 
      frameResize.resize(640, 480);
      //frameResize.updatePixels();
    }
    
    void draw() {
      background(0);
    
      loadPixels();
      video.loadPixels();
    
      frameResize.loadPixels();
      frameResize.updatePixels();
      image(frameResize, 0, 0);
    }
    
    

    Note: I know that I can resize it directly in draw() with the line below, but that will not help me.

     
    image(video,0,0,width,height);
    
  • What do you mean you are not able to resize it?

    Suggestion: use frameResize = video.get(); on line 17

    OR

    void draw() {
      image(video, 0, 0,width,height);
      loadPixels();
    
      //(... HERE work with your scaled-up image buffer)
      updatePixels();
    }
    

    Kf

  • edited May 2017

    @kfrajer Thanks for help.

    i changed the line and it worked as i wanted. Thank you.

    But when I tried to apply that in my main code, I got an ArrayIndexOutOfBoundsException at line 59, color current = firstFrame.pixels[loc];

    Maybe I'm missing something? When I display only firstFrame, it resizes and works well, but when I try to display the motion, it gives me the error.

    
    // motionPixels_demo_cam
    
    import processing.video.*;
    // Variable for capture device
    Capture video;
    
    // Previous Frame & first Frame
    PImage prevFrame, firstFrame;
    
    // How different must a pixel be to be a "motion" pixel
    float threshold = 50;
    boolean which_play = true;
    
    void setup() {
      size(640, 480);
      video = new Capture(this, 320, 240, 30);
      video.start();
    
      firstFrame = createImage(width, height, RGB);
      firstFrame = video.get();
      firstFrame.resize(width, height);
    
      // Create an empty image the same size as the video
      prevFrame = createImage(width, height, RGB);
    }
    
    void captureEvent(Capture video) {
      //save the previous frame
      prevFrame=firstFrame.get();
      //prevFrame.resize(width, height);
      prevFrame.updatePixels();
    
      //live feed camera
      video.read();
    
      // read video frame and resize it 
      firstFrame = video.get();
      firstFrame.resize(width, height);
      firstFrame.updatePixels();
    }
    
    void draw() {
      background(0);
    
      loadPixels();
      video.loadPixels();
      firstFrame.loadPixels();
      prevFrame.loadPixels();
    
      if (which_play == false) {
        // displaying the resized live feed from camera
    
        image(firstFrame, 0, 0);
    
      } else if (which_play == true) {
    
         // calculate the motion
    
        // Begin loop to walk through every pixel
        for (int x = 0; x < firstFrame.width; x ++ ) {
          for (int y = 0; y < firstFrame.height; y ++ ) {
    
            int loc = x + y*firstFrame.width;            // Step 1, what is the 1D pixel location
            color current = firstFrame.pixels[loc];      // Step 2, what is the current color
            color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color
    
            // Step 4, compare colors (previous vs. current)
            float r1 = red(current); 
            float g1 = green(current); 
            float b1 = blue(current);
            float r2 = red(previous); 
            float g2 = green(previous); 
            float b2 = blue(previous);
            float diff = dist(r1, g1, b1, r2, g2, b2);
    
            // Step 5, How different are the colors?
            // If the color at that pixel has changed, then there is motion at that pixel.
            if (diff > threshold) { 
              // If motion, display black
              pixels[loc] = color(0);
            } else {
              // If not, display white
              pixels[loc] = color(255);
            }
          }
        }
        updatePixels();
      }
    }
    
    void keyPressed() {
      if (keyCode == UP) {
        which_play = true;
      }
    
      if (keyCode == DOWN) {
        which_play = false;
      }
    }
    
    
    
  • Ok, so it seems there is a conflict between the drawing thread and the capture-loading thread. I will present two sketches. The first is a slight modification of your previous post. It only works if you don't resize the images, which is not convenient in your situation. Notice the sizes of the images and the capture object: they are the same, so resize has no effect (I didn't remove the calls from the code).

    Note that this code does not work if you halve the dimensions of the capture object.

    In the second sketch I took a different approach: I defeated the race condition between the two threads by using noLoop() and triggering the draw() method after a new image is loaded. This might work for you, or at least it will show you a different approach.

    There is a third, more standard approach. I believe it is provided as one of the examples of the video library, under the Capture folder. In that example, instead of using firstFrame, prevFrame, and the current graphics object (the one handled by draw()), you only use prevFrame and the current graphics object.

    Kf

    FIRST CODE Only works from 640x480 into 640x480

    // motionPixels_demo_cam
    
    import processing.video.*;
    // Variable for capture device
    Capture video;
    
    // Previous Frame & first Frame
    PImage prevFrame, firstFrame;
    
    // How different must a pixel be to be a "motion" pixel
    float threshold = 50;
    boolean which_play = true;
    
    void setup() {
      size(640, 480);
      video = new Capture(this, 640,480, 30);
      video.start();
    }
    
    void captureEvent(Capture video) {
    
      println("Video triggered @" + frameCount);
    
      if (firstFrame!=null) {
        prevFrame=createImage(width, height, RGB);
        prevFrame=firstFrame.get();
      }
    
      video.read();
    
      firstFrame=createImage(width, height, RGB);
      firstFrame = video.get();
      firstFrame.resize(width, height);
    }
    
    void draw() {
      //@@ background(0); 
    
      if (prevFrame==null) {
        print("First frame " + firstFrame);
        println(" Prev  frame " + prevFrame);
        return;
      }
    
      if (which_play == false) {
        // displaying the resized live feed from camera
        image(firstFrame, 0, 0);
      } else if (which_play == true) {
    
        loadPixels();
        //@@ video.loadPixels();
        firstFrame.loadPixels();
        prevFrame.loadPixels();
        // calculate the motion
    
        print("****frame " + firstFrame.width+" "+firstFrame.height);
        println(" Prev  frame " + prevFrame.width+" "+prevFrame.height);
    
        // Begin loop to walk through every pixel
        for (int x = 0; x < firstFrame.width; x ++ ) {
          for (int y = 0; y < firstFrame.height; y ++ ) {
    
            int loc = x + y*firstFrame.width;            // Step 1, what is the 1D pixel location
            color current = firstFrame.pixels[loc];      // Step 2, what is the current color
            color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color
    
            // Step 4, compare colors (previous vs. current)
            float r1 = red(current); 
            float g1 = green(current); 
            float b1 = blue(current);
            float r2 = red(previous); 
            float g2 = green(previous); 
            float b2 = blue(previous);
            float diff = dist(r1, g1, b1, r2, g2, b2);
    
            // Step 5, How different are the colors?
            // If the color at that pixel has changed, then there is motion at that pixel.
            if (diff > threshold) { 
              // If motion, display black
              pixels[loc] = color(0);
            } else {
              // If not, display white
              pixels[loc] = color(255);
            }
          }
        }
        updatePixels();
      }
    }
    
    void keyPressed() {
      if (keyCode == UP) {
        which_play = true;
      }
    
      if (keyCode == DOWN) {
        which_play = false;
      }
    }
    

    SECOND CODE Works from 320,240 into 640x480

    // motionPixels_demo_cam
    
    final color kBLACK=color(0);
    final color kWHITE=color(255);
    
    import processing.video.*;
    // Variable for capture device
    Capture video;
    
    // Previous Frame & first Frame
    PImage prevFrame, firstFrame;
    
    // How different must a pixel be to be a "motion" pixel
    float threshold = 50;
    boolean which_play;
    
    void setup() {
      size(640, 480);
      video = new Capture(this, 320, 240, 30);
      video.start();
    
      which_play = true;
      surface.setTitle("Mode Differential");
    
      prevFrame=firstFrame=createImage(width, height, RGB);
      noLoop();
    }
    
    void captureEvent(Capture video) {
    
      if (firstFrame!=null) {
        prevFrame=firstFrame;//.get();
      }
    
      video.read();
      firstFrame = video.get();
      redraw();
    }
    
    
    void draw() {
    
      if (which_play == true) {
        firstFrame.loadPixels();
        prevFrame.loadPixels();
    
        processFrameDifferential();
    
        firstFrame.updatePixels();
        image(firstFrame, 0, 0, width, height);
      } else {
        image(video, 0, 0, width, height);
      }
    }
    
    void processFrameDifferential() {
      for (int x = 0; x < firstFrame.width; x ++ ) {
        for (int y = 0; y < firstFrame.height; y ++ ) {
    
          int loc = x + y*firstFrame.width;            // Step 1, what is the 1D pixel location
          color current = firstFrame.pixels[loc];      // Step 2, what is the current color
          color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color
    
          // Step 4, compare colors (previous vs. current)
          float r1 = (current>>16)&0xff;
          float g1 = (current>>8)&0xff;
          float b1 = current&0xff;
          float r2 = (previous>>16)&0xff;
          float g2 = (previous>>8)&0xff;
          float b2 = previous&0xff;
          float diff = dist(r1, g1, b1, r2, g2, b2);
    
          // Step 5, How different are the colors?
          // If the color at that pixel has changed, then there is motion at that pixel.
          if (diff > threshold) 
            firstFrame.pixels[loc] = kBLACK;
          else
            firstFrame.pixels[loc] = kWHITE;
        }
      }
    }
    
    
    
    void keyPressed() {
      if (keyCode == UP) {
        which_play = true;
        surface.setTitle("Mode Differential");
      }
    
      if (keyCode == DOWN) {
        which_play = false;
        surface.setTitle("Mode Normal");
      }
    }
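
    The channel extraction in processFrameDifferential() relies on Processing packing colors as 0xAARRGGBB ints. Here is a standalone plain-Java version of the same bit-shift trick (hypothetical helper class, not from the sketch above):

```java
// Extract 8-bit channels from a packed 0xAARRGGBB color int -- the same
// trick processFrameDifferential() uses to avoid red()/green()/blue() calls.
public class Channels {
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    public static void main(String[] args) {
        int c = 0xFF336699; // opaque color with R=0x33, G=0x66, B=0x99
        System.out.println(red(c) + "," + green(c) + "," + blue(c)); // 51,102,153
    }
}
```

    Mind the shift amounts: each channel must use the same shift for both the current and the previous frame, otherwise the per-channel differences compare mismatched channels.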
    
  • edited May 2017

    @kfrajer Thanks for your help.

    Maybe your first code will not work out for me, because my FINAL canvas size will be (1920, 1080) or (1080, 1920) (and most probably it will be (1080, 1920)).

    The SECOND CODE works without any error, but at some frames it looks like the frame is blinking, and because of that it seems like it is not calculating the motion / pixel difference.

    Showing two back-to-back frames. 1st frame:

    [screenshot: 1st frame]

    2nd frame:

    [screenshot: 2nd frame]

    I don't get why this is happening.
