Substitute for get() when using Processing video.


I'm trying to figure out a way to get a section of pixels from a video and then display that section however I choose. For example: capture video through the built-in camera, load in the pixels, and then draw just a section of those pixels. So you could split the video into a 2x2 grid and, by grabbing sections of pixels, display the top-right section in the bottom-left position.

I have done this with images quite easily using the get() function, but that function isn't available with video. I have also seen that Capture.crop no longer exists in the video library.
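
To illustrate, something like the following works on a plain PImage (a rough sketch of the image version, not my exact code; "grid.jpg" is just a placeholder filename):

    // Split a PImage into a 2x2 grid and draw the top-right tile in the bottom-left.
    PImage img;

    void setup() {
      size(640, 480);
      img = loadImage("grid.jpg");
      img.resize(width, height);
    }

    void draw() {
      image(img, 0, 0);
      // grab the top-right quarter with get() ...
      PImage topRight = img.get(width/2, 0, width/2, height/2);
      // ... and draw it in the bottom-left quarter
      image(topRight, 0, height/2);
    }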

Any suggestions would be great!

Answers

  • Answer ✓

    Thanks for your reply! I now see how that relates. I found a useful post on Stack Overflow: http://stackoverflow.com/questions/17705781/video-delay-buffer-in-processing-2-0

    I ended up extending that approach: I copy each camera frame into a PImage with createImage() and copy(), store it in an array, and then use a for loop to get() sections from those stored frames, since they are plain images rather than live video. They are only delayed by 2 frames.

    Here is my test:

        /* Substitute for get() when using Processing video.
           Elliot 25/03/15
           Reference: http://stackoverflow.com/questions/17705781/video-delay-buffer-in-processing-2-0
        */
        import processing.video.*;

        Capture cam;
        // Ring buffer of frames, each sliced into a 3x3 grid of tiles:
        // newImage[frame][gridColumn][gridRow]
        PImage[][][] newImage = new PImage[3][3][3];
        int w = 1280;
        int h = 720;
        int numFrames = 3;
        int iWrite = 0, iRead = 1;  // read lags write, so tiles are shown 2 frames late

        void setup() {
          size(700, 700);
          cam = new Capture(this, w, h);
          cam.start();
        }

        void draw() {
          if (cam.available()) {
            cam.read();
            // Copy the live frame into a plain PImage so get() can be used on it.
            PImage image = createImage(cam.width, cam.height, RGB);
            image.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
            image.resize(900, 600);

            // Write: slice the current frame into a 3x3 grid of 300x200 tiles.
            for (int i = 0; i < 3; i++) {
              for (int j = 0; j < 3; j++) {
                newImage[iWrite][i][j] = image.get(i*300, j*200, 300, 200);
              }
            }

            // Read: draw tiles from the delayed frame at a few test positions.
            for (int i = 0; i < 2; i++) {
              for (int j = 0; j < 2; j++) {
                if (newImage[iRead][i][j] != null) {
                  image(newImage[iRead][i][j], 300, 200);
                  image(newImage[iRead][i][j], 150, 200);
                  image(newImage[iRead][i][j], 0, 200);
                  image(newImage[iRead][i][j], 0, 100);
                  image(newImage[iRead][i][j], 0, 60);
                  image(newImage[iRead][i][j], 100, 150);
                  image(newImage[iRead][i][j], 200, 300);
                }
              }
            }

            // Advance the ring-buffer indices once per captured frame.
            iWrite++;
            iRead++;
            if (iRead >= numFrames) {
              iRead = 0;
            }
            if (iWrite >= numFrames) {
              iWrite = 0;
            }
          }
        }
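
    For reference, if the delay buffer isn't needed, the same idea works with a single copied frame: pull the camera pixels into a regular PImage with copy(), and then get() works on it. A stripped-down sketch along those lines (dimensions are arbitrary; the 2x2 swap just draws the top-right quarter in the bottom-left):

        import processing.video.*;

        Capture cam;

        void setup() {
          size(640, 480);
          cam = new Capture(this, 640, 480);
          cam.start();
        }

        void draw() {
          if (cam.available()) {
            cam.read();
            // Copy the live frame into a plain PImage so get() can be used on it.
            PImage frame = createImage(cam.width, cam.height, RGB);
            frame.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
            // Grab the top-right quarter and draw it in the bottom-left quarter.
            PImage topRight = frame.get(frame.width/2, 0, frame.width/2, frame.height/2);
            image(frame, 0, 0, width, height);
            image(topRight, 0, height/2, width/2, height/2);
          }
        }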
    