Custom frame size using Frame Differencing

Hiya there, I am currently trying to make an application where half the screen displays a 'motion capture' image using frame differencing, and the other half shows a regular camera preview. I have a separate application capturing the screen, so I only need to preview the image; I am currently using KetaiCamera as shown below. The frame differencing uses some code from the Coding Train, which works well on my PC, but transferring it to Android has been problematic.

Currently the image wraps itself across the screen and will only accept size formats equal to the camera resolutions; it also seems impossible to resize or transform it apart from making the screen size itself smaller. I have linked an image of the current output. It's a mix of various ideas and pieces of code at the moment, but if anyone has any advice or ideas on any part of it I would massively appreciate it.
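The wrap is usually an indexing artifact: a pixel index computed with one buffer's width is being used to address a buffer of a different width. A minimal plain-Java illustration of that arithmetic, using the resolutions from this post (no Processing or Ketai calls, just the index math):

```java
public class WrapDemo {
    // Map an (x, y) coordinate to a 1D pixel index for a given row width.
    static int loc(int x, int y, int rowWidth) {
        return x + y * rowWidth;
    }

    public static void main(String[] args) {
        int camWidth = 1920;     // camera buffer width
        int sketchWidth = 2560;  // sketch (screen) buffer width

        // First pixel of the camera's second row:
        int index = loc(0, 1, camWidth);          // 1920

        // Interpreted against the sketch's width, that index is still on
        // screen row 0, at x = 1920 -- the camera row has wrapped sideways.
        System.out.println(index % sketchWidth);  // x on screen: 1920
        System.out.println(index / sketchWidth);  // y on screen: 0
    }
}
```

Each camera row is 640 pixels shorter than a screen row, so successive rows drift instead of stacking, which produces the wrapped output described above.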

import ketai.camera.*;
import processing.video.*;

KetaiCamera cam;

PImage prev;

float threshold = 25;

float motionX = 0;
float motionY = 0;

float lerpX = 0;
float lerpY = 0;

void setup() {
  //screen size of phone
  size(2560, 1440);
  imageMode(CENTER);
  orientation(LANDSCAPE);
  prev = createImage(1920, 1080, RGB);
  cam = new KetaiCamera(this, 1920, 1080, 24);

}



void draw()  {
  cam.loadPixels();
  prev.loadPixels();


  int count = 0;

  float avgX = 0;
  float avgY = 0;

  loadPixels();
  // Begin loop to walk through every pixel
  for (int x = 0; x < cam.width; x++ ) {
    for (int y = 0; y < cam.height; y++ ) {
      int loc = x + y * cam.width;
      // What is current color
      color currentColor = cam.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      color prevColor = prev.pixels[loc];
      float r2 = red(prevColor);
      float g2 = green(prevColor);
      float b2 = blue(prevColor);

      float d = distSq(r1, g1, b1, r2, g2, b2);

      if (d < threshold*threshold) {
        //stroke(255);
        //strokeWeight(1);
        //point(x, y);
        avgX += x;
        avgY += y;
        count++;
        pixels[loc] = color(255);
      } else {
        pixels[loc] = color(0);
      }
    }

  }

  updatePixels();

  //println(mouseX, threshold);
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}


void onCameraPreviewEvent()
{
  cam.read();
}

// start/stop camera preview by tapping the screen
void mousePressed()
{
  if (cam.isStarted())
  {
    cam.stop();
  }
  else
    cam.start();
}
void keyPressed() {
  if (key == CODED) {
    if (keyCode == MENU) {
      if (cam.isFlashEnabled())
        cam.disableFlash();
      else
        cam.enableFlash();
    }
  }
}

![Screenshot_20171030-222740](https://forum.processing.org/two/uploads/imageupload/158/Z5115UH13W6X.png "Screenshot_20171030-222740")
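As an aside, the per-pixel test in the sketch above compares a squared colour distance against threshold*threshold, which avoids taking a square root for every pixel. The core of that comparison, extracted into self-contained plain Java for clarity:

```java
public class DiffDemo {
    // Squared Euclidean distance between two RGB colours.
    static float distSq(float r1, float g1, float b1,
                        float r2, float g2, float b2) {
        float dr = r2 - r1, dg = g2 - g1, db = b2 - b1;
        return dr * dr + dg * dg + db * db;
    }

    // True when the two colours are within `threshold` of each other,
    // compared without ever taking a square root.
    static boolean similar(float r1, float g1, float b1,
                           float r2, float g2, float b2, float threshold) {
        return distSq(r1, g1, b1, r2, g2, b2) < threshold * threshold;
    }

    public static void main(String[] args) {
        System.out.println(similar(100, 100, 100, 110, 110, 110, 25)); // small change: true
        System.out.println(similar(0, 0, 0, 255, 255, 255, 25));       // big change: false
    }
}
```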

Answers

  • I am guessing you don't see any image on your device, do you? In your code, you need to call this next line in draw():

    image(cam, width/4,height/2, width/2,height);

    to show the image in half of the sketch. The line above assumes you are using imageMode set to CENTER. For the differencing, you will need to create a second buffer where you store the differenced frame; then you draw this buffer in the second half of the sketch, similar to what I have shown above.

    This code is untested in Android. Tell us if you run into any issues.

    Kf
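
    To see why those arguments put the preview in one half of a 2560x1440 sketch: with imageMode(CENTER) the x/y arguments are the image's centre, so the rectangle it covers can be worked out directly. A small plain-Java check of that arithmetic (not Processing code, just the numbers):

    ```java
    public class CenterModeDemo {
        // With imageMode(CENTER), image(img, cx, cy, w, h) covers the
        // rectangle [cx - w/2, cx + w/2] x [cy - h/2, cy + h/2].
        static int[] cover(int cx, int cy, int w, int h) {
            return new int[] { cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2 };
        }

        public static void main(String[] args) {
            int width = 2560, height = 1440;
            // image(cam, width/4, height/2, width/2, height)
            int[] r = cover(width / 4, height / 2, width / 2, height);
            // left=0, top=0, right=1280, bottom=1440: exactly the left half
            System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
        }
    }
    ```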

  • edited November 1

    Hi, I think I've narrowed down the issue slightly. When the size() under setup() is the same as the camera resolution I am able to produce the differenced image, yet as soon as I format it to fullscreen (2560 x 1440) the information gets stretched across the entire screen on the y axis and compressed on the x axis. The two images below will explain what I mean, and here are the two pieces of code for each instance.

    Image 1

    Image 1 Code-

    void setup() {
      imageMode(CENTER);
      orientation(LANDSCAPE);
      size(640, 480);  // <-- matches the camera resolution
      prev = createImage(640, 480, RGB);
      cam = new KetaiCamera(this, 640, 480, 24);
      cam.start();
    }
    
    void mousePressed() {
      if (cam.isStarted()) {
        cam.stop();
      } else {
        cam.start();
      }
    }
    
    void keyPressed() {
      if (key == CODED) {
        if (keyCode == MENU) {
          if (cam.isFlashEnabled())
            cam.disableFlash();
          else
            cam.enableFlash();
        }
      }
    }
    
    void onCameraPreviewEvent(KetaiCamera cam) {
      prev.copy(cam, 0, 0, 640, 480, 0, 0, 640, 480);
      prev.updatePixels();
      cam.read();
    }
    

    Image 2

    Image 2 Code-

    void setup() {
      imageMode(CENTER);
      orientation(LANDSCAPE);
      size(2560, 1440);  // <-- fullscreen, larger than the camera resolution
      prev = createImage(640, 480, RGB);
      cam = new KetaiCamera(this, 640, 480, 24);
      cam.start();
    }
    
    void mousePressed() {
      if (cam.isStarted()) {
        cam.stop();
      } else {
        cam.start();
      }
    }
    
    void keyPressed() {
      if (key == CODED) {
        if (keyCode == MENU) {
          if (cam.isFlashEnabled())
            cam.disableFlash();
          else
            cam.enableFlash();
        }
      }
    }
    
    void onCameraPreviewEvent(KetaiCamera cam) {
      prev.copy(cam, 0, 0, 640, 480, 0, 0, 640, 480);
      prev.updatePixels();
      cam.read();
    }
    
    void draw() {
      image(cam, width/4,height/2, width/2,height);
      cam.loadPixels();
      prev.loadPixels();
    

    The rest of the code is the same as the original post, thank you for the previous feedback and any advice would be appreciated!

  • @kfrajer the main problem I seem to be having is scaling the differenced data, as changing size() and using scale() does not work. How would I go about drawing this into a buffer?
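
    One way to make the differenced data scalable is to write it into its own PImage at the camera's resolution and let image() stretch that buffer on draw. The stretch itself is just nearest-neighbour index math; here is a plain-Java sketch of that resampling (names and sizes are made up for illustration):

    ```java
    public class ScaleDemo {
        // Nearest-neighbour resample of a src pixel grid into dst dimensions.
        static int[] resize(int[] src, int srcW, int srcH, int dstW, int dstH) {
            int[] dst = new int[dstW * dstH];
            for (int y = 0; y < dstH; y++) {
                for (int x = 0; x < dstW; x++) {
                    int sx = x * srcW / dstW;  // source column
                    int sy = y * srcH / dstH;  // source row
                    dst[x + y * dstW] = src[sx + sy * srcW];
                }
            }
            return dst;
        }

        public static void main(String[] args) {
            // 2x2 source with distinct quadrant values
            int[] src = { 1, 2, 3, 4 };
            int[] dst = resize(src, 2, 2, 4, 4);
            // each source pixel now covers a 2x2 block
            System.out.println(dst[0] + " " + dst[3] + " " + dst[12] + " " + dst[15]);
        }
    }
    ```

    In Processing you would not normally write this loop yourself: drawing the buffer with image(diffBuffer, x, y, w, h) performs the scaling for you.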

  • edited November 4 Answer ✓

    Here is a tested demo using Android Mode 4.0, Ketai v14 and Processing 3.3.6, with uses-sdk android:minSdkVersion="17" android:targetSdkVersion="26".

    Notice that even with a lower Ketai cam resolution the app lags, or at least that is what I observe on my old Galaxy S4.

    Notice you can adjust the threshold for the differencing operation by touching along the x axis in landscape mode. I find that values below 30 let you see edges or patterns.

    Kf

    import ketai.camera.*;
    
    
    //===========================================================================
    // FINAL FIELDS:
    
    
    //===========================================================================
    // GLOBAL VARIABLES:
    
    
    KetaiCamera cam; 
    PImage prev;
    PImage curr;
    float threshold = 20;
    boolean procOn=true;
    
    //===========================================================================
    // PROCESSING DEFAULT FUNCTIONS:
    
    void setup() {
      fullScreen();
      orientation(LANDSCAPE);
    
      textAlign(CENTER, CENTER);
      rectMode(CENTER);
    
      fill(255);
      strokeWeight(2);
      textSize(32);
    
      prev = createImage(width/2, height, RGB);
      curr=createImage(width/2, height, RGB);
    
      cam = new KetaiCamera(this, 320, 240, 24);
      cam.start();
    }
    
    void draw() {  
      //background(0);
    
      if (cam != null && cam.isStarted()) {
        //image(cam, width/2, height/2, width, height);
    
        prev.loadPixels();
        //image(cam, width/4, height/2, width/2, height/2);
        image(cam, 0, 0, width/2, height);
        image(cam, width/2, 0, width/2, height);
        loadPixels();
    
        //if(!procOn)
        //return;
    
    
        // Begin loop to walk through every pixel
        for (int x = 0; x < width/2; x++ ) {
          for (int y = 0; y < height; y++ ) {
    
            int loc = x + y * width/2;  // PImage is half of dimensions specified by fullScreen
    
            color prevColor = prev.pixels[loc];
        float r2 = (prevColor>>16) & 0xff;
        float g2 = (prevColor>>8) & 0xff;
        float b2 = prevColor & 0xff;
    
    
            loc = x + y * width;  // Sketch dimensions as defined by fullScreen
    
            // What is current color
            color currentColor = pixels[loc];
            float r1 = (currentColor>>16) & 0xff;
            float g1 = (currentColor>>8) & 0xff;
            float b1 = currentColor & 0xff;
    
    
    
            float d = dist(r1, g1, b1, r2, g2, b2);
            if (d < threshold) 
              pixels[loc] = color(255);
            else
              pixels[loc] = color(0);
          }
        }
    
        updatePixels();
      } 
    }
    
    
    void mousePressed() {
    
      procOn=!procOn;
      threshold=map(mouseX,0,width,0,400);
    
      println("Now="+procOn +" Thresh="+threshold);
    
      //if (cam.isStarted()) 
      //  cam.stop();
      //else
      //  cam.start();
    }
    
    void onCameraPreviewEvent() {
    
      prev.copy(curr, 0, 0, cam.width, cam.height, 0, 0, width/2, height);
      cam.read();
      curr=cam.get();
    }
    

    Keywords: edge-detection android-ketai android-camera
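
    A note on the (c >> 16) & 0xff pattern used in this answer: it unpacks the channel bytes straight out of the packed ARGB int, which is much cheaper than calling red()/green()/blue() for every pixel of every frame. The same idea in self-contained plain Java (note that the three shifts must differ: red uses >> 16, green >> 8, and blue no shift):

    ```java
    public class ChannelDemo {
        // Extract 8-bit channels from a packed ARGB colour.
        static int red(int c)   { return (c >> 16) & 0xff; }
        static int green(int c) { return (c >> 8) & 0xff; }
        static int blue(int c)  { return c & 0xff; }

        public static void main(String[] args) {
            int c = 0xFF336699; // A=0xFF, R=0x33, G=0x66, B=0x99
            System.out.println(red(c) + " " + green(c) + " " + blue(c)); // 51 102 153
        }
    }
    ```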

  • Oh my God, I now think that @kfrajer was programmed in Heaven; you've given me exactly what I needed to work with. I think I need to go away and read up on how to use booleans properly, because I couldn't integrate the PImages into the code properly. Thank you very much! I will post some images and a link to the app for anyone who is interested, to show what I am using it for. Thanks again!
