Hiya there, I am currently trying to make an application that has half the screen displaying a 'motion capture' image using frame differencing, and the other half showing the regular camera. I have a separate application capturing the screen, so I only need to preview the image; I am currently using Ketai's camera (KetaiCamera) as shown below. The frame differencing uses some code from the Coding Train (), which works well on my PC, but transferring it to Android is proving problematic.
Currently the image wraps itself across the screen and will only accept sizes equal to the camera resolution, and it seems impossible to resize or transform it apart from making the screen size itself smaller. I have linked an image of the current output. It's a mix-up of various ideas and pieces of code at the moment, but if anyone has any advice or ideas on any part of it, I would massively appreciate it.
import ketai.camera.*;

KetaiCamera cam;
PImage prev;

float threshold = 25;
float motionX = 0;
float motionY = 0;
float lerpX = 0;
float lerpY = 0;

void setup() {
  // screen size of phone
  size(2560, 1440);
  imageMode(CENTER);
  orientation(LANDSCAPE);
  prev = createImage(1920, 1080, RGB);
  cam = new KetaiCamera(this, 1920, 1080, 24);
}

void draw() {
  cam.loadPixels();
  prev.loadPixels();

  int count = 0;
  float avgX = 0;
  float avgY = 0;

  loadPixels();
  // Walk through every camera pixel
  for (int x = 0; x < cam.width; x++) {
    for (int y = 0; y < cam.height; y++) {
      int loc = x + y * cam.width;
      // Current color
      color currentColor = cam.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      // Color of the same pixel in the previous frame
      color prevColor = prev.pixels[loc];
      float r2 = red(prevColor);
      float g2 = green(prevColor);
      float b2 = blue(prevColor);
      float d = distSq(r1, g1, b1, r2, g2, b2);
      if (d < threshold*threshold) {
        //stroke(255);
        //strokeWeight(1);
        //point(x, y);
        avgX += x;
        avgY += y;
        count++;
        pixels[loc] = color(255);
      } else {
        pixels[loc] = color(0);
      }
    }
  }
  updatePixels();
  //println(mouseX, threshold);
}

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) + (z2-z1)*(z2-z1);
}

void onCameraPreviewEvent() {
  // save the previous frame before reading the new one,
  // so draw() has something to difference against
  prev.copy(cam, 0, 0, cam.width, cam.height, 0, 0, prev.width, prev.height);
  prev.updatePixels();
  cam.read();
}

// start/stop camera preview by tapping the screen
void mousePressed() {
  if (cam.isStarted()) {
    cam.stop();
  } else {
    cam.start();
  }
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == MENU) {
      if (cam.isFlashEnabled()) {
        cam.disableFlash();
      } else {
        cam.enableFlash();
      }
    }
  }
}

![Screenshot_20171030-222740](https://forum.processing.org/two/uploads/imageupload/158/Z5115UH13W6X.png "Screenshot_20171030-222740")
Answers
I am guessing you don't see any image on your device, do you? In your code, you need to call this next line in draw():
image(cam, width/4, height/2, width/2, height);
to show the image in half of the sketch. The line above assumes you have imageMode set to CENTER. For the differencing, you will need to create a second buffer where you store the differenced result. Then you draw this buffer in the second half of the sketch, similar to what I have shown above.
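For illustration, the per-pixel differencing into that second buffer can be sketched in plain Java, outside the Processing runtime (editor's sketch: the class and method names are made up for the example, and the bit shifts mirror what Processing's red()/green()/blue() do on a packed ARGB int):

```java
// Editor's sketch: frame differencing into a separate buffer,
// using plain Java ints in place of Processing's color type.
public class DiffBuffer {

    // Squared distance between two RGB colors, as in the sketch's distSq().
    static float distSq(float r1, float g1, float b1,
                        float r2, float g2, float b2) {
        return (r2 - r1) * (r2 - r1)
             + (g2 - g1) * (g2 - g1)
             + (b2 - b1) * (b2 - b1);
    }

    // Build a black-and-white buffer from two same-sized frames.
    // cam and prev are packed 0xAARRGGBB pixel arrays.
    static int[] difference(int[] cam, int[] prev, float threshold) {
        int[] out = new int[cam.length];
        for (int i = 0; i < cam.length; i++) {
            int c = cam[i], p = prev[i];
            float r1 = (c >> 16) & 0xFF, g1 = (c >> 8) & 0xFF, b1 = c & 0xFF;
            float r2 = (p >> 16) & 0xFF, g2 = (p >> 8) & 0xFF, b2 = p & 0xFF;
            boolean still = distSq(r1, g1, b1, r2, g2, b2) < threshold * threshold;
            // white where the frame barely changed, black elsewhere,
            // matching the posted sketch's branches
            out[i] = still ? 0xFFFFFFFF : 0xFF000000;
        }
        return out;
    }
}
```

In the actual sketch you would write such values into the second PImage's pixels[], call updatePixels() on it, and then draw it with image() into the other half of the screen at whatever size you like.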
This code is untested in Android. Tell us if you run into any issues.
Kf
Hi, I think I've narrowed down the issue slightly. When the size() under setup() matches the camera resolution I am able to produce the differenced image, yet as soon as I format it to fullscreen (2560 x 1440) the information gets stretched across the entire screen on the y axis and compressed on the x axis. The two images below will show what I mean, and here are the two pieces of code for each instance.
Image 1 Code-
Image 2 Code-
The rest of the code is the same as the original post, thank you for the previous feedback and any advice would be appreciated!
@kfrajer the main problem I seem to be having is scaling the differenced data, as changing size() and using scale() does not work. How would I go about drawing this in a buffer?
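As a side note on why the output wraps and squashes: the loop indexes the sketch's own pixels[] with loc = x + y * cam.width, so once the sketch is wider than the camera, each camera row only fills part of a screen row. A tiny plain-Java illustration of that index mismatch (editor's sketch, hypothetical names):

```java
// Editor's sketch: where a camera pixel (x, y) lands on screen when the
// index is computed with the camera's row stride but used on the screen's
// pixel array, whose rows are screenWidth pixels long.
public class IndexWrap {
    static int[] landsAt(int x, int y, int camWidth, int screenWidth) {
        int loc = x + y * camWidth;                // index with camera stride
        return new int[]{ loc % screenWidth,       // actual screen x
                          loc / screenWidth };     // actual screen y
    }
}
```

With a 1920-wide camera and a 2560-wide screen, the pixel at camera position (0, 1) lands at screen position (1920, 0): still on the first screen row instead of starting the second, which is exactly the wrap in the screenshot. Writing the differenced pixels into a camera-sized PImage and letting image() scale it avoids the mismatch entirely.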
Here is a tested demo using Android Mode 4.0, Ketai v14 and Processing 3.3.6, with uses-sdk android:minSdkVersion="17" android:targetSdkVersion="26".
Notice that even with a lower Ketai cam resolution the app lags, or at least that is what I observe on my old Galaxy S4.
Notice you can adjust the threshold for the differencing operation by touching along the x axis in landscape mode. I find that using values less than 30 allows you to see edges or patterns.
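The touch-to-threshold idea presumably remaps the touch x coordinate onto a threshold range, the way Processing's map() rescales a value between two intervals. A minimal plain-Java sketch of that mapping (editor's sketch: the 0-100 output range and the names here are illustrative assumptions, not the demo's actual code):

```java
// Editor's sketch: derive a differencing threshold from a touch position.
public class TouchThreshold {

    // Linear remap of value from [inLo, inHi] to [outLo, outHi],
    // like Processing's map().
    static float map(float value, float inLo, float inHi,
                     float outLo, float outHi) {
        return outLo + (outHi - outLo) * (value - inLo) / (inHi - inLo);
    }

    // Touch at x on a screen screenWidth pixels wide
    // -> threshold in [0, maxThreshold].
    static float thresholdForTouch(float x, float screenWidth, float maxThreshold) {
        return map(x, 0, screenWidth, 0, maxThreshold);
    }
}
```

Touching mid-screen on a 2560-pixel-wide display would then give a threshold of 50 with a 0-100 range; values toward the left edge give the low thresholds (under 30) that bring out edges.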
Kf
Keywords: edge-detection android-ketai android-camera
Oh my God, I now think that @kfrajer was programmed in Heaven. You've given me exactly what I needed to work with. I think I need to go away and read up on how to use booleans properly, because I couldn't integrate the PImages into the code properly. Thank you very much! I will post some images and a link to the app for anyone who is interested, to show you what I am using it for. Thanks again!