Resizing a PImage in Processing 2 and 3

edited February 2017 in Android Mode

In an Android application that I've written using Processing 2 I have the following:

void setup() {
    if (cam == null) cam = new KetaiCamera(this, camWidth, camHeight, 30);    
}

void draw() {
    img = cam.get();
    img.loadPixels();
    img.updatePixels();
    // render the original image to screen, seems to be necessary before resizing...
    image(img, width/2, height/2, height, width);

    img.resize(7, 5);
    img.loadPixels();
    // render the scaled image over the original one
    image(img, width/2, height/2, height, width);
}

This has always worked fine for me with the Processing 2 Android lib. Now I'm trying to port it to Processing 3. I had to change some details (e.g. width and height have become displayWidth and displayHeight), but what bothers me most is that I'm not able to display the scaled image over the original one. I suspect it's related to resize() not working quite as it used to, but I don't know for sure. I'm simply not seeing the image, though I don't get an error either. For testing purposes I checked the color of the first pixel of the resized image - I'm always getting 0, 0, 0, which is certainly not right, but that doesn't explain why I'm not seeing the image.
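
For reference, the pixel check is essentially this (a minimal version of the test, not the full app):

    img.resize(7, 5);
    img.loadPixels();
    color c = img.pixels[0];  // first pixel of the resized image
    println(red(c) + ", " + green(c) + ", " + blue(c));  // always prints 0.0, 0.0, 0.0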

Does anyone know what might cause the problem? Have there been any changes between p2 and 3 regarding how resize() works?

Thanks very much, Stefan

Answers

  • edited February 2017

    @5tefan===

    try my code (it keeps some bits of yours, with comments explaining why your code does nothing):

    import ketai.camera.*;

    KetaiCamera cam;
    int camWidth = 800;
    int camHeight = 600;
    PImage img;
    boolean r = false;

    void settings() {
      size(displayWidth, displayHeight);
    }

    void setup() {
      orientation(LANDSCAPE);
      background(255, 0, 0);
      if (cam == null) cam = new KetaiCamera(this, camWidth, camHeight, 30);
      cam.start(); // added!
      imageMode(CENTER); // added
    }

    void onCameraPreviewEvent() { // added
      cam.read();
    }

    void draw() {
      if (cam != null && cam.isStarted()) { // added, to be sure a camera has been started
        img = cam.get();
        // render the original image to screen, seems to be necessary before resizing... // yes, it is necessary
        image(img, width/2, height/2, height, width);
        img.loadPixels();
        if (!r) {
          img.resize(400, 300);
        }
        img.updatePixels();
        background(255, 0, 0);
        println(img.width);
        // render the scaled image over the original one
        image(img, width/2, height/2); // dropped the height & width params
      } else { // added
        if (cam == null) cam = new KetaiCamera(this, camWidth, camHeight, 30);
        cam.start();
      }
    }

    void mouseReleased() { // added to verify: toggles r
      r = !r;
    }
    
  • edited February 2017

    Ok, thanks for all your replies. Sorry, of course I left out some code that I have in my app - opening the camera and setting the right permissions isn't my problem...

    @akenaton: I tried your code and I do indeed get an image, but it's not quite right. I've experienced the same problem earlier - see this comment: https://forum.processing.org/two/discussion/4123/#Comment_13862. Unfortunately that approach doesn't work anymore in p3. Using updatePixels() after resizing the image results in a re-scaled image being displayed, but its pixels are simply garbage (the effect is probably less obvious if the original size is close to the scaled size). I'm somehow suspecting the scaled image simply contains the pixels of the old, not-yet-scaled image, but I'm not sure...

    This is a screenshot of a test app. The upper half of the screen is the original hi-res image (a frame coming from the camera's video stream) and the lower half should be the down-scaled version of the upper half. As one can see, it's not quite what I would have expected...

    Code:

    import ketai.camera.*;
    
    KetaiCamera cam;
    PImage img;
    
    void settings() {
      size(displayWidth, displayHeight);
    }
    void setup() {
      background(255, 0, 0);
      println(width + ":" + height);
      if (cam == null) cam = new KetaiCamera(this, 1024, 768, 30);
      cam.start();//added!
    }
    void onCameraPreviewEvent() {
      cam.read();
    }
    
    void draw() {
      background(0);
      if (cam != null && cam.isStarted()) {
        img = cam.get();
        image(img, 0, 0, width, height/2);
        img.resize(5, 5);
        // using loadPixels() instead of updatePixels() used to give the right result in p2 - using it in p3 prevents the scaled image from being displayed
        // img.loadPixels();
        // ok, image is there, but pixels are garbage
        img.updatePixels();
        // render image below the original one
        image(img, 0, height/2, width, height/2);
      }
    }
    

    In the next screenshot I'm using a static image and - surprise, surprise - updatePixels() after resize() works as expected:

    Code:

    PImage a;
    
    void setup() {
      size(displayWidth, displayHeight);
      a = loadImage("jelly.jpg");  // Load the image into the program  
      noLoop();  // Makes draw() only run once
    }
    
    void draw() {
      // Displays the image at its actual size at point (0,0)
      image(a, 0, 0, width, height/2);
    
      a.resize(5, 5);
      a.updatePixels();
      // Display the scaled image below the original
      image(a, 0, height/2, width, height/2);
    }
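
    One workaround I might try next, just as a sketch (untested, and whether it sidesteps a possible colorspace issue is another question): instead of resizing the camera frame in place, copy it into a fresh, smaller PImage and let copy() do the scaling, e.g. in draw():

    // untested sketch: scale by copying into a new PImage instead of resize()
    PImage small = createImage(5, 5, RGB);
    small.copy(img, 0, 0, img.width, img.height, 0, 0, small.width, small.height);
    image(small, 0, height/2, width, height/2);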
    

    Any clues?

    Thanks, Stefan

  • edited February 2017

    @5tefan=== what was the original size of your jelly image?

    Other question: in the first case the orientation is landscape (the default for the Ketai cam); in the second case it is portrait.

  • @akenaton - the jelly image is 200 x 200 px. The orientation in the two examples differs because one uses KetaiCamera and produces its image from a camera frame, while the other uses an image loaded from disk. Should it matter what orientation an image has?

  • The camera might not be returning the image as RGB but may be using some other colourspace like YUV (a rough sketch of the usual conversion is below the links).

    some mention of that here:

    https://forum.processing.org/one/topic/ketaicamera.html

    http://www.akeric.com/blog/?tag=ketai
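
    For context: Android camera previews default to YUV420SP (NV21) - a full-resolution Y (luma) plane followed by interleaved V/U chroma samples at quarter resolution. The usual conversion to packed RGB ints looks roughly like this (my own sketch of the standard formula, with a made-up helper name - not Ketai's code):

    // Sketch of the standard YUV420SP (NV21) -> RGB conversion.
    void yuv420spToRGB(int[] rgb, byte[] yuv420sp, int width, int height) {
      final int frameSize = width * height;
      for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
          int y = (0xff & ((int) yuv420sp[yp])) - 16;
          if (y < 0) y = 0;
          if ((i & 1) == 0) {  // one V/U pair is shared by two horizontal pixels
            v = (0xff & yuv420sp[uvp++]) - 128;
            u = (0xff & yuv420sp[uvp++]) - 128;
          }
          int y1192 = 1192 * y;
          int r = (y1192 + 1634 * v);
          int g = (y1192 - 833 * v - 400 * u);
          int b = (y1192 + 2066 * u);
          if (r < 0) r = 0; else if (r > 262143) r = 262143;
          if (g < 0) g = 0; else if (g > 262143) g = 262143;
          if (b < 0) b = 0; else if (b > 262143) b = 262143;
          // pack into Processing's ARGB int format
          rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
      }
    }

    The resulting int[] can be written straight into a PImage's pixels[] array, followed by updatePixels().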

  • edited February 2017

    @5tefan===

    so your camera image is 1024/768, is landscape mode and you cannot compare with the jelly 200/200 image portrait mode result; in your code you are asking to display your image with width and height (from the display) parameters. (though i do agree also with @Koogs: it is possible that your cam works with yuv; i had once this kind of issue and had to write code for "transcoding": not so easy.) as for being sure about what you get, try to save the image from cam before resizing, go to photoshop and resize it with your values, 5/5 (not the same than 1024/768 which is not a square of course)- See the result. Or, instead of your jelly 200 put in the second code some image 1024/768, same res. And see the result (landscape mode, resize(5,5).

  • @koogs - great, thanks for these links! I'd already started on a little detour that might be resolved by http://www.akeric.com/blog/?tag=ketai. I had no idea about YUV decoding, but both links post an implementation (more or less the same one, it seems). I'm wondering about performance, but I'm going to give it a try nevertheless (it seems to be the Android way anyway...).

  • Not sure if this is of any help to your issue and I don't have much time to test it, but this function from Ketai might help you: public void decodeYUV420SP(byte[] yuv420sp)

    http://ketai.org/reference/camera/ketaicamera/

    Kf

  • @akenaton - it seems it doesn't matter what size a static image has. A normal jpg gets properly scaled. I suspect YUV is the culprit. Let's see...

  • edited February 2017

    @kfrajer - totally overlooked that method. That might indeed come in handy. Thanks!

  • Ha, I had the same thing in the comment box, saved as a draft.

    That method has exactly no documentation. 8(

  • @5tefan=== not sure about YUV; it's possible. As I have already written some code for YUV/RGB conversion (with native Android), I'll have a look at that ASAP.

  • I'm pretty stuck. The code from http://www.akeric.com/blog/?tag=ketai looked promising, yet didn't quite work for me. I made some changes; in particular, I passed the Processing applet into the CameraSurfaceView constructor instead of a Context. That way I could construct the right calls to createImage() and image(). The code ran, except that onPreviewFrame never got called. (I've posted a question about this on Stack Overflow too: stackoverflow.com/questions/42475910/android-app-using-processing-native-camera-onpreviewframe-not-called.) Just now I've tried replacing the applet argument to the constructor with a Context again, but still no luck - now surfaceCreated and surfaceChanged don't seem to be called either...
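
    For reference, my understanding of the bare-bones wiring that code needs is roughly this (a sketch against the plain android.hardware.Camera API, not the blog's exact code):

    // Two things that commonly cause the symptoms above:
    //  - surfaceCreated()/surfaceChanged() only fire once the SurfaceView has actually been
    //    added to the window (e.g. via addView() on the activity's content view), and
    //  - onPreviewFrame() only fires after setPreviewCallback() plus startPreview().
    import android.content.Context;
    import android.hardware.Camera;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    class CameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
      private Camera camera;

      CameraSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);  // register, otherwise the surface callbacks never arrive
      }

      public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
          camera.setPreviewDisplay(holder);
        } catch (java.io.IOException e) {
          e.printStackTrace();
        }
        camera.setPreviewCallback(this);  // without this, onPreviewFrame() is never called
        camera.startPreview();
      }

      public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

      public void surfaceDestroyed(SurfaceHolder holder) {
        if (camera != null) {
          camera.setPreviewCallback(null);
          camera.stopPreview();
          camera.release();
          camera = null;
        }
      }

      public void onPreviewFrame(byte[] data, Camera cam) {
        // data is a raw YUV420SP (NV21) frame by default
      }
    }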

    Any ideas?

    Thanks, Stefan
