Simulating long exposure shots on Processing

edited January 2018 in Library Questions

Hello everyone,

I am trying to create a sketch that simulates a long exposure shot captured on a camera. I searched for something similar on the web and found this reference, which proposes layering frames on top of each other with different opacities. I tried that with the following code, but it doesn't quite do the job.

Anybody tried to do that before?

Any other suggestions?

Thanks for your time! :-)

// This sketch tries to simulate a long exposure shot using Processing.

import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  printArray(Capture.list());
  video = new Capture(this, 640, 480, 30);
  video.start();
}

void mousePressed()
{
  int steps = 10;  // also means amount of pictures taken and merged later
  long opac = 255 / steps; 

  for (int i = 1; i <= steps; i++) 
  {
    if (video.available())
    {
      video.read();
      image(video, 0, 0);
      tint(255, opac * i);    // changes the opacity every step
      delay (100);            // delay between each step
      print(i); 
      println(" " + opac * i);
    }
  }
}

void keyPressed()
{
  background(0);
}

void draw() {
  // Step 5. Display the video image.
}

Answers

  • Just figured out that I had increasing opacity values as the step counter went up. Inverting the opacity from ascending to descending produced the effect I wanted. I changed

    for (int i = 1; i <= steps; i++) 
      {
        if (video.available())
        {
          video.read();
          image(video, 0, 0);
          tint(255, opac * i);    // changes the opacity every step
          delay (100);            // delay between each step
          print(i); 
          println(" " + opac * i);
        }
      }
    

    to

     for (int i = 1; i <= steps; i++) 
      {
        if (video.available())
        {
          video.read();
          tint(255, opac * (steps - i));    // changes the opacity every step
          delay (500);            // delay between each step
          print(i); 
          println(" " + opac * (steps - i));
        }
        image(video, 0, 0);
      }
    
  • It is still not polished. It doesn't seem to merge the images properly. Maybe because the opacity declines in a linear way? Is there a way to change the blending mode in Processing? Anyone? Thanks!

  • re:

    Is there a way to change blending mode on Processing?

    You probably don't need to freeze the sketch between frames in the mousePressed function with delay() -- instead, you could just use draw() and frameRate().

    Changing the opacity over time doesn't actually model how a long exposure photograph works. Instead, set a fixed opacity and blendMode, and simply draw to the canvas without clearing it. This will "expose" the canvas over time -- roughly as in the fragment at the end of this answer.

    Due to how the float-based averaging works, some blend modes will never reach a perfectly black or perfectly bright frame, no matter how long a still scene is exposed -- they max out at roughly 10-240 instead of 0-255.
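
    For reference, the accumulation step itself can be as small as the fragment below (not a full sketch -- it assumes a Capture object called video and an alpha of 20, both of which you would adapt to your setup):

    // inside draw(), with the canvas never cleared between frames
    blendMode(BLEND);     // default mode; ADD or LIGHTEST give different looks
    tint(255, 20);        // small, fixed opacity for every incoming frame
    image(video, 0, 0);   // each frame adds a little more "light" to the canvas

    Because each new frame only nudges the existing pixel values by a fraction, 8-bit rounding is presumably also why the blend never quite reaches pure black or white.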

  • Hey Jeremy, thanks for your reply! I noticed in other posts that it's possible to use frameRate() instead of delay(). I'm actually using the delay to be sure how long the capture will take; I'd like it to take around 3.2 seconds in total.

    I'm also trying to store each frame in an array, but with no luck. I managed to save each frame as a JPEG, but I can't access the images in the array. I guess I'm missing something...

    I also have to use a Kinect as the camera for what I'm developing, so I changed my original code a little bit.

    I also couldn't quite work out the real difference between an ArrayList and an array[]... :-(

    thanks again for your help !

    // Trying to implement a long exposure feature using the Kinect RGB camera.
    // When flicking through the images in the array, the frames don't seem to be
    // stored properly in the array.
    // 
    // P.S. - the onboard webcam could be used too; the Kinect is used here because
    // that's the camera installed
    
    import org.openkinect.freenect.*;
    import org.openkinect.processing.*;
    Kinect kinect;
    
    
    int totalTime = 3200; // in milliseconds   
    int steps = 32;       // also means amount of pictures taken and merged later
    ArrayList<PImage> secondArray = new ArrayList<PImage>(steps+1);
    
    int delayPhotos = totalTime/ steps;
    long opac = 255 / steps; 
    
    void setup() {
      background(0);
      frameRate(15);
      size(640, 480);
    
    
      kinect = new Kinect(this);
      kinect.initVideo(); // starts the Kinect RGB video stream
    }
    
    void mousePressed()
    {
      PImage imageKinect = kinect.getVideoImage();
    
      for (int i = 1; i <= steps; i++) 
      {
        secondArray.add(imageKinect);           // stores the frame in the array
        imageKinect.save("image" + i + ".jpg"); // saves images to disk
        tint(255, opac * (steps - i));    // changes the opacity every step
        delay (delayPhotos);              // delay between each step
        blendMode(BLEND);
        print(i); 
        println(" " + opac * (steps - i));
        save("combined.jpg");
        image(imageKinect, 0, 0);
      }
    }
    
    void keyPressed()
    {
      println(key);
      if (key == 'p' || key == 'P') 
      {
        println("Prints sequence of images in Muybridge style");
        background(0);
    
        // trying temporarily for 32 images
        int w = 80;
        int h = 60; 
        int x = 0;
        int y = 0;
        tint(255, 255);
        image(secondArray.get(1), x, y, w, h);
        image(secondArray.get(5), 80, y, w, h);
        image(secondArray.get(8), 160, y, w, h);
        image(secondArray.get(15), 240, y, w, h);
        image(secondArray.get(20), 320, y, w, h);
        image(secondArray.get(22), 400, y, w, h);
    
        // error: can't get different images from the array -- it seems to be
        // getting the latest frame from the camera instead
      }
    }
    
    void draw() {
    
      //image(kinect.getVideoImage(), 0, 0);
    }
    
  • edited January 2018

    Can you be more specific than saying you didn't have luck, or you couldn't access the images? What happened when you tried?

    Edit: I see now in your comments that the frames all show the latest. (Please include this information outside your code so it's easy to read.)

    My guess is that you need to copy the current frame into a new image (for example with get(), as in the snippet at the end of this answer), and then store that copy in your ArrayList.

    And the difference between an array and an ArrayList is that an ArrayList lets you add elements to it and automatically resizes itself, whereas arrays are fixed-size.
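
    In Processing that copy can be made with get(). A minimal illustration, using the imageKinect variable from your sketch (a hypothetical snippet, not tested against the Kinect library):

    PImage snapshot = imageKinect.get();  // get() returns a new PImage with its own pixels
    secondArray.add(snapshot);            // later camera reads won't overwrite this copy

    // adding the same reference over and over means every element is the same
    // PImage object, which is why all the stored frames look identical:
    // secondArray.add(imageKinect);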

  • edited January 2018

    Re:

    I'm actually using it to be sure how long the capture will be. I'd like it to take around 3.2 seconds to do it all.

    That is what frameRate() does -- and the sketch keeps running and accepts input if you don't use delay().

    If you set it to 10 in setup(), that's one frame every 100 ms.

    frameRate(10);
    

    Then in your header create a global shutter flag and duration value:

    boolean shutterOpen = false;
    int duration = 3200; // 3.2 seconds
    int closeTime = 0;
    

    ...and in keyPressed() open the shutter and set a timer:

    if (key == 'p' || key == 'P') {
      shutterOpen = true;
      closeTime = millis() + duration;
    }
    

    ...and in draw() do something each moment the shutter is open -- draw is your loop:

    if (shutterOpen) {
      // do something!
    }
    

    ...and don't forget in draw() to close the shutter once it gets to a draw frame when you are done.

    if (millis() > closeTime) {
      shutterOpen = false;
    }
    

    With this approach, you could set the exposure duration to 30 seconds -- or 10 minutes -- and you could still interact with the sketch, see a preview of the result forming on the screen, or hit cancel -- and the sketch would respond. With the delay approach, you just lock the sketch up for 10 minutes, and forcing it to quit entirely is your only escape.
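
    Putting those pieces together, a bare-bones version might look like the sketch below (untested; it assumes a standard webcam through the video library's Capture class, and the fixed tint of 25 is an arbitrary choice):

    import processing.video.*;

    Capture video;
    boolean shutterOpen = false;
    int duration = 3200;   // exposure length in milliseconds
    int closeTime = 0;

    void setup() {
      size(640, 480);
      frameRate(10);       // one accumulation step every 100 ms
      video = new Capture(this, width, height);
      video.start();
      background(0);
    }

    void draw() {
      if (video.available()) {
        video.read();
      }
      if (shutterOpen) {
        tint(255, 25);             // fixed opacity: the canvas "exposes" over time
        image(video, 0, 0);        // no background() call, so the frames pile up
        if (millis() > closeTime) {
          shutterOpen = false;     // exposure finished; the result stays on screen
        }
      }
    }

    void keyPressed() {
      if (key == 'p' || key == 'P') {
        background(0);             // fresh "film" for a new exposure
        shutterOpen = true;
        closeTime = millis() + duration;
      }
    }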

  • Hey Jeremy and kfrajer, thanks for your replies! I get what you mean about not having the sketch stuck inside the delay() loop. Thanks!

    Now I am considering the reference kfrajer posted earlier. I noticed it talks about a "circular loop" (a circular buffer), which involves creating an array with a known number of elements. This way the sketch would always be storing the last X frames in it and showing them on screen.

    So I guess the scope of the project would be:

    1. Keep a live image on screen and wait for a button press to start the "capture".
    2. When the button is pressed, show the image evolving on screen, with the image trails building up as it records (during the shutterOpen time).
    3. After the capture time finishes, keep the image on screen and save it to a JPG file.
    4. Wait 5 seconds and go back to step 1.

    That's pretty much that.

    I guess I'm having some real beginner issues here. I'm currently trying to store images recorded from the camera into the circular buffer array, and to flick through the array using the mouseX coordinate. Any clues about what I'm doing wrong?

    Thank you so much for your attention again! :-)

    import processing.video.*;   // import processing video library
    Capture video;            // video object
    PImage frames[];          // declares frames[] array
    PImage framesB[];         // declares framesB[] array
    int numFrames = 90;
    int pointer = 0;
    PImage tempImage;
    int frame_mouse = 0;
    
    void setup()
    {
      frameRate(10);
      size (320, 240);
      video = new Capture(this, width, height);
      frames = new PImage[numFrames];   // frames is an image array
      framesB = new PImage[numFrames];  // framesB is an image array  
      video.start();
      tempImage = createImage(width, height, ARGB);
    }
    
    void draw()
    { 
      if (video.available())
      { 
        video.read();         // read image from camera
        tempImage = video;    // assigns video to tempImage
        pointer = (pointer-1 + frames.length ) % frames.length; // creates circular buffer
        //println(pointer);                                     // prints on console current pointer
        framesB[pointer] = tempImage;
      } 
    
      frame_mouse = constrain(mouseX, 0, numFrames);
      println ("mouseX " + frame_mouse);
    
      image(framesB[frame_mouse], 0, 0, width, height); // gives "NullPointerException"
    }
    
  • edited January 2018
    1. to keep a live image onscreen and waiting the hit of a button to start the "capture".
    2. When a button is pressed, the image [...]

    This is the concept of a photo booth. The question is how to display multiple images together on a single screen. In the RGB delay post linked earlier, I believe a different tint value is applied to each image, working with the RGB channels -- one image is based only on red, the second only on blue and the third only on green -- and then they are drawn together, roughly as in the snippet below. In your case, I would try either applying a fixed tint to each image before it is drawn, or re-defining each image's alpha channel to a certain fixed value.
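
    As a rough illustration of that RGB-delay idea (a, b and c are hypothetical PImages holding three frames captured at different times):

    blendMode(ADD);                     // recombine the separated channels
    tint(255, 0, 0);  image(a, 0, 0);   // first frame contributes only red
    tint(0, 0, 255);  image(b, 0, 0);   // second frame contributes only blue
    tint(0, 255, 0);  image(c, 0, 0);   // third frame contributes only green
    blendMode(BLEND);                   // restore defaults afterwards
    noTint();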

    Regarding your post, the reason you are getting an NPE is very simple: you are trying to draw an image object that is null. You can fix this with something like: if (framesB[frame_mouse] != null) {...}

    Regarding showing a certain frame from your circular buffer: as it is, the image being displayed depends on the mouse position and on the current value of the pointer variable, which changes every time a new frame is acquired. I don't think this is doing what you want it to do. Below I show my version for you to explore.

    Kf

    //Processing Forum - Jan 22,2018
    //Summary: Long exposure photo booth (Partial implementation)
    //REFERENCES: https://forum.processing.org/two/discussion/25904/simulating-long-exposure-shots-on-processing#latest
    //Done by Kf
    
    //INSTRUCTIONS:
    //         *-- Photo booth with 4 options
    //         *-- Press 1 to show live video stream
    //         *-- Press 2 to capture long shot. After operation is completed, back to idle state
    //         *-- Press 3 to browse collected pictures based on mouse position
    //         *-- Press 4 to show long exposure: overlaying all captured images. 
    //         *--        [This operation needs to be implemented]
    
    //===========================================================================
    // IMPORTS:
    import processing.video.*;   // import processing video library
    
    //===========================================================================
    // FINAL FIELDS:
    final int N =10;
    final int IDLE=0;
    final int LOAD_SHOTS_FROM_CAM=1;
    final int SHOW_INDIVIDUAL_SHOTS=2;
    final int SHOW_LE_EFFECT=3;  //LE=>Long exposure effect
    
    //===========================================================================
    // GLOBAL VARIABLES:
    Capture video;            // video object
    PImage frames[];          // declares frames[] array
    
    int pointer = 0;
    PImage tempImage;
    int frame_mouse = 0;
    
    int state=IDLE;
    
    
    //===========================================================================
    // PROCESSING DEFAULT FUNCTIONS:
    
    void setup()
    {
      frameRate(10);
      size (320, 240);
      stroke(0);
      textSize(14);
      textAlign(CENTER,CENTER);
    
      video = new Capture(this, width, height);
      frames = new PImage[N];   // frames is an image array
      video.start();
    
      key='1';
      keyReleased();  //Update sketch title
    }
    
    void draw()
    { 
      background(255);
      if (state==IDLE) {
        image(video, 0, 0);
        return;
    
      } else if (state==LOAD_SHOTS_FROM_CAM) {
        text("TAKING LONG SHOT EXPOSURE.\n\n PLEASE WAIT\n\n Acquiring: "+pointer*1.0/N+"%", width/2, height/2);
        frames[pointer++] = video.get();
        if (pointer==N)
          state=IDLE;
    
      } else if (state==SHOW_INDIVIDUAL_SHOTS) {
        background(0,255,0);
        frame_mouse = int(map(mouseX, 0, width, 0, N));
        text("IMAGE NOT AVAILABLE=: "+frame_mouse, width/2, height/2);
    
        if (frames[frame_mouse]!=null) {
          image(frames[frame_mouse], 0, 0);
        }
    
      } else if (state==SHOW_LE_EFFECT) {
    
        //To be implemented...
        background(150,75,75);
      }
    
    }  //End draw()
    
    void captureEvent(Capture c) {
      c.read();
    }
    
    void keyReleased(){
    
      if(key=='1'){
        state=IDLE;
        surface.setTitle("Idle");
      }
      else if(key=='2'){
        state=LOAD_SHOTS_FROM_CAM;
        surface.setTitle("Capturing...");
        pointer=0;
      }
      else if(key=='3'){
        state=SHOW_INDIVIDUAL_SHOTS;
        surface.setTitle("Photo browsing");
      }
      else if(key=='4'){
        state=SHOW_LE_EFFECT;
        surface.setTitle("Long exposure shot");
      }
    
    }
    
  • Hey kfrajer!

    Thanks for your response! The selection of each state using the keyboard works great!

    I have implemented the "SHOW LE EFFECT" function as shown in the code below. There's an issue in the code, though, that could be fixed to make it more "elegant": I tried to implement the drawing using the loop

    image(frames[pointer++], 0, 0);
    tint(255, 100);
    

    the same way you do here:

    else if (state==LOAD_SHOTS_FROM_CAM) {
        text("TAKING LONG SHOT EXPOSURE.\n\n PLEASE WAIT\n\n Acquiring: "+pointer*1.0/N+"%", width/2, height/2);
        frames[pointer++] = video.get();
        if (pointer==N)
          state=IDLE;
    

    ... but it didn't produce the same image as when all the commands are written out one by one (quite weird, but that's what I've noticed).

    Besides, I noticed that the resulting image still shows mostly the last images captured. The first images don't seem to have the same weight as the last ones, even though all of them receive the same "tint(255, opac);" command.

    Any ideas?

    Thank you sooo much for your help!!

    F.

    //Processing Forum - Jan 22,2018
    //Summary: Long exposure photo booth (Partial implementation)
    //REFERENCES: https://forum.processing.org/two/discussion/25904/simulating-long-exposure-shots-on-processing#latest
    //Done by Kf
    //
    //New iteration by ferkrum: implemented overlaying for the long exposure. ISSUE: should be done using a loop instead of writing the commands one by one. More info in the comments in the code below.
    
    
    //INSTRUCTIONS:
    //         *-- Photo booth with 4 options
    //         *-- Press 1 to show live video stream
    //         *-- Press 2 to capture long shot. After operation is completed, back to idle state
    //         *-- Press 3 to browse collected pictures based on mouse position
    //         *-- Press 4 to show long exposure: overlaying all captured images. 
    //         *--        [This operation needs to be implemented]
    
    //===========================================================================
    // IMPORTS:
    import processing.video.*;   // import processing video library
    
    //===========================================================================
    // FINAL FIELDS:
    final int N =20;
    final int IDLE=0;
    final int LOAD_SHOTS_FROM_CAM=1;
    final int SHOW_INDIVIDUAL_SHOTS=2;
    final int SHOW_LE_EFFECT=3;  //LE=>Long exposure effect
    
    //===========================================================================
    // GLOBAL VARIABLES:
    Capture video;            // video object
    PImage frames[];          // declares frames[] array
    
    int pointer = 0;
    PImage tempImage;
    int frame_mouse = 0;
    int state=IDLE;
    int opac = 125;
    
    //===========================================================================
    // PROCESSING DEFAULT FUNCTIONS:
    
    void setup()
    {
      frameRate(10);
      size (320, 240);
      stroke(0);
      textSize(14);
      textAlign(CENTER, CENTER);
    
      video = new Capture(this, width, height);
      frames = new PImage[N];   // frames is an image array
      video.start();
    
      key='1';
      keyReleased();  //Update   sketch title
    }
    
    void draw()
    { 
      background(255);
      if (state==IDLE) {
        image(video, 0, 0);
        tint(255, 255);
        return;
      } else if (state==LOAD_SHOTS_FROM_CAM) {
        text("TAKING LONG SHOT EXPOSURE.\n\n PLEASE WAIT\n\n Acquiring: "+pointer*1.0/N+"%", width/2, height/2);
        frames[pointer++] = video.get();
        println(pointer);
        if (pointer==N) {
          key='1';
          keyReleased();
          state=IDLE;
        }
      } else if (state==SHOW_INDIVIDUAL_SHOTS) {
        background(0, 255, 0);
        frame_mouse = int(map(mouseX, 0, width, 0, N));
        println(frame_mouse);
        text("IMAGE NOT AVAILABLE=: "+frame_mouse, width/2, height/2);
    
        if (frames[frame_mouse]!=null) {
          image(frames[frame_mouse], 0, 0);
          tint(255, 255);
        }
      } else if (state==SHOW_LE_EFFECT) {
    
        //To be implemented...
        //background(150, 75, 75);
        //println("long exposure");
    
        if (pointer == N) {
          println("pointer= N");
          delay(5000);
          background(0);
          key='1';
          keyReleased();
          state=IDLE;
          //return;
        } else {
    
          int temp = pointer++;
          println("image(frames[" + temp + "], 0, 0);"); // did this to show that the message is being properly created for the loop
    
          //image(frames[pointer++], 0, 0); // ISSUE: this loop doesn't produce the same image as when written all elements one by one as shown below
          //tint(255, 100);
    
          // /*
          background(0);
          image(frames[0], 0, 0);
          tint(255, opac);
          blendMode(BLEND);
          image(frames[1], 0, 0);
          tint(255, opac);
          image(frames[2], 0, 0);
          tint(255, opac);
          image(frames[3], 0, 0);
          tint(255, opac);
          image(frames[4], 0, 0);
          tint(255, opac);
          image(frames[5], 0, 0);
          tint(255, opac);
          image(frames[6], 0, 0);
          tint(255, opac);
          image(frames[7], 0, 0);
          tint(255, opac);
          image(frames[8], 0, 0);
          tint(255, opac);
          image(frames[9], 0, 0);
          tint(255, opac);
          image(frames[10], 0, 0);
          tint(255, opac);
          image(frames[11], 0, 0);
          tint(255, opac);
          image(frames[12], 0, 0);
          tint(255, opac);
          image(frames[13], 0, 0);
          tint(255, opac);
          image(frames[14], 0, 0);
          tint(255, opac);
          image(frames[15], 0, 0);
          tint(255, opac);
          image(frames[16], 0, 0);
          tint(255, opac);
          image(frames[17], 0, 0);
          tint(255, opac);
          image(frames[18], 0, 0);
          tint(255, opac);
          image(frames[19], 0, 0);
          tint(255, opac);
          // */
    
        }
      }
    }  //End draw()
    
    void captureEvent(Capture c) {
      c.read();
    }
    
    void keyReleased() {
    
      if (key=='1') {
        state=IDLE;
        surface.setTitle("Idle");
      } else if (key=='2') {
        state=LOAD_SHOTS_FROM_CAM;
        surface.setTitle("Capturing...");
        pointer=0;
      } else if (key=='3') {
        state=SHOW_INDIVIDUAL_SHOTS;
        surface.setTitle("Photo browsing");
      } else if (key=='4') {
        state=SHOW_LE_EFFECT;
        surface.setTitle("Long exposure shot");
        pointer=0;
      }
    }
    
  • edited January 2018 Answer ✓

    This is my attempt.

    Kf

    //Processing Forum - Jan 22, 2018
    //Summary: Long exposure photo booth (Partial implementation)
    //REFERENCES: https://forum.processing.org/two/discussion/25904/simulating-long-exposure-shots-on-processing#latest
    //Done by Kf, ferkrum
    //
    //New iteration by ferkrum: implemented overlaying for the long exposure. ISSUE: should be done using a loop instead of writing the commands one by one. More info in the comments in the code below.
    
    
    
    //INSTRUCTIONS:
    //         *-- Photo booth with 4 options
    //         *-- Press 1 to show live video stream
    //         *-- Press 2 to capture long shot. After operation is completed, back to idle state
    //         *-- Press 3 to browse collected pictures based on mouse position
    //         *-- Press 4 to show long exposure: overlaying all captured images. 
    
    //===========================================================================
    // IMPORTS:
    import processing.video.*;   // import processing video library
    
    //===========================================================================
    // FINAL FIELDS:
    final int N =20;
    final int IDLE=0;
    final int LOAD_SHOTS_FROM_CAM=1;
    final int SHOW_INDIVIDUAL_SHOTS=2;
    final int SHOW_LE_EFFECT=3;  //LE=>Long exposure effect
    
    //===========================================================================
    // GLOBAL VARIABLES:
    Capture video;            // video object
    PImage frames[];          // declares frames[] array
    
    int pointer = 0;
    PImage tempImage;
    int frame_mouse = 0;
    int state=IDLE;
    int opac = 255/N;
    
    //===========================================================================
    // PROCESSING DEFAULT FUNCTIONS:
    
    void setup()
    {
      frameRate(10);
      size (320, 240);
      noStroke();
      textSize(14);
      textAlign(CENTER, CENTER);
    
      video = new Capture(this, width, height);
      frames = new PImage[N];   // frames is an image array
      video.start();
    
      key='1';
      keyReleased();  //Update   sketch title
    }
    
    void draw()
    { 
      if (state==IDLE) {
        image(video, 0, 0);
    
        return;
      } else if (state==LOAD_SHOTS_FROM_CAM) {
        //text("TAKING LONG SHOT EXPOSURE.\n\n PLEASE WAIT\n\n Acquiring: "+pointer*1.0/N+"%", width/2, height/2);
        background(0);
        tint(255, opac);
        image(video, 0, 0);
        //NEXT two lines for debugging
        //============================
        //fill(random(255), random(255), random(255), opac);
        //ellipse(random(width), random(height), 50, 30);
        frames[pointer++] = get();
        println(pointer);
    
        if (pointer==N) {    
          noTint();
          key='1';  //Set to idle
          keyReleased();
        }
      } else if (state==SHOW_INDIVIDUAL_SHOTS) {
        background(0, 255, 0);
        frame_mouse = int(map(mouseX, 0, width, 0, N));
        println(frame_mouse);
        text("IMAGE NOT AVAILABLE=: "+frame_mouse, width/2, height/2);
    
        if (frames[frame_mouse]!=null) {
          PImage tmp=frames[frame_mouse].get();
    
          image(normImage(frames[frame_mouse], N), 0, 0);
          //filter(OPAQUE);
        }
      } else if (state==SHOW_LE_EFFECT) {
    
        if (pointer == N) {
          println("pointer= N");
          noLoop(); //Freezes current sketch. To continue, press 1 to set IDLE mode
          surface.setTitle("On standby");
        } else {
    
          int temp = pointer++;
          println("image(frames[" + temp + "], 0, 0);"); // did this to show that the message is being properly created for the loop
    
          //image(frames[pointer++], 0, 0); // ISSUE: this loop doesn't produce the same image as when written all elements one by one as shown below
          //tint(255, 100);
    
    
          background(0);
          blendMode(ADD);
          for (int i=0; i<N; i++) {
            image(frames[i], 0, 0);
          }
        }
      }
    }  //End draw()
    
    void captureEvent(Capture c) {
      c.read();
    }
    
    void keyReleased() {
    
      loop();
      blendMode(BLEND); //To undo blend mode changed in SHOW_LE_EFFECT
    
      if (key=='1') {
        state=IDLE;
        surface.setTitle("Idle");
      } else if (key=='2') {
        state=LOAD_SHOTS_FROM_CAM;
        surface.setTitle("Capturing...");
        pointer=0;
      } else if (key=='3') {
        state=SHOW_INDIVIDUAL_SHOTS;
        surface.setTitle("Photo browsing");
      } else if (key=='4') {
        state=SHOW_LE_EFFECT;
        surface.setTitle("Long exposure shot");
        pointer=0;
      }
    }
    
    /**
     * This function amplifies each ARGB channel by the norm value.
     * It has the opposite effect of applying tint to an image, so
     * only use it on an image that has been tinted. If an image is
     * tinted (alpha channel) with opac = 255/N, then this function
     * multiplies each color channel by N to recover the original
     * color. The resulting image will always show some degradation
     * compared to the original image.
     */
    
    PImage normImage(PImage img, float normVal) {
    
      PImage ret=img.get();
      ret.loadPixels();
      for (int i=0; i<ret.width*ret.height; i++) {
        int red=ret.pixels[i] >>16 & 0xff;
        int green=ret.pixels[i] >>8 & 0xff;
        int blue=ret.pixels[i] & 0xff;
        int a=ret.pixels[i] >>24 & 0xff;
        ret.pixels[i] =color(red*normVal, green*normVal, blue*normVal, a*normVal);
      }
      ret.updatePixels();
    
      return (ret);
    }
    
  • Hello Kf! Thanks for your attempt! Image quality is a key feature of what I'm trying to do... I see what you mean about the image degradation. I'll try to keep using the same technique as before; keeping it to 10 shots is still working fine.

    I'm focusing on changing screens right now, moving from one stage to the other. Will keep you posted as I move on.

    Thank you so much for your help! ^:)^

    F.
