Panorama using a webcam

Hello, first time here!

I'm trying to stitch together frames from a webcam. I want to make a mosaic (or panorama) image, as shown in these videos:

https://youtu.be/aXYnAfZ4bD4?t=15s

https://youtube.com/watch?v=QapSxGnUWtY&t

I don't need to correct for lens distortion, rotation, or perspective. I only need to move the image in the XY plane, as in the first video. Even moving only along X is OK for now.

So far I have been able to stitch 2 static images, but when I try to do this with a camera, something goes wrong :(
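For what it's worth, translation-only stitching comes down to estimating one global (dx, dy) shift per frame from the matched feature points. A minimal plain-Java sketch of that idea, using made-up match data (the class and helper names are hypothetical, not from BoofCV):

```java
// Sketch of the translation-only idea: average the displacement of matched
// feature points to get a global (dx, dy) shift between two frames.
public class ShiftEstimate {
    // each match: {x0, y0, x1, y1} — a feature seen in frame 0 and frame 1
    static double[] estimateShift(double[][] matches) {
        double dx = 0, dy = 0;
        for (double[] m : matches) {
            dx += m[2] - m[0];   // signed difference, not abs()
            dy += m[3] - m[1];
        }
        return new double[]{dx / matches.length, dy / matches.length};
    }

    public static void main(String[] args) {
        double[][] matches = {
            {10, 20, 25, 21},   // feature moved ~+15 px in x
            {40, 50, 55, 49},
            {70, 80, 86, 80}
        };
        double[] shift = estimateShift(matches);
        System.out.printf("dx=%.1f dy=%.1f%n", shift[0], shift[1]);
    }
}
```

Note that using the signed difference (p1 - p0) rather than abs() keeps the direction of motion, which matters when deciding where to paste the next frame.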

This is my code:

    import boofcv.processing.*;
    import boofcv.struct.image.*;
    import boofcv.struct.feature.*;
    import georegression.struct.point.*;
    import java.util.*;
    import processing.video.*;

    Capture video;

    PImage prevFrame;
    PImage current;

    int leu = 0;
    int c = 1;
    float avgX = 0;
    float avgY = 0;

    List<Point2D_F64> locations0, locations1;      // feature locations
    List<AssociatedIndex> matches;      // which features are matched together

    void setup() {

      size(600, 240);
      video = new Capture(this, 320, 240, 30);
      video.start();

      prevFrame = createImage(video.width, video.height, RGB);
      current = createImage(video.width, video.height, RGB);
    }
    void detectar() {
      SimpleDetectDescribePoint ddp = Boof.detectSurf(true, ImageDataType.F32);  //use SURF
      SimpleAssociateDescription assoc = Boof.associateGreedy(ddp, true);  

      // Find the features
      ddp.process(prevFrame);
      locations0 = ddp.getLocations();
      List<TupleDesc> descs0 = ddp.getDescriptions();

      ddp.process(current);
      locations1 = ddp.getLocations();
      List<TupleDesc> descs1 = ddp.getDescriptions();

      // associate the points
      assoc.associate(descs0, descs1);
      matches = assoc.getMatches();
    }

    void draw() {

      loadPixels();
      video.loadPixels();
      prevFrame.loadPixels();

      detectar();
      int count = 0;
      for ( AssociatedIndex i : matches ) {     
        if ( count++ % 20 != 0 )
          continue;
        else if ( count > 100)
        {       
          break;
        }

        Point2D_F64 p0 = locations0.get(i.src); 
        Point2D_F64 p1 = locations1.get(i.dst); 
        float diferencaX = abs((float) p0.x - (float)p1.x);
        float diferencaY = abs((float) p0.y - (float)p1.y);    

        if (leu < 30) {

          if (diferencaY < 15) {
            avgX = avgX + (diferencaX - avgX)/c;
            avgY = avgY + (diferencaY - avgY)/c;
            c++;  
          }
        }

        if ( leu == 29) {
          if (avgX > 300) {    
            translacao();
          }
        }
        leu++;
      }
    }

    void translacao() {
      image( prevFrame, 0, 0 );
      image( current, avgX, -avgY);
      updatePixels();
      println(avgX);
      c = leu = 0;
      avgX = avgY = 0;
    }

    void captureEvent(Capture video) {
      // Save previous frame for motion detection!!
      // Before we read the new frame, we always save the previous frame for comparison!
      if (avgX > 300) {
        prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); 
        prevFrame.updatePixels();  // Read image from the camera
      }
      video.read();
      current.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); 
      current.updatePixels();
    }

I appreciate any help! Thank you!

Answers

  • Few notes:

    1. You don't need loadPixels() unless you want to access the pixel array of each image; only then do you call updatePixels() before displaying the changes with either set() or image(). In other words, you can safely remove any reference to loadPixels/updatePixels. Further info:
      https://processing.org/reference/loadPixels_.html https://processing.org/reference/updatePixels_.html

    2. You never call translacao() which loads your images.

    3. Consider checking the following examples from the video library:

      examples >> Capture >> SlitScan.pde
      examples >> Capture >> Spatiotemporal.pde
      examples >> Capture >> TimeDisplacement.pde

    Kf

  • Thank you for the answer, kfrajer!

    1. I didn't understand that very well. Don't I need loadPixels to find the points of interest? In my code I need to measure the distance between the points of interest, so I think I need the pixels for that.

    2. I do call translacao(), on line 80. My intention is to draw the image on screen only when it has moved a minimum distance.

    3. I have used SlitScan as a reference, but I will look at the other examples too.

    Thank you for your time!

    https://processing.org/reference/loadPixels_.html

    You need loadPixels() when you are accessing the pixel array of your PImage object, like this: img.pixels[idx], where idx is any of the pixels in your width x height image. However, from a quick look at your code, you are using img.get(), so as you can see, you are not accessing the pixel array. Please refer to the reference link, as it is explained there.

    Kf
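If you do end up reading the pixel array, the index of the pixel at (x, y) in a width x height PImage is y*width + x (row-major order). A small plain-Java check of that mapping (class name hypothetical):

```java
// Demonstrates the row-major index mapping used by PImage.pixels[]:
// the pixel at (x, y) lives at index y*width + x.
public class PixelIndex {
    static int index(int x, int y, int width) {
        return y * width + x;   // row-major order
    }

    public static void main(String[] args) {
        int width = 320;
        System.out.println(index(0, 0, width));    // first pixel: 0
        System.out.println(index(319, 0, width));  // end of first row: 319
        System.out.println(index(0, 1, width));    // start of second row: 320
    }
}
```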

  • Ok, I understood and made the modifications.

    Now the code is working, but the matches aren't very good and the program is slow.

    That's my new code:

    import boofcv.processing.*;
    import boofcv.struct.image.*;
    import boofcv.struct.feature.*;
    import georegression.struct.point.*;
    import java.util.*;
    import processing.video.*;
    
    Capture video;
    
    PImage prevFrame;
    PImage current;
    
    int leu = 0;
    int c = 1;
    float avgX = 0;
    float avgY = 0;
    boolean leitura1 = false;
    float armazena1 = 0;
    
    
    List<Point2D_F64> locations0, locations1;      // feature locations
    List<AssociatedIndex> matches;      // which features are matched together
    
    
    void setup() {
    
      size(1600, 800);
      video = new Capture(this, 1024, 768, "SparkoCam Video", 30);
      video.start();
    
      prevFrame = createImage(624, 668, RGB);
      current = createImage(624, 668, RGB);
    }
    void detectar() {
      SimpleDetectDescribePoint ddp = Boof.detectSurf(true, ImageDataType.F32);  // use SURF
      SimpleAssociateDescription assoc = Boof.associateGreedy(ddp, true);  // greedy association
    
      // Find the features
      ddp.process(prevFrame);
      locations0 = ddp.getLocations();
      List<TupleDesc> descs0 = ddp.getDescriptions();
    
      ddp.process(current);
      locations1 = ddp.getLocations();
      List<TupleDesc> descs1 = ddp.getDescriptions();
    
      // associate the points
      assoc.associate(descs0, descs1);
      matches = assoc.getMatches();
    }
    
    void draw() {
      detectar();
      int count = 0;
      for ( AssociatedIndex i : matches  ) {     
        if ( count++ % 30 != 0 )
          continue;
        else if ( count > 700)
        {       
          break;
        }
        Point2D_F64 p0 = locations0.get(i.src); 
        Point2D_F64 p1 = locations1.get(i.dst); 
        float diferencaX = (float) p1.x - (float)p0.x;
        float diferencaY = (float) p1.y - (float)p0.y;    
    
        if (leu < 20) {
    
          if (diferencaY < 15) {
            avgX = avgX + (diferencaX - avgX)/c;
            avgY = avgY + (diferencaY - avgY)/c;
            //image(prevFrame,0,0);
            //image(current,320,0);
            c++;
          }
        }
    
        if ( leu == 19) {    
          translacao();
        }
        leu++;
      }
    }
    
    
    void translacao() {
      if (abs(avgX) > 20) {
        armazena1 = armazena1 - avgX;
        image( current, 640+armazena1, 180);
        prevFrame.copy(video, 200, 300, 824, 368, 0, 0, 320, 240);
      }
    
      println(avgX);
      c=1;
      leu = 0;
      avgX = avgY = 0;
    }
    
    void captureEvent(Capture video) {
    
      video.read();
      current.copy(video, 200, 300, 824, 368, 0, 0, 320, 240);
    
      if ( leitura1 == false) {
        prevFrame.copy(video, 200, 300, 824, 368, 0, 0, 320, 240); 
        leitura1 = true;
      }
    }
    
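One common way to make the per-frame shift estimate more robust than a running mean is to take the median of the per-match displacements, since even a few bad SURF matches can skew an average badly. A plain-Java sketch of the idea (class and helper names hypothetical, not from BoofCV):

```java
import java.util.Arrays;

// Median of per-match displacements: a robust alternative to the running
// mean, since a single wildly wrong match barely affects the result.
public class MedianShift {
    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (n % 2 == 1) ? sorted[n / 2]
                            : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // per-match x displacements, with one outlier from a bad match
        double[] dxs = {15.0, 14.5, 15.5, 15.2, 180.0};
        System.out.println(median(dxs));  // 15.2 — the outlier is ignored
    }
}
```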
  • I ran your code and nothing happened. Can you describe what your code does? Are you trying to stitch the input camera frames side by side, or are you actually stitching frames by color matching?

    Kf

  • Did you change line 28 for your webcam? I ask because I attached a DSLR camera to a microscope and made a virtual webcam with the SparkoCam software.

    I'm trying to find keypoints in a sample and stitch each frame with the next image. I need to make a panorama from microscope images.

    What I want to do is shown in this video:
