Change pixel color to fill in an irregular shape

edited November 5 in Library Questions

Hello, I am using the BlobDetection library and want to use the outlines created by the edge detection to form shapes that I can fill in. The solution I have come up with so far is to draw vertical lines down from the pixels that make up the contour. However, those lines run all the way to the bottom of the screen: if you raise your arm, for example, the line will not stop at the lower edge of your arm where the contour is, but continue to the bottom of the screen. So I am looking for a way to fill the contours with color; ideally I will then remove the contour line so that only a shadow remains.

(Sorry if the solutions I came up with are rudimentary; I'm quite new at programming!) (Please note: the image appears at its best when you are in front of a white background in a well-lit space. To adjust the lighting threshold, change the value in the theBlobDetection.setThreshold(.6f); call in setup().)

here is my code so far:

import processing.video.*;
import blobDetection.*;
color black = color(200, 255, 250); // note: despite the name, this is a pale cyan, not black
Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;

void setup()
{  

  fullScreen();
  cam = new Capture(this, 40*4, 30*4, 15);
  cam.start();      


  img = new PImage(300, 60); 
  theBlobDetection = new BlobDetection(img.width, img.height);
  theBlobDetection.setPosDiscrimination(true);
  theBlobDetection.setThreshold(.6f); // will detect bright areas whose luminosity > 0.6f
}

void captureEvent(Capture cam)
{
  cam.read();
  newFrame = true;
}

void draw() {
  if (newFrame) {
    background(50);
    newFrame = false; // reset so we wait for the next camera frame
    img.copy(cam, 0, 0, cam.width, cam.height, 
      0, 0, img.width, img.height);

    // scatter white dots over the background
    frameRate(100); // request the target frame rate once, not 200000 times
    fill(255);
    for (int i = 0; i < 200000; i++) {
      text(".", random(width), random(height));
    }
    if (frameCount % 10 == 0) println(frameRate);
    fastblur(img, 2);
    theBlobDetection.computeBlobs(img.pixels);
    drawBlobsAndEdges(true, true);
  }
}
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
  Blob b;
  EdgeVertex eA, eB;
  for (int n = 0; n < theBlobDetection.getBlobNb(); n++)
  {
    b = theBlobDetection.getBlob(n);
    if (b != null)
    {
      if (drawEdges)
      {
        for (int m = 0; m < b.getEdgeNb(); m++)
        { 
          eA = b.getEdgeVertexA(m);
          eB = b.getEdgeVertexB(m);
          if (eA != null && eB != null)
          {
            // inner lines (offset copy of the contour)
            stroke(255, 0, 0);
            strokeWeight(2);
            line(eA.x*width-10, eA.y*height-10, 
              eB.x*width-10, eB.y*height-10);

            // contour
            stroke(0);
            strokeWeight(14);
            line(eA.x*width, eA.y*height, 
              eB.x*width, eB.y*height);

            // fill attempt: a vertical line dropped from each edge point;
            // this is what runs past the blob to the bottom of the screen
            strokeWeight(10);
            strokeCap(ROUND);
            stroke(0);
            line(eA.x*width, eB.y*height+10, eA.x*width, eB.y*height + height);
          }
        }
      }
      if (drawBlobs)
      {
        strokeWeight(10);
        stroke(255);
      }
    }
  }
}


// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann 
// <http://incubator.quasimondo.com>
// ==================================================
void fastblur(PImage img, int radius)
{
  if (radius<1) {
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
  int vmin[] = new int[max(w, h)];
  int vmax[] = new int[max(w, h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0; i<256*div; i++) {
    dv[i]=(i/div);
  }

  yw=yi=0;

  for (y=0; y<h; y++) {
    rsum=gsum=bsum=0;
    for (i=-radius; i<=radius; i++) {
      p=pix[yi+min(wm, max(i, 0))];
      rsum+=(p & 0xff0000)>>16;
      gsum+=(p & 0x00ff00)>>8;
      bsum+= p & 0x0000ff;
    }
    for (x=0; x<w; x++) {

      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];

      if (y==0) {
        vmin[x]=min(x+radius+1, wm);
        vmax[x]=max(x-radius, 0);
      }
      p1=pix[yw+vmin[x]];
      p2=pix[yw+vmax[x]];

      rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
      gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
      bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  for (x=0; x<w; x++) {
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for (i=-radius; i<=radius; i++) {
      yi=max(0, yp)+x;
      rsum+=r[yi];
      gsum+=g[yi];
      bsum+=b[yi];
      yp+=w;
    }
    yi=x;
    for (y=0; y<h; y++) {
      pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
      if (x==0) {
        vmin[y]=min(y+radius+1, hm)*w;
        vmax[y]=max(y-radius, 0)*w;
      }
      p1=x+vmin[y];
      p2=x+vmax[y];

      rsum+=r[p1]-r[p2];
      gsum+=g[p1]-g[p2];
      bsum+=b[p1]-b[p2];

      yi+=w;
    }
  }
}

Answers

  • edit post, highlight code, press ctrl-o to format.

  • quite new at javascript!

    that's not javascript.

  • As @koogs indicated, that is Processing (Java), not p5.js or Processing.js (JavaScript)

    Doesn't the blob already tell you all the pixels in the blob, before you take its contour? Could you use masking? It depends what you are trying to "fill" with -- a color, an image, a drawing...?

  • edited November 6

    @jeremydouglass Where does it tell you all the pixels in the blob? If I could find that I could try masking. Thanks.

    I am trying to fill the blobs with color.

  • edited November 6

    Apologies -- this is not a Kinect-based example, so there isn't a raw pixels step.

    If you have the center of mass and the contour then you could use a four-way floodfill or a scanline floodfill. Search the forum for "floodfill"; there are several examples with full code.

    For example:
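    A minimal four-way flood fill could look like the plain-Java sketch below, which works on a Processing-style pixels[] array. The method name floodFill, the stack-based traversal, and the toy 0/1/2 pixel values are my own illustration, not part of the BlobDetection library; in the sketch above you would seed it near the blob's center, e.g. at (int)(b.x*img.width), (int)(b.y*img.height).

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FloodFillSketch {
  // Fill the connected region containing (sx, sy) that has the same
  // color as the seed pixel, replacing that color with fillColor.
  // An explicit stack avoids deep recursion on large blobs.
  static void floodFill(int[] pixels, int w, int h, int sx, int sy, int fillColor) {
    int target = pixels[sx + sy * w];
    if (target == fillColor) return; // nothing to do
    Deque<int[]> stack = new ArrayDeque<>();
    stack.push(new int[] { sx, sy });
    while (!stack.isEmpty()) {
      int[] p = stack.pop();
      int x = p[0], y = p[1];
      if (x < 0 || x >= w || y < 0 || y >= h) continue; // off the image
      int i = x + y * w;
      if (pixels[i] != target) continue; // contour or already filled
      pixels[i] = fillColor;
      stack.push(new int[] { x + 1, y }); // spread in all four directions
      stack.push(new int[] { x - 1, y });
      stack.push(new int[] { x, y + 1 });
      stack.push(new int[] { x, y - 1 });
    }
  }

  public static void main(String[] args) {
    // 0 = background, 1 = contour; fill the inside of a 5x5 outline with 2
    int w = 5, h = 5;
    int[] px = {
      1, 1, 1, 1, 1,
      1, 0, 0, 0, 1,
      1, 0, 0, 0, 1,
      1, 0, 0, 0, 1,
      1, 1, 1, 1, 1,
    };
    floodFill(px, w, h, 2, 2, 2);
    System.out.println(px[1 + 1*w] + " " + px[2 + 2*w] + " " + px[0]); // → 2 2 1
  }
}
```

    The important detail is that the contour pixels themselves act as the barrier, so the fill stops exactly at the blob's edge instead of running to the bottom of the screen.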

  • @jeremydouglass Thank you, I will try that.

  • Seems like you could rewrite your drawBlobsAndEdges(true, true) function to draw a shape using beginShape() and vertex(x, y), and fill that shape? I recoded some blob detection functions that way once and even used beginContour() to include negative shapes.
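    A sketch of what that might look like, untested, assuming getEdgeVertexA() returns the vertices in contour order (the function name drawFilledBlobs is mine):

```java
// Replace the line-by-line edge drawing with one filled polygon per blob.
void drawFilledBlobs() {
  noStroke();
  fill(0); // the "shadow" color
  for (int n = 0; n < theBlobDetection.getBlobNb(); n++) {
    Blob b = theBlobDetection.getBlob(n);
    if (b == null) continue;
    beginShape();
    for (int m = 0; m < b.getEdgeNb(); m++) {
      EdgeVertex eA = b.getEdgeVertexA(m);
      if (eA != null) vertex(eA.x * width, eA.y * height);
    }
    endShape(CLOSE); // CLOSE joins the last vertex back to the first
  }
}
```

    Because the shape is filled rather than outlined, there is no contour line left to remove afterwards.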

  • @jeremydouglass I can only seem to be able to make the pixels trace the perimeter...I can't locate the center of the blobs.

  • edited November 9

    When I look in the /src of the BlobDetection library, under the first file, Blob.java ....

    package blobDetection;
    //======================================
    //class Blob
    //======================================
    public class Blob
    {
        public BlobDetection parent;
        public int      id;
        public float    x,y;   // position of its center
        public float    w,h;   // width & height
        public float    xMin, xMax, yMin, yMax;
    

    ...it looks like every Blob has a public Blob.x / Blob.y that gives the position of its center?

  • @Bird's idea is also a very good one -- perhaps you could load Blob.lines[] into vertices?

  • edited November 9

    @jeremydouglass oh I see ! I didn't think to look there. I will try to use that for a floodFill! thanks.

  • I can't seem to use the public variables x, y, w, and h from the BlobDetection library. I've looked up how to do this and I can't quite get it right. What would be the proper syntax to use them in this code?

    I tried Blobx = new Blob(); as well as a couple of others. Even after reading documentation about how to access public variables I am left a bit confused.

  • edited November 9

    See how your code contains a Blob b? Every Blob object has public fields x and y. Once you have created and updated Blob b, you access those fields like this:

    println( b.x );
    println( b.y );
    

    Blobx is definitely not right -- I would strongly recommend reading the Objects & Classes tutorial so that you don't have to guess wildly about what will work. It will save you a lot of time in the long run.

  • edited November 12

    @jeremydouglass that tutorial was super helpful thank you. So I understand now how to use b.x and b.y to locate the center of the blob. Now I am trying to fill the blob using point( ).

    As I understand it, eA.x and eA.y are the edge points. So I am trying to write that if i is greater than the center point and less than the edge, draw points.

    But maybe the < > are not the right operators to use?

    for (int i = 1; (i > 0) && (i < eA.x); i++) {
      for (int j = 1; (j > 0) && (j < eA.y); j++) {
        stroke(255, 0, 0);
        strokeWeight(20);
        point(i, j);
      }
    }
    
  • edited November 13

    Re:

    As I understand it, eA.x and eA.y are the edge points

    That is not what the code I linked to and posted said in the comments!

    public float    x,y;   // position of its center
    public float    w,h;   // width & height
    public float    xMin, xMax, yMin, yMax;
    

    From this, I would guess (untested) that eA.x is the center and eA.x + eA.w/2 is the right edge, and eA.x - eA.w/2 is the left edge. eA.xMax / eA.xMin might also be the right and left edges, I haven't checked.
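    If those guesses hold, the bounding box alone already gives a rough rectangular fill (an untested sketch; the library's blob coordinates are normalized 0..1, so they are scaled up by width and height):

```java
// Rough fill using only the blob's bounding box, no contour needed.
noStroke();
fill(0);
rect(b.xMin * width, b.yMin * height,
     (b.xMax - b.xMin) * width,
     (b.yMax - b.yMin) * height);
```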

  • edited November 13

    I've tried to use Daniel Shiffman's videos on computer vision, in particular a technique called clamping. You are supposed to compare the distance between the edge and the center and the other edge and the center. Here is the link to the first video... https://www.youtube.com/watch?v=QLHMtE5XsMs

    here is a bit of the code I am writing to attempt to use this technique in order to fill the blob with color.

    float clampedX = max(min(x, eA.x), xMin);
    float clampedY = max(min(y, eA.y), yMin);
    float d = distSq(b.x, b.y, clampedX, clampedY);

    if (d < distThreshold*distThreshold) {
      return true;
    } else {
      return false;
    }
    

    if anyone has some better idea of how to do this I would love to hear about it!

  • edited November 13

    Another issue with the BlobDetection library is the image is flipped. I want it to mirror rather than flip. I tried this code :

    scale(-1, 1);
    image(cam, 0, 0, -width, height);
    

    the video output flipped, but the blobs would no longer appear, as if this video plays over everything else. Maybe I put it in the wrong order? Or maybe there is another bit of code to use?

  • edited November 13

    I think I may be close with this :

    loadPixels();
    color R = color(255, 0, 0);
    for (int x = 0; x < width; x++) {
      for (int y = 0; y < height; y++) {
        if (pixels[x + y*width] == R) {
          pixels[x + y*width] = R;
        } else if (x == R) {
          //pixels[x + y*width] =
        }
      }
    }
    updatePixels();

    What I want the code to do is : if a pixel is red, make the pixel to its right(or beneath it) red also, but if it encounters another red pixel after that, stop changing the pixel's color.
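    The rule described above ("if a pixel is red, make the pixel to its right red too, until another red pixel is hit") is essentially a per-row scanline fill. Here is a plain-Java sketch of that rule; the method name fillRows, the even-odd toggling, and the toy 0/1/2 values are my own, not from the thread. Note that it toggles inside/outside once per run of boundary pixels, because a strokeWeight(14) contour is many pixels thick:

```java
import java.util.Arrays;

public class ScanlineFillSketch {
  // For each row, paint everything between the 1st and 2nd run of
  // boundary pixels, the 3rd and 4th, and so on (even-odd rule).
  static void fillRows(int[] pixels, int w, int h, int boundary, int fill) {
    for (int y = 0; y < h; y++) {
      boolean inside = false;
      boolean inBoundary = false;
      for (int x = 0; x < w; x++) {
        int i = x + y * w;
        if (pixels[i] == boundary) {
          if (!inBoundary) inside = !inside; // toggle once per boundary run
          inBoundary = true;
        } else {
          inBoundary = false;
          if (inside) pixels[i] = fill; // between two boundary runs: paint
        }
      }
    }
  }

  public static void main(String[] args) {
    // one row with a two-pixel-thick left boundary: 1 = red, 0 = background
    int[] row = { 0, 1, 1, 0, 0, 1, 0 };
    fillRows(row, 7, 1, 1, 2);
    System.out.println(Arrays.toString(row)); // → [0, 1, 1, 2, 2, 1, 0]
  }
}
```

    One caveat: a concave contour can cross a row an odd number of times, which makes the even-odd rule leak; a flood fill from the blob center is usually more robust.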

  • Are you placing your scale() call above between pushMatrix/popMatrix() calls?

    https://processing.org/reference/pushMatrix_.html

    Kf

  • edited November 14

    @kfrajer I have tried that as well, in several different places. But I think the problem is that the sketch works on a copy of the video, and the blobs are detected on that copy; so the only thing that mirrors is the video drawn behind, while the layer the blobs are drawn on does not flip. Here is the code:

          pushMatrix();
          scale(-1,1);
          image(cam, 0,0,-width, height);
          popMatrix();
    

    I've tried putting it in setup(), in captureEvent() and in draw()...
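    One thing to try (an untested sketch): mirror everything in one place by applying a single transform at the top of draw(), so that the camera image and the blob edges are both drawn in the same mirrored coordinate system:

```java
void draw() {
  if (newFrame) {
    newFrame = false;
    background(50);
    pushMatrix();
    translate(width, 0); // move the origin to the right edge...
    scale(-1, 1);        // ...then flip horizontally: a mirror
    image(cam, 0, 0, width, height);
    img.copy(cam, 0, 0, cam.width, cam.height, 0, 0, img.width, img.height);
    fastblur(img, 2);
    theBlobDetection.computeBlobs(img.pixels);
    drawBlobsAndEdges(true, true); // inside the same mirrored transform
    popMatrix();
  }
}
```

    Since drawBlobsAndEdges() runs before popMatrix(), its line() calls are mirrored along with the video, so the two layers stay aligned.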
