   Author  Topic: Image Morphing  (Read 6805 times)
winston_smith


Image Morphing
« on: Aug 7th, 2004, 1:27am »

After a fairly extensive search, I have yet to find any significant examples of image morphing in java/p5.  
 
The laundry list of issues:
1) I need a way to accurately find facial features w/edge detection. I'm using the JMyron motion tracking library (http://webcamxtra.sourceforge.net/) to do this. Should I use something else? Simpler edge detection (like Reas' highpass filter example) and a custom method for identifying facial features, perhaps?
 
By 'hijacking' single frames from the webcam, and telling the JMyron object to return an array of points describing the bounding boxes for each object, I hope to be able to retrieve boxes for only the eyes, nose, and mouth using some sort of template...  by doing this, I should get bounding box points that correlate to each other across two different facial images.  
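Something along these lines is what I have in mind for the template step. The region fractions are just guesses for a roughly centered face, and pickBox is a made-up helper of mine, not part of JMyron; it assumes globBoxes() hands back {x, y, width, height} entries.
 
Code:

// Hypothetical helper: pick the glob box whose center falls inside a
// template region (given as fractions of the image size).
// Returns null if no glob lands in that region.
int[] pickBox( int[][] boxes, float rx, float ry, float rw, float rh,
               int imgW, int imgH ) {
  for ( int i = 0; i < boxes.length; i++ ) {
    float cx = boxes[i][0] + boxes[i][2] * 0.5f;  // box center x
    float cy = boxes[i][1] + boxes[i][3] * 0.5f;  // box center y
    if ( cx > rx * imgW && cx < (rx + rw) * imgW &&
         cy > ry * imgH && cy < (ry + rh) * imgH ) return boxes[i];
  }
  return null;
}

// rough template regions for a centered face:
// int[] leftEye  = pickBox(features, 0.15f, 0.25f, 0.30f, 0.25f, w, h);
// int[] rightEye = pickBox(features, 0.55f, 0.25f, 0.30f, 0.25f, w, h);
// int[] mouth    = pickBox(features, 0.30f, 0.60f, 0.40f, 0.25f, w, h);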
 
With these points, I need to 2) warp the second image such that each point is mapped to its corresponding point in the first image (actually, I may have to warp both images such that the bounding box areas are sort of averaged together to avoid excess distortion).  
 
For this I am completely ignorant of any appropriate methods... I've seen plenty of mathematical examples that are over my head. Looking around for available image processing libraries yielded the javax.media.jai package (http://java.sun.com/products/java-media/jai/forDevelopers/jai-apidocs/javax/media/jai/package-summary.html), but I'm unsure of exactly how to use it correctly (or how to use it inside Processing).
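The crudest thing I can picture is treating each pair of corresponding bounding boxes as a plain axis-aligned stretch and resampling with inverse mapping: for every destination pixel inside the target box, look up where it came from in the source box. Just a sketch of the idea (nearest-neighbour sampling, BImage as used elsewhere in this thread); a real morph would probably want a mesh or field warp over all the points instead.
 
Code:

// Sketch: warp the region srcBox of 'src' so that it fills dstBox in the
// returned image, copying everything outside dstBox straight over.
// Boxes are {x, y, width, height}.
BImage warpBox( BImage src, int[] srcBox, int[] dstBox ) {
  BImage out = new BImage( src.width, src.height );
  // start from a straight copy so untouched areas survive
  for ( int i = 0; i < src.pixels.length; i++ ) out.pixels[i] = src.pixels[i];

  for ( int y = dstBox[1]; y < dstBox[1] + dstBox[3]; y++ ) {
    for ( int x = dstBox[0]; x < dstBox[0] + dstBox[2]; x++ ) {
      // where does this destination pixel come from in the source box?
      float u = (x - dstBox[0]) / (float) dstBox[2];
      float v = (y - dstBox[1]) / (float) dstBox[3];
      int sx = srcBox[0] + (int) (u * srcBox[2]);
      int sy = srcBox[1] + (int) (v * srcBox[3]);
      if ( sx >= 0 && sx < src.width && sy >= 0 && sy < src.height &&
           x >= 0 && x < out.width && y >= 0 && y < out.height ) {
        out.pixels[y * out.width + x] = src.pixels[sy * src.width + sx];
      }
    }
  }
  return out;
}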
 
Once the features have been detected and the images warped to match each other, I need to 3) blend them together correctly. I don't know if this is a stupid question or not, but what would be the best method for compositing the two images? Just a 50% blend? After a few iterations of this, information from the first pictures gets wiped out where there's too much white/black, and I'm unsure if it really matters or not... (oh yeah, images are to be kept black and white).
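One idea I'm toying with for the wipe-out problem: keep a count of how many portraits have already gone into the master and weight each newcomer by 1/(count+1), so the master stays a true average of everything submitted and the early faces never vanish completely. Roughly like this (grayscale only, since the images stay black and white; foldIn is just a name I made up):
 
Code:

// Sketch: fold a new image into the master as a running average.
// 'count' is how many images are already in the master, so the newcomer
// gets weight 1/(count+1) and the master keeps weight count/(count+1).
BImage foldIn( BImage master, BImage next, int count ) {
  BImage out = new BImage( master.width, master.height );
  float w = 1.0f / (count + 1);   // weight of the newcomer
  for ( int i = 0; i < master.pixels.length; i++ ) {
    float b = (1 - w) * brightness( master.pixels[i] )
            + w * brightness( next.pixels[i] );
    out.pixels[i] = color( b );   // gray value
  }
  return out;
}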
 
<whew>
 
In the end, the sketch should allow the user to take a self-portrait from the webcam, automatically find his/her facial features, match them to those of a referential blend (master) image, warp/composite the images, then finally store the result to the master image. With each user submission, the resulting composite human will evolve...
 
If anyone knows of any good resources/examples for any of the above, I'd be stoked to know about them. This is a research project associated with one of my instructor's long-term collaborative projects, and I'd REALLY like to do a good job (explicit credit for anyone who has useful code suggestions!).
 
Cheers for reading this entire post!
winston
 
v3ga

Re: Image Morphing
« Reply #1 on: Aug 7th, 2004, 1:49am »

Hello,
 concerning 1), you can take a look at http://processing.org/discourse/yabb/board_VideoCamera_action_display_num_1088589126.html where I've posted an example of edge detection. Maybe it could help you.
 

http://v3ga.net
winston_smith


Re: Image Morphing
« Reply #2 on: Aug 7th, 2004, 2:43am »

Snippets of code (you will need the JMyron package, linked above)...
 
Created my own BlendImage class as follows. This class represents the final composite image and provides its own methods for adding new images to it. I realize this is rather unnecessary, but hey, it's working alright thus far (I fear change). In time, this whole thing will be reorganized for efficiency and such....
 
Code:

class BlendImage {
   
  public BImage img;
   
  public BlendImage() { img = new BImage(); }
  public BlendImage(BImage img) { this.img = img; }
   
  void add( BImage nextImage ) {
    // blend the warped image into the internal 'blend image' (img).
    img = blend( warp( nextImage ) );
  }
   
  // warp nextImage so its feature points line up with those of 'img'
  BImage warp( BImage nextImage ) {
    BImage warpImage = new BImage(nextImage.width, nextImage.height);
    /* Warping method?? */
    // call find_features on img and nextImage to calculate grid warping...
    // build warpImage from the results (using the JAI library?)
    return nextImage; // placeholder until warping works (should return warpImage)
  }  
   
   
  BImage blend( BImage nextImage ) {
    BImage temp = new BImage(nextImage.width, nextImage.height);
    // 50/50 blend, channel by channel
    // (note: color(h,s,b) reads these as HSB only if colorMode(HSB) is set)
    for ( int i = 0; i < nextImage.pixels.length; i++ ) {
      float h = 0.5f * (       hue(img.pixels[i]) +        hue(nextImage.pixels[i]));
      float s = 0.5f * (saturation(img.pixels[i]) + saturation(nextImage.pixels[i]));
      float b = 0.5f * (brightness(img.pixels[i]) + brightness(nextImage.pixels[i]));
      temp.pixels[i] = color(h, s, b);
    }
    return temp;
  }
   
  // return bounding box points for facial features  
  int[][] find_features( BImage b ) {
    JMyron m = new JMyron();
    m.start(b.width, b.height);
    // hijack the library for use on image 'b'
    m.hijack( b.width, b.height, b.pixels );  
    m.findGlobs(1); // detect edges (globs)
    // track the color black, with threshold of 128
    m.trackColor( 0, 0, 0, 128 );
    // return array of rect coordinates for each glob
    // format: features[glob number][4]  
    // -> [glob num] { x, y, width, height }
    int[][] features = m.globBoxes();  
    m.stop();
    return features;    
  }  
}
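 
A bare-bones way to exercise the class, if I've remembered the alpha calls right (the file names are placeholders; loadImage() hands back a BImage):
 
Code:

BlendImage master;

void setup() {
  size(320, 240);
  // seed the master with one portrait, then fold a second one in
  master = new BlendImage( loadImage("face_a.jpg") );
  master.add( loadImage("face_b.jpg") );
  image( master.img, 0, 0 );
}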
 
winston_smith


Re: Image Morphing
« Reply #3 on: Aug 7th, 2004, 2:48am »

on Aug 7th, 2004, 1:49am, v3ga wrote:
Hello,
 concerning 1), you can take a look at http://processing.org/discourse/yabb/board_VideoCamera_action_display_num_1088589126.html where I've posted an example of edge detection. Maybe it could help you.

 
Excellent. JMyron is proving difficult to use properly... this could definitely be useful. thanks
 
winston_smith


Re: Image Morphing
« Reply #4 on: Aug 21st, 2004, 6:05am »

OK, for anyone who gives a damn, here's an interesting lil problem:
 
Basically, I need to be able to take just one frame from video and somehow "normalize" the light in it (for lack of a more correct word). Uneven lighting screws up the edge/blob detection. Is there any way to somehow adjust individual brightness per pixel to make too-dark pixels lighter and too-light pixels darker such that the final image has an even distribution of light/dark?
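From what I've been reading, the usual name for this seems to be histogram equalization: build a histogram of the brightness values, turn it into a cumulative distribution, then remap every pixel through it so the levels spread out evenly. Something like the sketch below is what I'm imagining (assuming 0..255 brightness values from the default color mode), but if there's a smarter or standard way to do it in p5, I'm all ears.
 
Code:

// Sketch: histogram equalization on brightness.
BImage equalize( BImage src ) {
  int n = src.pixels.length;
  int[] hist = new int[256];
  for ( int i = 0; i < n; i++ ) hist[ (int) brightness(src.pixels[i]) ]++;

  // cumulative distribution, scaled back to 0..255
  int[] cdf = new int[256];
  int sum = 0;
  for ( int v = 0; v < 256; v++ ) {
    sum += hist[v];
    cdf[v] = 255 * sum / n;
  }

  // remap each pixel through the cumulative distribution
  BImage out = new BImage( src.width, src.height );
  for ( int i = 0; i < n; i++ ) {
    out.pixels[i] = color( cdf[ (int) brightness(src.pixels[i]) ] );
  }
  return out;
}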
 
-winston
 