How to replace each pixel of a video feed with an image?


Hello, talented & helpful Processing community :) I am Barbara. I am pretty new to Processing and I am working on a project that is giving me trouble right now. I figured maybe someone here would hear my call and help out!

So here is the deal: I have a pretty simple sketch that displays the video from the webcam feed in pixelated form. What I am trying to do is REPLACE each pixel with an image. Let's say I have 3 folders called Blue Folder, Red Folder, and Green Folder, and in each folder I have 100 pictures. I want my code to randomly grab a picture from the matching folder to replace each pixel, so all red pixels will grab pictures from the red folder. The result would look like a collage of hundreds of photos.

Do you think that is possible? I have been on this for over 2 weeks and cannot figure it out :(

HERE IS MY BASIC CODE:

import processing.video.*;

int blockSize = 10;
int cols, rows;           // how many blocks fit across and down
Capture video;
color[] collageColors;    // one sampled color per block

void setup() {
  size(displayWidth, displayHeight);
  noStroke();
  background(0);
  video = new Capture(this, displayWidth, displayHeight);
  cols = width / blockSize;
  rows = height / blockSize;
  collageColors = new color[cols * rows];
  video.start();
}

// Called whenever a new frame arrives; sample one color per block.
// Reading the frame here means draw() must not call video.read() too.
void captureEvent(Capture v) {
  v.read();
  for (int j = 0; j < rows; j++) {
    for (int i = 0; i < cols; i++) {
      collageColors[j*cols + i] = v.get(i*blockSize, j*blockSize);
    }
  }
}

void draw() {
  // Draw one flat-colored square per block. Drawing the raw video
  // underneath is unnecessary, since the squares cover it anyway.
  for (int j = 0; j < rows; j++) {
    for (int i = 0; i < cols; i++) {
      fill(collageColors[j*cols + i]);
      rect(i*blockSize, j*blockSize, blockSize-1, blockSize-1);
    }
  }
}

Answers

  • Please edit your post (gear icon in the top right corner of your post), select your code and hit ctrl+o to format your code. Make sure there is an empty line above and below your code.

    You need to partition your video image first. There is no need to work with video to start: you can work with a PImage and switch to video as a last step, since that transition only requires a handful of lines of code.

    This is relevant and might be useful (or at least, you can see it is doable): https://forum.processing.org/two/discussion/comment/97359/#Comment_97359

    There are different tasks that you need to accomplish:

    • Can you read the files from a folder and list them?
    • Can you load the different images? Notice that you have to load them in their final reduced form, otherwise you are going to have memory issues. Check the get() function with 4 parameters (under PImage) or use resize(). A minimal sketch covering these first two tasks follows this list.
    • How many mini-images per image? Do you know how to partition your main image?
    • Are you familiar with imageMode? colorMode? Check the reference...
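
    Something along these lines (the folder name "tiles" and the tile size are placeholder assumptions, not anything fixed):

      PImage[] tiles;
      int tileSize = 10;

      void setup() {
        size(200, 200);
        // Assumes a folder named "tiles" exists inside the sketch's data
        // folder; listFiles() would return null otherwise.
        File dir = new File(dataPath("tiles"));
        ArrayList<PImage> loaded = new ArrayList<PImage>();
        for (File f : dir.listFiles()) {
          String name = f.getName().toLowerCase();
          if (name.endsWith(".jpg") || name.endsWith(".png")) {
            PImage img = loadImage(f.getAbsolutePath());
            img.resize(tileSize, tileSize);  // shrink immediately to avoid memory issues
            loaded.add(img);
          }
        }
        tiles = loaded.toArray(new PImage[0]);
        println("Loaded " + tiles.length + " tiles");
      }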

    Kf

  • Wow, thank you so much kfrajer for the cue to format the code in the forum. Really appreciate you taking the time to do it!

    For the code you shared, I don't quite understand the link with my project... it is about displaying 1 video multiple times. Could you maybe give me more details on how you think it could apply to what I am trying to do? :)

    What I am trying to achieve is to replace all the pixels in a picture (or video) with small images. At first, I wanted them to come from a Google Image feed, but that seems too difficult. So I figured I would just make "image banks" for Red, Green & Blue.

    Right now in my code, the video displayed from my webcam is "broken" into, let's say, maybe 1000 pixel squares. I need to find a way to have my code sort those pixels into Blue, Green, and Red. Then, for each category, randomly grab a picture from the corresponding folder to replace it.

    The end result should look like the attached picture here.


  • It IS possible to make a video out of thousands of pictures.

    However:

    1. local images -- you are right that it is not possible to replace video pixels with images drawn from a web service, live. You can't download thousands of pictures a second, every second -- and even if you could, it would be a bad idea.

    2. bins -- you won't get results even close to your "end result" image by using three directories of images, because color doesn't work that way -- most video pixels and most images aren't red, green, or blue; they are some combination of all three, or almost none (grey-black), or all (white). You need to break your R, G, B evaluation into a number of levels and then categorize your images into the matching bins (see the first sketch after this list). 27 bins with one image each would be the bare minimum; 64 bins with 4 images each (256 images) would start to give you better color effects.

      levels  bins
      3       27  = 3 × 3 × 3
      4       64  = 4 × 4 × 4
      5       125 = 5 × 5 × 5
      6       216 = 6 × 6 × 6
      7       343 = 7 × 7 × 7
      8       512 = 8 × 8 × 8
      
    3. static -- if the component pictures for each pixel change every frame (say 16-24fps) then the whole thing will look like static, not a photo mosaic. It will only look impressively "picture-y" if they are more stable -- for example, if you check against the previous input video frame and only update the parts of the screen that changed (see the second sketch after this list). The bad news is this adds complexity; the good news is it could reduce the screen drawing load.

    4. speed -- your demo sketch is extremely ambitious -- at 2560 x 1600 with 10x10 pixel images you want to draw 40,960 images each video frame. Even if they are very tiny, even if they are preloaded, and even if you cut down on that number by only drawing the changed ones, that is a lot. You may also need to go to a much lower resolution, get stronger hardware, or switch to a faster language (like C++) -- unless you don't need this in real time. If you can pre-record the video and it is okay to convert it slower than real time (then play it back at normal speed at the end), speed matters less.
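
    To make point 2 concrete, here is a rough sketch of the binning step (the function name is illustrative, not from any library):

      // Quantize each RGB channel into `levels` steps, giving levels^3 bins;
      // e.g. levels = 4 reproduces the 64 bins from the table above.
      int colorToBin(color c, int levels) {
        int r = int(red(c))   * levels / 256;  // each channel maps to 0 .. levels-1
        int g = int(green(c)) * levels / 256;
        int b = int(blue(c))  * levels / 256;
        return (r * levels + g) * levels + b;  // combined index 0 .. levels^3 - 1
      }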
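
    And for point 3, one possible way to decide whether a block has changed enough to be redrawn (the names and the threshold scale are assumptions):

      // Returns true if a block's sampled color moved more than `threshold`
      // (on a 0-255 scale) since the previous frame, so only changed blocks
      // get a fresh random tile.
      boolean blockChanged(color now, color before, float threshold) {
        float d = abs(red(now) - red(before))
                + abs(green(now) - green(before))
                + abs(blue(now) - blue(before));
        return d / 3.0 > threshold;
      }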

  • You can also stick to HSB color mode and base your concept on only hue or brightness. You don't have to start with fine granularity: you can go with coarse granularity and play with the random image selection, to see how fast a static pixelated image can change, and check the load conditions as you start pushing toward fine granularity. Make the granularity adjustable in your code so you can see the effect of loading your computer with many elements.

    The link above was a crude example that you could use... maybe not too good an example... One thing is that I don't think you should work with single pixels; consider working with mini square sections of your image instead. In these mini-square cells, you can average the color to come up with a metric to use for your random image selection. Then you pick an image and replace the cell's content with the randomly selected image.
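
    For example, a rough sketch of averaging one cell (the function name is mine; it assumes the default 0-255 ranges):

      // Average brightness over one blockSize x blockSize cell of an image.
      // Brightness averages safely; note that hue is circular (it wraps
      // around), so averaging hue naively can misbehave on reddish cells.
      float cellBrightness(PImage img, int x, int y, int blockSize) {
        float sum = 0;
        for (int j = y; j < y + blockSize; j++) {
          for (int i = x; i < x + blockSize; i++) {
            sum += brightness(img.get(i, j));
          }
        }
        return sum / (blockSize * blockSize);
      }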

    Kf

  • @kfrajer -- HSB colorMode is a really great suggestion!

    I would still suggest several "H" bands and at least 3 "S" and "B" bands. A random mix of unsaturated and bright/dark photos that are full of "green" but don't look green is the reason people look at their code and say "this isn't working!" when it is working -- it just looks terrible. When you replace a pixel region with an image of the same hue, if the S+B aren't at all close then it looks wrong.

  • Really good point @jeremydouglass.

    Kf

  • Thank you very much @kfrajer and @jeremydouglass ! This is really helping me.

    I don't really need a video, I just need a self-portrait, using a webcam, that will result in a photo collage.

    So based on your comments, what I think I should do is have the webcam activate, show a countdown (3, 2, 1...), and take 1 still image. This still image will then be displayed as a collage.

    To make the collage, I need enough pictures so that we see the "big picture" correctly, but not too many, otherwise my computer won't be able to handle it (as you said).

    Thank you for underlining that more bins give a more accurate color effect. I totally agree, and I was definitely planning on refining the color aspect of the project later on... once I could at least manage the basis, which is to replace pixels with pictures.

    Which I am not even able to do at this point, because I have no idea how :(

  • Here is an interesting idea. You need to come up with your metric concept for the pixels in your main image (the source image). Let's call the pictures in your folder tiles, so we agree on names for the different components of your project. Let's assume you have 100 tiles; you could have 1000 if you want. The key concept is that when you load them, you make sure you resize them, so you are managing the miniature version of your images at all times.

    You don't need to divide your tiles into subfolders; keep them all in the same folder. When you load the images and turn them into tiles, also compute the average hue, saturation, and brightness of each loaded image. You will end up with a list of tiles as PImages and a set of H-S-B values describing each of these images.

    On the other hand, you take each pixel (or mini region) in your source and measure its H, S, B values. Your algorithm then takes the H, S, B of that pixel and searches the loaded set of tile-HSB pairs for the tile that best matches the H-S-B values from your source image.

    The idea is to automate this process, unless you want to classify your photos offline, or your pictures are already defined and mapped to a color scheme.
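
    A rough sketch of that matching step (it assumes a tileHSB array of average {hue, saturation, brightness} values filled in at load time; all names here are hypothetical):

      // Return the index of the tile whose average H, S, B is closest to the
      // cell's values. Uses squared distance, which is enough for ranking,
      // and ignores hue wrap-around for simplicity.
      int bestTileFor(float h, float s, float b, float[][] tileHSB) {
        int best = 0;
        float bestDist = Float.MAX_VALUE;
        for (int t = 0; t < tileHSB.length; t++) {
          float dh = h - tileHSB[t][0];
          float ds = s - tileHSB[t][1];
          float db = b - tileHSB[t][2];
          float dist = dh*dh + ds*ds + db*db;
          if (dist < bestDist) {
            bestDist = dist;
            best = t;
          }
        }
        return best;
      }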

    Kf
