How do I send a downsampled webcam video's color pixels (as greyscale) through UDP?

So I'm ultimately trying to do this exactly: https://vimeo.com/80364336

I've talked to Adam about how he did it, and he's using openFrameworks. He said he takes video from his webcam, down-samples it to those big pixels, then sends the greyscale (0-255) array information through UDP to Cinema 4D, which has a videoIn Python script (as an effector) that takes that information and displays the shading on a matrix of cubes (one for each pixel in his down-sampled array).

I'm using Processing 3.0, and he said it would work just the same if I got everything working right. Well, I found code and have gotten it to where the output is very similar to his downsampled video.

import processing.video.*;

// Size of each cell in the grid (ratio of window size to video size).
// At 640x480 with videoScale = 8, that is an 80x60 grid of big pixels.
// (At 1024x576 it would be a 128x72 grid.)
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {
  size(640, 480);
  // Initialize columns and rows
  cols = width/videoScale;
  rows = height/videoScale;
  background(0);
  // Capture at the downsampled resolution directly
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {  
  video.read();
}

void draw() {
  video.loadPixels();
  // Loop over the grid of downsampled cells
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      // Top-left corner of this cell, in window coordinates
      int x = i*videoScale;
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];
      fill(c);
      stroke(0);
      rect(x, y, videoScale, videoScale);
    }
  }
}
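The loop above draws each cell in color, but the same traversal can pack greyscale values into a byte array suitable for one UDP datagram. A minimal plain-Java sketch of that packing step (the 0.30/0.59/0.11 luma weights are one common choice for RGB-to-grey conversion, not necessarily what Adam used; Processing's own `brightness()` is computed slightly differently):

```java
// Pack an RGB pixel array (like video.pixels) into a greyscale
// byte array, one byte (0-255) per downsampled pixel.
public class GreyPack {
    public static byte[] pack(int[] pixels) {
        byte[] grey = new byte[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            int c = pixels[i];
            int r = (c >> 16) & 0xFF;  // red channel
            int g = (c >> 8) & 0xFF;   // green channel
            int b = c & 0xFF;          // blue channel
            // Common luma weights; other weightings work too
            grey[i] = (byte) Math.round(0.30 * r + 0.59 * g + 0.11 * b);
        }
        return grey;
    }

    public static void main(String[] args) {
        // Tiny fake frame: white, black, mid grey
        int[] pixels = { 0xFFFFFF, 0x000000, 0x808080 };
        byte[] g = pack(pixels);
        for (byte v : g) System.out.println(v & 0xFF);
    }
}
```

An 80x60 grid packs to 4800 bytes, which fits in a single UDP datagram.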

The above code gets me basically in the ballpark: a video downsampled to an array size manageable for UDP transfer. I've also been able to get a simple UDP message sent from Processing 3.0 to Cinema 4D in a python tag; the message (which loops in Processing) arrives each time I advance a frame. So in theory I'm getting there.

import hypermedia.net.*;

int port = 20000;
String ip = "127.0.0.1";
String message = "Hello";
UDP udpTX;

void setup() {
  udpTX = new UDP(this);
  udpTX.log(true);
  noLoop();
}

void draw() {
  udpTX.send(message, ip, port);
  delay(499);
  loop();
}

With this UDP transfer, the string "Hello" gets sent in a loop with a little less than half a second of delay. So, to the question: how do I meld the two? How do I have the camera show the down-sampled video and, at the same time, send the black-and-white "big" pixel color data through the UDP connection? I'm new to this, so filling in the gaps is a big challenge, but I'm trying! If I can't get help I'll probably have to chalk this one up as over my head. Hopefully someone here is a genius who can help me. :D
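One way to meld them is to have `draw()` pack the frame's greyscale bytes and send the whole array as a single datagram each frame, alongside (or instead of) drawing the rects. hypermedia.net's `send()` takes a String, so for raw bytes it can be simpler to drop down to `java.net.DatagramSocket`, which works inside Processing since sketches are Java. This is a hedged sketch of that idea, not Adam's actual code; the loopback receiver below only stands in for the Cinema 4D python tag:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class GreyUdp {
    // Send one frame's greyscale bytes as a single UDP datagram.
    // An 80x60 frame is 4800 bytes, well under the ~64 KB datagram limit.
    public static void send(byte[] grey, String ip, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                grey, grey.length, InetAddress.getByName(ip), port);
            socket.send(packet);
        }
    }

    // Blocking receive of one datagram, trimmed to its actual length.
    public static byte[] receiveOne(DatagramSocket socket, int maxLen) throws Exception {
        byte[] buf = new byte[maxLen];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet);
        byte[] out = new byte[packet.getLength()];
        System.arraycopy(buf, 0, out, 0, out.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] frame = new byte[80 * 60];  // one fake downsampled frame
        for (int i = 0; i < frame.length; i++) frame[i] = (byte) (i % 256);

        // Receiver bound to the port your sketch already uses (20000)
        try (DatagramSocket rx = new DatagramSocket(20000)) {
            send(frame, "127.0.0.1", 20000);
            byte[] got = receiveOne(rx, 65535);
            System.out.println("received " + got.length + " bytes");
        }
    }
}
```

In the Processing sketch you would call the send step from `draw()` with the packed greyscale array; on the Cinema 4D side, the python tag would read the datagram and index byte `i + j*cols` for the cube at column `i`, row `j`.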
