I'm doing some quick prototyping to check whether Processing can do motion detection via frame differencing and use the amount of change to drive a movie's playback speed. I stitched together the FrameDifferencing example and the GSVideo Speed example in Processing 2.0a8. The code is as follows:
/**
* GSVideo movie speed example.
*
* Use the Movie.speed() method to change
* the playback speed.
*
*/
import processing.video.*;
Movie movie;
int numPixels;
int[] previousFrame;
Capture video;
int movementSum;
int newMovementSum;
void setup() {
  size(320, 240);
  background(0);
  movie = new Movie(this, "balloon.ogg");
  movie.loop();
  PFont font = loadFont("DejaVuSans-24.vlw");
  textFont(font, 24);
  video = new Capture(this, width, height);
  video.start();
  numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();
  newMovementSum = 1;
}

void movieEvent(Movie movie) {
  movie.read();
}
void draw() {
  if (video.available()) {
    // When using video to manipulate the screen, use video.available() and
    // video.read() inside the draw() method so that it's safe to draw to the screen
    video.read();       // Read the new frame from the camera
    video.loadPixels(); // Make its pixels[] array available
    movementSum = 0;    // Amount of movement in the frame
    for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
      color currColor = video.pixels[i];
      color prevColor = previousFrame[i];
      // Extract the red, green, and blue components from the current pixel
      int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      // Extract the red, green, and blue components from the previous pixel
      int prevR = (prevColor >> 16) & 0xFF;
      int prevG = (prevColor >> 8) & 0xFF;
      int prevB = prevColor & 0xFF;
      // Compute the difference of the red, green, and blue values
      int diffR = abs(currR - prevR);
      int diffG = abs(currG - prevG);
      int diffB = abs(currB - prevB);
      // Add these differences to the running tally
      movementSum += diffR + diffG + diffB;
      //newMovementSum = movementSum/1000;
      // Render the difference image to the screen
      pixels[i] = color(diffR, diffG, diffB);
      // The following line is much faster, but more confusing to read
      //pixels[i] = 0xFF000000 | (diffR << 16) | (diffG << 8) | diffB;
      // Save the current pixel for comparison with the next frame
      previousFrame[i] = currColor;
    }
    updatePixels();
    // Map the total movement onto a playback speed; the 3,000,000 cap is an
    // arbitrary guess -- tune it to your camera and lighting
    float newSpeed = map(constrain(movementSum, 0, 3000000), 0, 3000000, 0.1, 2.0);
    movie.speed(newSpeed);
    text(nfc(newSpeed, 2) + "X", width - 80, 30);
  }
}
Hi, I'm trying to do blob tracking using multiple Kinects. It runs fine: I draw the image from each Kinect side by side, apply opencv.threshold() to each, and everything works.
However, I noticed that when I do blob detection, all the blobs end up overlaid on the image from the first Kinect; they seem to be drawn relative to the (0, 0) origin only, whereas I want the blobs drawn on top of each image. Is there any way to solve this?
Here's the snippet of code that I use:
for (int i = 0; i < device_count; i++) {
  assignPixels(depth_frame_[i], kinect_depth_[i]);
  opencv[i].copy(depth_frame_[i]);
  opencv[i].threshold(100);
  image(opencv[i].image(), 640*i, 0); // no problem up to this point
}
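The OpenCV library reports blob coordinates relative to the image it analysed, with origin at (0, 0), which is why every device's blobs land on the first Kinect's image. One fix is to translate the coordinate system by each image's horizontal offset before drawing that device's blobs. A hedged sketch of the idea, assuming the hypermedia OpenCV library's blobs() call (the minArea/maxArea/maxBlobs values here are arbitrary and need tuning):

```processing
import java.awt.Rectangle; // goes at the top of the sketch; Blob.rectangle is a java.awt.Rectangle

for (int i = 0; i < device_count; i++) {
  image(opencv[i].image(), 640*i, 0);

  // detect blobs in this device's thresholded image
  Blob[] blobs = opencv[i].blobs(100, 640*480/2, 20, false);

  pushMatrix();
  translate(640*i, 0); // shift the origin onto this Kinect's image
  noFill();
  stroke(255, 0, 0);
  for (int b = 0; b < blobs.length; b++) {
    Rectangle r = blobs[b].rectangle;
    rect(r.x, r.y, r.width, r.height); // bounding box, now in the right place
  }
  popMatrix();
}
```

Because the translation is wrapped in pushMatrix()/popMatrix(), the offset applies only to that device's blob drawing and resets before the next iteration.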
I'd like to make an ellipse with a gradient stroke that fades from white to fully transparent, similar to this:
It's taken from the cover of The Mars Volta's latest album, Noctourniquet, and I really like those subtle circles, so I'm looking to recreate them in Processing. Does anybody know how to do it? I tried to hack the radial gradient example, but so far I've failed, because those gradients were made to fill a shape, not to stroke one.
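Since stroke() only takes a single color, one way to fake a gradient stroke is to layer many thin ellipse outlines, fading the alpha from opaque at the inner edge to zero at the outer edge. A minimal sketch of the idea (the diameter and stroke width values are arbitrary):

```processing
void setup() {
  size(400, 400);
  smooth();
  noLoop();
}

void draw() {
  background(0);
  noFill();
  float strokeW = 30;       // total width of the faded "stroke"
  float baseDiameter = 200; // diameter where the stroke is fully opaque
  for (float i = 0; i < strokeW; i++) {
    // alpha runs 255 -> 0 as we move outward
    stroke(255, map(i, 0, strokeW, 255, 0));
    ellipse(width/2, height/2, baseDiameter + 2*i, baseDiameter + 2*i);
  }
}
```

Reversing the map() call fades inward instead of outward, and drawing the rings at 0.5-pixel steps smooths out any visible banding.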
I hope I'm not asking in the wrong section here :D Anyway, I'm looking at how to stream or get a feed from a camera via USB. I have a Sony HDR CX-100 lying here in my room, and I wonder whether I can use it as a video input with the JMyron library. I understand that the library can be used over a FireWire port, but there are no FireWire ports on either my camera or my MacBook. So, is this possible? Or do I need to add something like a USB capture card?
I'm a bit ashamed that I have to ask this question, because it seems like easy work, but I've been having some issues using a TUIO cursor to toggle on/off buttons in Processing. I modified the ImageButton example to come up with a grid of images as buttons, controlled by the TUIO cursor. I just want the TUIO cursor to toggle the buttons in the grid according to its position. Everything works, except that sometimes the buttons simply refuse to react to the input from the TUIO cursor. I suspect a simple logic error. Here's the code:
import TUIO.*;
TuioProcessing tuioClient;
ImageButtons[][] buttons = new ImageButtons[20][20];
void setup()
{
  size(400, 600);
  background(102, 102, 102);
  // Define and create the image buttons
  PImage b = loadImage("butt-off-resize.jpg");
  PImage d = loadImage("butt-on-resize.jpg");
  int x = width/2 - b.width/2;
  int y = height/2 - b.height/2;
  int w = b.width;
  int h = b.height;
  for (int i = 0; i < 20; i++) {
    for (int j = 0; j < 20; j++) {
      buttons[i][j] = new ImageButtons(i*w, j*h, w, h, b, d);
    }
  }
  // start listening for TUIO messages; without this, the declared
  // tuioClient never connects (standard TuioProcessing setup)
  tuioClient = new TuioProcessing(this);
}
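One thing worth checking for the intermittent misses is whether every live cursor is re-tested on every frame, rather than only inside the TUIO add/update callbacks, since cursor events can arrive between frames and get dropped. A hedged sketch of a per-frame hit test, assuming the classic TuioProcessing client API (getTuioCursors(), TuioCursor.getScreenX()/getScreenY()); the press() and display() methods on ImageButtons are hypothetical placeholders for whatever your class exposes, and w/h are assumed to be promoted from setup() locals to fields:

```processing
import java.util.Vector;

void draw() {
  background(102, 102, 102);
  Vector cursors = tuioClient.getTuioCursors();
  for (int c = 0; c < cursors.size(); c++) {
    TuioCursor tc = (TuioCursor) cursors.elementAt(c);
    // convert normalized TUIO coordinates to sketch pixels
    int cx = tc.getScreenX(width);
    int cy = tc.getScreenY(height);
    // integer division maps the pixel position straight to one grid cell,
    // so only one button is tested per cursor
    int i = constrain(cx / w, 0, 19);
    int j = constrain(cy / h, 0, 19);
    buttons[i][j].press(cx, cy); // hypothetical toggle call
  }
  // redraw the whole grid every frame
  for (int i = 0; i < 20; i++) {
    for (int j = 0; j < 20; j++) {
      buttons[i][j].display(); // hypothetical draw call
    }
  }
}
```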
I was playing around with the Open Kinect library, and it was awesome; it really makes background subtraction easy. However, I was trying to combine the depth information from the Kinect and pass it to OpenCV using the following code
Is there any way to count the unique elements in an ArrayList? I have a number of elements inside an ArrayList, and I want to count how often each element appears. I understand that Java has a method such as frequency for this, but I don't know how to do it natively in Processing.
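Processing sketches compile to Java, so java.util is available directly: a Set collapses the list to its unique elements, and Collections.frequency() counts each one. A minimal sketch of the idea, shown as plain Java (inside a Processing sketch you would drop the class wrapper and use println() instead of System.out.println()):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;

public class UniqueCount {
    public static void main(String[] args) {
        ArrayList<String> items = new ArrayList<String>(
                Arrays.asList("red", "blue", "red", "green", "red", "blue"));

        // A set collapses duplicates; LinkedHashSet keeps first-seen order
        LinkedHashSet<String> unique = new LinkedHashSet<String>(items);
        System.out.println(unique.size() + " unique elements"); // 3 unique elements

        // Collections.frequency counts how often each element appears
        for (String s : unique) {
            System.out.println(s + ": " + Collections.frequency(items, s));
        }
    }
}
```

Collections.frequency rescans the list once per unique element; for very large lists, a single pass that increments counts in a HashMap<String, Integer> is faster.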
I'm working on a school project in which I want to detect and track a company logo on a business card using a webcam. The question is pretty basic: I'm a bit lost as to which feature I should use. I've tried the face detection in the OpenCV library, but it always ends up tracking my face instead. I've also tried blob detection, but I don't know how to keep tracking only the logo, because it detects every shape it finds.
Sorry if this question seems basic; I'm pretty much lost in the computer vision field.
Does anyone know how to create multiple lines of text? I know how to create a canvas and start typing on it, but the problem is that once the string's width grows bigger than the canvas's width, I want the typing cursor to move automatically below the word currently being typed. I understand that I might achieve this with a text field, but I'm not that aesthetically pleased with having a text field on my canvas. So is there any other solution?
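Processing's text() has a box variant, text(str, x, y, w, h), that word-wraps the string automatically inside the given rectangle, so you can append keystrokes to one String and let Processing handle the line breaks without any text field. A minimal sketch of the idea:

```processing
String typed = "";

void setup() {
  size(400, 300);
  textFont(createFont("SansSerif", 16));
}

void draw() {
  background(255);
  fill(0);
  // the four-argument box form wraps text inside the rectangle
  text(typed, 10, 10, width - 20, height - 20);
}

void keyPressed() {
  if (key == BACKSPACE) {
    if (typed.length() > 0) {
      typed = typed.substring(0, typed.length() - 1);
    }
  } else if (key != CODED && key != ENTER && key != RETURN) {
    typed += key; // append printable characters only
  }
}
```

Text that overflows the bottom of the box is simply not drawn, so for long input you would grow the box height or scroll the string.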