I am looking into a new language for making a VJ program: a piece of software that can mix separate channels of video, receive user input, add effects, and then output the result.
I currently use Quartz Composer to build my video-mixing programs. I was told it wouldn't work that well, but I use it all the time to mix three video channels plus a live feed and output to Syphon at 60 fps. Now, however, Quartz Composer faces deprecation.
I know Processing can handle each of these features individually, but since video mixing is usually hardware-intensive, is it the best environment in which to develop such a program? Is the language suitable for this kind of software? I looked around and did not find any examples of someone using Processing to build one. I like the wide array of libraries and the great documentation and community support, and being able to package and distribute a standalone application is huge; Quartz Composer fails at this miserably.
My other idea is to write it in something like Cinder or openFrameworks, if Java is not suited to handling this kind of interaction and computation.
How can I correctly wrap just the Blob parts of this sketch to send out via Syphon?
This sketch takes in the Kinect's clean IR feed, filters it, and then draws blobs on top. I want to send out just the blobs via Syphon.
I gathered that the blobs are not images but shapes, so they must be collected into one scene (a PGraphics?) that Syphon can grab and send out. However, I was only able to send out a blank Syphon feed; simply putting a canvas.beginDraw(); above and a canvas.endDraw(); below the blob code did not seem to gather the blobs correctly.
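If I understand what went wrong, every drawing call between beginDraw() and endDraw() has to be made on the PGraphics itself (canvas.beginShape(), canvas.vertex(), and so on); bare drawing calls still land on the main sketch surface, which would explain the blank feed. Here is a minimal sketch of the approach, using the webcam just to keep it self-contained; the server name "BlobsOnly" and the threshold/blob parameters are placeholders I made up:

import codeanticode.syphon.*;
import hypermedia.video.*;
import java.awt.Point;

OpenCV opencv;
SyphonServer server;
PGraphics canvas;

void setup() {
  size(640, 480, P3D);
  canvas = createGraphics(640, 480, P3D); // Syphon needs a GL-backed surface
  opencv = new OpenCV(this);
  opencv.capture(640, 480);
  server = new SyphonServer(this, "BlobsOnly"); // placeholder server name
}

void draw() {
  opencv.read();
  opencv.threshold(80); // placeholder threshold
  Blob[] blobs = opencv.blobs(100, 640 * 480 / 3, 20, true);

  canvas.beginDraw();
  canvas.background(0);
  canvas.fill(255);
  for (Blob b : blobs) {
    canvas.beginShape(); // note: canvas.*, not bare drawing calls
    for (Point p : b.points) {
      canvas.vertex(p.x, p.y);
    }
    canvas.endShape(CLOSE);
  }
  canvas.endDraw();

  image(canvas, 0, 0);      // optional local preview
  server.sendImage(canvas); // send only the blob layer out via Syphon
}

The key point seems to be that the PGraphics is a completely separate drawing surface: everything that should appear in the Syphon feed, including the background clear, has to be drawn through the canvas reference.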
I have been using a Kinect's IR feed to track blobs with OpenCV, doing fun things with the points, and sending the result out through Syphon.
Earlier, with my webcam, I was able to use the opencv.absDiff(); method to remove the background by recording a reference frame with the space bar. However, when I switched to the Kinect's IR feed, the background subtraction stopped working. This is a mission-critical feature, so I am trying to work out how it can be done.
I got the IR feed into OpenCV through the opencv.copy(); method, and I believe that is somehow the issue. According to the OpenCV reference sheet, you should be able to specify whether the comparison uses the capture source or the buffer, but I am not sure how.
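If the reference sheet means that remember() accepts OpenCV.SOURCE or OpenCV.BUFFER as an argument, then the flow I am after would look roughly like this (a minimal sketch; the use of allocate() instead of capture(), since no capture device is involved, and the threshold of 80 are my assumptions):

import hypermedia.video.*;
import SimpleOpenNI.*;

OpenCV opencv;
SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableIR();
  opencv = new OpenCV(this);
  opencv.allocate(640, 480); // buffer only: frames are copied in, not captured
}

void draw() {
  context.update();
  opencv.copy(context.irImage()); // Kinect IR into the OpenCV buffer
  opencv.absDiff();               // diff current buffer against the remembered frame
  opencv.threshold(80);           // placeholder threshold
  image(opencv.image(), 0, 0);    // show the thresholded difference
}

void keyPressed() {
  // press space once to store the current IR frame as the background reference
  if (key == ' ') opencv.remember(OpenCV.BUFFER);
}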
Any suggestions?
-----
import codeanticode.syphon.*;
import hypermedia.video.*;
import java.awt.Rectangle;
import java.awt.Point;
import SimpleOpenNI.*;
OpenCV opencv;
SimpleOpenNI context;
SyphonServer syphonout;
int w = 640;
int h = 480;
int cvthreshold = 80;
int blur = 5;
PImage img; // This becomes the infrared feed from the Kinect
boolean find = true; // Not really sure what this does
PFont font;
void setup() {
  size(w, h, P3D);
  context = new SimpleOpenNI(this);
  context.enableIR();
  opencv = new OpenCV(this);
  opencv.capture(w, h);
  syphonout = new SyphonServer(this, "SyphonOut");
  println("Drag mouse inside sketch window to change threshold");
  println("Press space bar to record background image");
}
void draw() {
  background(0);
  context.update(); // Updates the Kinect feed
  img = context.irImage(); // Assigns IR feed to the global "img" (no local redeclaration)
  opencv.copy(img); // Gives Kinect IR to OpenCV's buffer