Problem using images from a webcam as textures in OpenGL
in Contributed Library Questions • 2 years ago
Hello everybody,
I am working on a Java application that uses Processing for its OpenGL output. Specifically, I am using the MT4j framework, which integrates the handling of touch events, since the final application is supposed to run on a touchscreen. It doesn't matter whether you know the framework, though - the crucial part is that it uses Processing to display GL content.
The problem I haven't been able to solve is the following: I grab PImages from my webcam and want to display them on a simple rectangular plane.
This is what I do before drawing the whole scene and after setting up the cameras:
myImg.loadPixels(); // myImg is a PImage
myCamera.getCameraFrame(myImg.pixels, 0); // CLCamera method that grabs the current frame into myImg.pixels
myImg.updatePixels();
rectangle.setTexture(myImg); // framework method that sets myImg as the texture of the rectangle object
This, however, does not work: no texture is displayed on the rectangular plane. The following code, on the other hand, DOES work:
myImg.loadPixels();
myCamera.getCameraFrame(myImg.pixels, 0);
myImg.updatePixels();
myImg.save("test1.jpg");
anotherImg = myApp.loadImage("test1.jpg");
rectangle.setTexture(anotherImg);
So if I save the image to disk and reload it afterwards, everything is fine - the webcam images are displayed properly. If I skip that step and use the captured PImage directly, nothing is displayed at all. Obviously I cannot save and reload on every frame, because that is awfully slow...
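For what it's worth, the disk round-trip effectively just hands setTexture() a freshly allocated pixel buffer, and the same thing can be produced in memory. In Processing itself that would roughly be `anotherImg = myImg.get();` (PImage.get() returns a copy). The plain-Java sketch below only illustrates the buffer copy itself; the class name and pixel values are hypothetical:

```java
// Illustrative stand-in for copying a PImage's pixel buffer in memory
// instead of saving to disk and reloading. The int[] arrays play the
// role of PImage.pixels; the ARGB values are made up for the demo.
public class PixelCopyDemo {
    public static void main(String[] args) {
        int[] framePixels = {0xFF102030, 0xFF405060, 0xFF708090}; // "captured" frame
        int[] freshCopy = new int[framePixels.length];
        System.arraycopy(framePixels, 0, freshCopy, 0, framePixels.length);
        freshCopy[0] = 0xFFFFFFFF; // mutating the copy leaves the original frame untouched
        System.out.println(framePixels[0] != freshCopy[0]); // prints "true"
    }
}
```

Whether that in-memory copy is enough to make the framework pick up the new texture is exactly what I'm unsure about.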
Can anybody help me out with this? Are there properties of GLTextures that are set in one case but not in the other? Or is there a different approach to the whole thing I could try?
Thanks a lot for any hint that might help me out! I've searched the web again and again and just can't find a solution...
Philipp
P.S.: The sketch below uses the same webcam API that I use. There, it is no problem to use the freshly captured images directly - everything is displayed as intended. That is why I think the OpenGL part of my program must be the problem...
// Imports
import cl.eye.*;

// Camera Variables
int numCams;
CLCamera myCameras[] = new CLCamera[2];
PImage myImages[] = new PImage[2];
int cameraWidth = 640;
int cameraHeight = 480;
int cameraRate = 30;

// Animation Variables (not required)
boolean animate = false;
float zoomVal, zoomDelta;
float rotateVal, rotateDelta;

void setup(){
  // Library loading via native interface (JNI)
  // If you see "UnsatisfiedLinkError", point this at the library path; otherwise leave it commented out.
  CLCamera.loadLibrary("C:/CLEyeMulticam.dll");
  // Verify that the native library loaded
  if(!setupCameras()) exit();
}

void draw(){
  // Loop through the available cameras and update each one
  for(int i = 0; i < numCams; i++)
  {
    // --------------------- (image destination, wait timeout)
    myCameras[i].getCameraFrame(myImages[i].pixels, (i==0) ? 1000 : 0);
    myImages[i].updatePixels();
    image(myImages[i], cameraWidth*i, 0);
  }
}

boolean setupCameras(){
  println("Getting number of cameras");
  // Check available cameras
  numCams = CLCamera.cameraCount();
  println("Found " + numCams + " cameras");
  if(numCams == 0) return false;
  // Create cameras and start capture
  for(int i = 0; i < numCams; i++)
  {
    // Print the unique identifier of each camera
    println("Camera " + (i+1) + " UUID " + CLCamera.cameraUUID(i));
    // New camera instance per camera
    myCameras[i] = new CLCamera(this);
    // ----------------------(i, CLEYE_GRAYSCALE/COLOR, CLEYE_QVGA/VGA, Framerate)
    myCameras[i].createCamera(i, CLCamera.CLEYE_COLOR_PROCESSED, CLCamera.CLEYE_VGA, cameraRate);
    // Start camera capture
    myCameras[i].startCamera();
    myImages[i] = createImage(cameraWidth, cameraHeight, RGB);
  }
  // Resize the output window
  size(cameraWidth*numCams, cameraHeight);
  println("Complete Initializing Cameras");
  return true;
}