Answers
The idea is that the camera feed will be replaced with a looped video.
The video will go through a vertex-displacement shader and be rendered as a point cloud; hence my interest in geometry shaders. I had thoughts of rendering each point as a screen-facing quad/billboard, so I could add a nice depth-of-field effect, with the circle in each billboard rendered with a softer or harder edge according to its Z-position. But I digress...
The point-cloud will then go through the Conway shader and Feedback passes.
The 'runFX' switch is supposed to be triggered when the brightest pixel in a live Kinect (v.1) depthmap image exceeds a brightness threshold, so the output will switch between the point-cloud image and the Conway shader.
At the moment, runFX is passed into the Conway shader, and determines if the shader simply passes through the input texture, or starts the Game of Life and displays that instead.
I'm thinking, though, that rather than doing this in the fragment shader, it would probably be better to create two different render functions in the sketch itself: one that simply renders the point cloud straight to the canvas, and another that grabs the latest frame from the above and feeds it into the Conway shader. That way, the capture will never be happening at the same time as the Conway shader, which should save resources.
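Something like this, maybe (a very rough structure sketch; all the names below are placeholders, not my actual variables):

    // Very rough structure sketch. All names here are placeholders:
    // 'pointCloudBuffer' holds the displaced point cloud, 'conway' is the
    // Game of Life shader, 'kinectDepth' stands in for the live Kinect v1
    // depthmap that drives the runFX switch.
    PGraphics pointCloudBuffer;
    PShader conway;
    PImage kinectDepth;
    boolean runFX = false;
    float threshold = 200;   // brightness threshold (0-255), arbitrary value

    void setup() {
      size(640, 480, P3D);
      pointCloudBuffer = createGraphics(width, height, P3D);
      conway = loadShader("conway.glsl");        // assumed filename
      kinectDepth = createImage(640, 480, RGB);  // stand-in for the depthmap
    }

    void draw() {
      updateRunFX();
      renderPointCloud();                // always update the point cloud off-screen

      if (runFX) {
        renderConway();                  // feed the latest frame into the Conway shader
      } else {
        image(pointCloudBuffer, 0, 0);   // otherwise show the point cloud directly
      }
    }

    // Set runFX when the brightest pixel in the depthmap exceeds the threshold.
    void updateRunFX() {
      kinectDepth.loadPixels();
      float maxB = 0;
      for (int i = 0; i < kinectDepth.pixels.length; i++) {
        maxB = max(maxB, brightness(kinectDepth.pixels[i]));
      }
      runFX = maxB > threshold;
    }

    void renderPointCloud() {
      pointCloudBuffer.beginDraw();
      pointCloudBuffer.background(0);
      // ... vertex-displacement / point-cloud drawing would go here ...
      pointCloudBuffer.endDraw();
    }

    void renderConway() {
      shader(conway);                    // the shader samples Processing's 'texture' uniform
      image(pointCloudBuffer, 0, 0);     // the buffer is passed straight in as that texture
      resetShader();
    }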
Wild speculation, though..
a|x
@toneburst That's a lot of work.
For now (tell me if I'm wrong): you read/write an image, 2 draw calls (at 60 fps) x 2 passes x 3 methods.
You can try https://processing.org/reference/PImage_get_.html
It should be (just) a few ticks faster: readPixels vs copyTexture.
I'll try get(), thanks for the tip.
a|x
@nabr turns out get() doesn't work on PGraphics objects, sadly.
a|x
@toneburst zip it.
@toneburst
You have to render to a rect before running get(), or pass the "canvas" (which is already an image/texture) directly as the image argument.
https://forum.processing.org/two/discussion/22385/reaction-diffusion-using-glsl-works-different-from-normal
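If I read that right, the two options look roughly like this (just an illustration, inside draw(); 'pg' stands in for the point-cloud PGraphics and 'conway' for the Game of Life shader):

    // Option 1: draw the buffer to the canvas first, then get() copies the
    // rendered canvas back as a PImage.
    image(pg, 0, 0);
    PImage frame = get();
    conway.set("inputTexture", frame);   // the uniform name is a guess

    // Option 2 (cheaper): skip get() and pass the PGraphics straight in as
    // the image argument; it already lives on the GPU as a texture.
    shader(conway);
    image(pg, 0, 0);
    resetShader();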
@nabr ah, thanks, will try that!
a|x
@toneburst Sorry for my late reply! What you've done looks amazing :) It's nice to see people play with my library.
Instead of all that pixel-loading stuff and copying the camera image onto your camFrame, I would recommend doing it like this (Processing standard):
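For example, something roughly like this (a sketch of the idea; 'cam' is an assumed name for the Capture object):

    // Read the latest camera frame and draw it into camFrame directly,
    // instead of copying pixel arrays by hand. 'cam' is an assumed name.
    if (cam.available()) {
      cam.read();
    }
    camFrame.beginDraw();
    camFrame.image(cam, 0, 0, camFrame.width, camFrame.height);
    camFrame.endDraw();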
It runs smoothly at 30 fps; I will check it with a 60 fps camera later, but I don't have one here at the moment.
Just a hint when working with Capture: sometimes the camera takes a lot of time to init and load, and then Processing has a timeout. Maybe it makes more sense to load the camera in the draw method. Here is an example: https://github.com/bildspur/dynamic-shape-projection/blob/master/src/main/kotlin/ch/bildspur/dysp/Sketch.kt#L107-L124
It is written in Kotlin, but it should be easily understandable for Java developers :)
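In plain Processing/Java, the same lazy-init idea would look roughly like this (a sketch of the pattern, not a translation of the linked code):

    import processing.video.*;

    Capture cam;

    void setup() {
      size(640, 480, P2D);
      // the camera is deliberately not started here, so a slow camera
      // can't trip Processing's setup() timeout
    }

    void draw() {
      if (cam == null) {
        // lazily create and start the camera on the first draw() call
        cam = new Capture(this, 640, 480);
        cam.start();
      }

      if (cam.available()) {
        cam.read();
      }
      image(cam, 0, 0);
    }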
Is there something else I can help you with?
@cansik thanks! And thanks for the tips. I'll change my capture setup.
I'm really excited about the VBO stuff. I used Quartz Composer a lot a few years ago, and was sometimes frustrated by the fact I couldn't use low-level OpenGL commands. It's nice to know that's possible in Processing.
a|x
@cansik just a quick note to say the alternative feedback texture method you mentioned works fine.
I haven't tried the camera setup in the draw method, as I'll eventually be replacing the camera feed with a looped video.
a|x
@cansik thanks also for your offer of assistance. I may well be taking you up on that :)
Thanks again,
a|x