Chaining Shaders and Feedback



  • edited May 2017

    The idea is that the camera feed will be replaced with a looped video.

    The video will go through a vertex-displacement shader, and be rendered as a point-cloud. Hence my interest in Geometry Shaders. I had thoughts of rendering each point as a screen-facing quad/billboard, so I could add a nice depth of field shader to render more/less soft-edged circles in each billboard according to Z-position. But I digress..

    The point-cloud will then go through the Conway shader and Feedback passes.

    The 'runFX' switch is supposed to be triggered when the brightest pixel in a live Kinect (v.1) depthmap image exceeds a brightness threshold, so the output will switch between the point-cloud image and the Conway shader.
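    The brightness-threshold check could be sketched like this in plain Java (a rough sketch, assuming the depth map arrives as 8-bit grayscale values; `shouldRunFX` and the threshold are hypothetical names, not from the actual sketch):

    ```java
    public class FXTrigger {
      // Scan a depth-map pixel array for its brightest value and report
      // whether it crosses the threshold that flips the runFX switch.
      public static boolean shouldRunFX(int[] depthPixels, int threshold) {
        int brightest = 0;
        for (int p : depthPixels) {
          int b = p & 0xFF; // assumption: 8-bit grayscale in the low byte
          if (b > brightest) brightest = b;
        }
        return brightest > threshold;
      }

      public static void main(String[] args) {
        int[] bright = {10, 200, 50};
        int[] dim    = {10, 20, 50};
        System.out.println(shouldRunFX(bright, 128)); // brightest 200 > 128
        System.out.println(shouldRunFX(dim, 128));    // brightest 50, below threshold
      }
    }
    ```

    In the real sketch the pixel array would come from the Kinect depth image each frame, and the result would drive the switch between the point-cloud and Conway outputs.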

    At the moment, runFX is passed into the Conway shader, and determines if the shader simply passes through the input texture, or starts the Game of Life and displays that instead.

    I'm thinking, rather than doing this in the fragment shader, though, it would probably be better to create two different render functions in the sketch itself, one that simply renders the point cloud straight to the canvas, and the other that grabs the latest frame from the above, and feeds it into the Conway shader. That way, the capture will never be happening at the same time as the Conway shader, which should save resources.
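    That two-path idea could look something like this in the sketch itself (a rough Processing sketch; `drawPointCloud()`, `pointCloudPG`, and `conwayShader` are assumed names, not the actual code):

    ```java
    void draw() {
      if (!runFX) {
        // cheap path: render the point cloud straight to the canvas
        drawPointCloud(g);
      } else {
        // expensive path: capture the point cloud offscreen,
        // then run the Conway shader over that frame
        pointCloudPG.beginDraw();
        drawPointCloud(pointCloudPG);
        pointCloudPG.endDraw();

        shader(conwayShader);
        image(pointCloudPG, 0, 0);
        resetShader();
      }
    }
    ```

    Only one branch runs per frame, so the offscreen capture never happens at the same time as the Conway pass.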

    Wild speculation, though..


  • edited May 2017

    @toneburst That's a lot of work.

    For now (tell me if I'm wrong): you read/write an image with 2 draw calls (at 60 fps) × 2 passes × 3 methods.

    You can try get() instead.

    It should be (just) a few ticks faster: readPixels vs. copyTexture.

  • I'll try get(), thanks for the tip.


  • @nabr turns out get() doesn't work on PGraphics objects, sadly.


  • edited May 2017


    You have to render to a rect before running get(), or pass the "canvas" (which is already an image/texture) directly as the image argument.


    void draw() {
      algorithm.set("first", first);
      first = false;
      for (int i = 0; i < N_PASS; i++) {
        pg.rect(0, 0, pg.width, pg.height); // << RECT
      }
      image(pg, 0, 0); // << PG IMAGE
    }

  • @nabr ah, thanks, will try that!


  • @toneburst Sorry for my late reply! What you've done looks amazing :) It's nice to see people play with my library.

    Instead of all that pixel-loading stuff and copying the camera image onto your camFrame, I would recommend doing it like this (Processing standard):

    camFrame.beginDraw();
    camFrame.image(cam, 0, 0);
    camFrame.endDraw();

    It runs smoothly at 30 fps. I will check it with a 60 fps camera later, but I don't have one here atm.

    Just a hint when working with Capture: sometimes the camera takes a long time to init and load, and then Processing times out. Maybe it makes more sense to load the camera in the draw method. Here is an example:

    It is written in Kotlin, but it should be easily understandable for Java developers :)
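    For reference, the lazy-init pattern looks roughly like this in plain Processing/Java (a sketch, assuming the standard processing.video Capture API; the camStarted flag is my own name):

    ```java
    import processing.video.*;

    Capture cam;
    boolean camStarted = false;

    void setup() {
      size(640, 480, P2D);
      // no camera work here, so setup() can't hit the init timeout
    }

    void draw() {
      if (!camStarted) {
        // defer the slow camera startup to the first draw() call
        cam = new Capture(this, 640, 480);
        cam.start();
        camStarted = true;
        return;
      }
      if (cam.available()) {
        cam.read();
        image(cam, 0, 0);
      }
    }
    ```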

    Is there something else I can help you with?

  • @cansik thanks! And thanks for the tips. I'll change my capture setup.

    I'm really excited about the VBO stuff. I used Quartz Composer a lot a few years ago, and was sometimes frustrated by the fact I couldn't use low-level OpenGL commands. It's nice to know that's possible in Processing.


  • @cansik just a quick note to say the alternative feedback texture method you mentioned works fine.

    I haven't tried the camera setup in the draw method, as I'll eventually be replacing the camera feed with a looped video.


  • edited June 2017

    @cansik thanks also for your offer of assistance. I may well be taking you up on that :)

    Thanks again,

