Hi,
I'm now here, so let me introduce myself.
I'm Alex, from sunny London, in the UK.
I'm new to Processing, but have some programming experience, in various languages. I've also used GLSL shaders quite a lot in Quartz Composer, but the particular job I'm working on at the moment seems difficult to achieve in QC, so I'm thinking of giving Processing a try, instead, which brings me to my questions:
Is it possible to chain several 2D fragment shaders together, so that the result of one shader is passed into the next shader as a texture? My particular scenario involves a further complication:
Is it possible to chain 2 shaders as above, and have the second shader feed back into the first, and also go through a third shader, which renders the result to the screen? I've attempted to illustrate what I have in mind below.
[shader 0] > [shader 1] > [shader 2] > [screen]
     ^            |
     +=== [prev. frame] <===+
I'm aware this is kinda throwing myself in at the deep end, but any advice would be much appreciated.
Thanks guys,
a|x
Answers
I think this might help with the chaining: https://github.com/cansik/processing-postfx I'm not sure if it would help with the feedback part, though.
a|x
Hello
I just skimmed through some pictures; Quartz Composer seems more node-based (visual programming). If you enter the world of Processing, you have to write code. With a custom request like this, there's no way around reading a topic or two on how computer graphics work.
Processing is driven by JOGL
You can ask @cansik if he can spend five more minutes adding more functionality to his awesome postfx lib.
Search in this Forum.
Just hack and explore; there is always more than one way to do something (one implementation is just faster, or has different qualities, than another).
@Lord_of_the_Galaxy has a nice solution for writing the previous frame to a texture.
I'm not really sure, but reading between the lines, some computation is already set up in the background when you start: https://github.com/processing/processing/wiki/Advanced-OpenGL#vertex-coordinates-are-in-model-space
I can't find it in the source code right now, but I think they render the framebuffer to a texture by default.
Here is my attempt to write and store data without access to a low-level API.
Or look at the source; the truth is somewhere out there :)
https://github.com/processing/processing/blob/master/core/src/processing/opengl/FrameBuffer.java#L56
Yes, that's possible. Just look through some of the code on this forum; you should find it.
I can't help now, but if this problem remains till next week, I'll post my own already tested code here. Just drop me a PM or a mention on this thread.
Thanks very much, guys, I'll follow up those tips.
@nabr QC is node-based, but also has programmable nodes (patches, in QC terminology) for GLSL shaders, OpenCL and Core Image Filter kernels and JavaScript.
I've used them quite a lot (albeit long ago), so I have a reasonable grasp of shader basics.
I've been messing around with code since the mid 90s, but never used Java, so more advanced Processing code looks unfamiliar to me. I'm sure I'll get the hang of it, though.
Thanks again, guys.
a|x
@tonebust
Okay, great! Just a few notes. I'm also new here; I use Processing for quick sketches and, of course, learning. P3D is made more for students and beginners (just to have a great time), so I feel there's some overhead, like I just want hint(EVERYTHING), especially for a Shadertoy-style mode: https://github.com/processing/processing/blob/master/core/src/processing/opengl/PGraphicsOpenGL.java#L1853
That's the reason most people here use P2D. The problem is the vertex position isn't right; they use a QUAD by default (can't find the source right now), so you have to push the vertices back:
gl_FragCoord = vec4(position.xy - 1., 0., 1.); // I put *.5 - 1; negative numbers always feel wrong, just a personal preference
Processing is also made to run on as wide a variety of devices as possible. If you're on a WIN32_LEAN_AND_MEAN machine and have updated your graphics drivers in the last 7 years, you can bypass most of that and even write to a texture directly from the fragment shader, without an FBO:
https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_shader_image_load_store.txt (sorry, Mac). The downside is you usually end up with 500 lines of OpenGL state-machine code.
Your pseudocode example from above:
@nabr Cool, thanks for that.
I think I can combine two of the shaders into one, which should simplify things somewhat.
So I need to render the shader to an FBO, then that FBO can be piped into the final shader which is displayed.
Is that correct?
I've never used FBOs before, but as I understand it, they're essentially textures that are rendered offscreen but can be used in other shaders, or fed back into the original shader for iterative effects.
Or do I need two FBOs, if I'm going to feed the result of the first shader back into the shader on the next frame?
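To make the question concrete, here's the kind of two-buffer ping-pong I have in mind, in plain Processing (the shader filenames and the prevFrame uniform are made up for illustration):

```java
PGraphics prev, curr;    // two offscreen buffers, swapped every frame
PShader feedback;        // hypothetical shader that reads the previous frame
PShader displayShader;   // hypothetical final-output shader

void setup() {
  size(640, 480, P2D);
  prev = createGraphics(width, height, P2D);
  curr = createGraphics(width, height, P2D);
  feedback = loadShader("feedback.glsl");
  displayShader = loadShader("display.glsl");
}

void draw() {
  // pass 1: render into curr, feeding last frame's result in as a texture
  feedback.set("prevFrame", prev);
  curr.beginDraw();
  curr.shader(feedback);
  curr.rect(0, 0, curr.width, curr.height);
  curr.endDraw();

  // final pass: draw curr to the screen through the display shader
  shader(displayShader);
  image(curr, 0, 0);
  resetShader();

  // swap, so this frame's result becomes the "previous frame" next time
  PGraphics tmp = prev;
  prev = curr;
  curr = tmp;
}
```

So, if I understand it right, that would indeed need two buffers, swapped each frame.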
a|x
@nabr do you mean texture coords with P2D go from -1 to 1? That does seem odd, if so..
It makes sense for vertex coords to have 0, 0, 0 at the approximate centre of the mesh, for easier rotation.
a|x
@toneburst
When I do this:
I should land here somewhere (would be interesting to debug): https://github.com/processing/processing/blob/master/core/src/processing/opengl/PGraphicsOpenGL.java#L1601-L1702
At least here: https://github.com/processing/processing/blob/master/core/src/processing/opengl/PGraphicsOpenGL.java#L1679
10min reading time: https://learnopengl.com/#!Advanced-OpenGL/Framebuffers
Thanks @nabr I will have a read.
a|x
So your first request, chaining multiple shaders, is implemented in my library. Check out the Custom Shader part of the readme to add your own shaders.
The second problem you mention sounds like a feedback-loop shader, right? You then just have to store the textures you want to keep alive over one frame. Or am I wrong in my understanding?
@nabr I'm going to extend my library with more shaders. Are there other features you would like to see added? :)
Hi @cansik!
That's cool. Will check out your library.
Re. feedback; yes, that's correct. I want to chain two shaders, then pass the result on to a third shader for rendering to the output, and also back into the first shader for feedback effects.
a|x
@cansik I think I'm more of a path-tracer guy. Also, there are already tons of shaders on the web. I like your lib! We'll keep in touch.
@toneburst you're now one week into Processing; you have to share some code :) so the community can catch up on your progress and help.
I'm experimenting; maybe it's useful, so I'll leave it here.
Change some values and see what they do.
Hi,
got sidetracked and ended up trying to do this in Quartz Composer, failed, so I'm now back in Processing...
@cansik, can I access the previous frame's texture using ppixels when using postFX?
a|x
@toneburst this is a nice technique by @Lord_of_the_Galaxy
offscreen render
ppixels = backbuffer
http://glslsandbox.com/e#207.3
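Roughly the idea, as a sketch (the uniform names here follow the Conway shader example that ships with Processing; treat them as assumptions): the fragment shader reads the previous frame from a ppixels sampler and reuses it.

```glsl
// fragment shader sketch: simple decay / trails using the previous frame
uniform sampler2D ppixels;   // previous frame's pixels, bound by the sketch
uniform vec2 resolution;     // canvas size in pixels, set from the sketch

void main() {
  vec2 uv = gl_FragCoord.xy / resolution;
  vec4 prev = texture2D(ppixels, uv);
  gl_FragColor = vec4(prev.rgb * 0.97, 1.0);  // fade the old frame slightly
}
```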
@nabr ah, so ppixels will only work if the shader renders to the canvas, presumably..
a|x
@toneburst
This is what I do: I try different things out, and start with something very simple.
I managed to add my own shader to the Conway example in the Processing examples. I also added a brush-size control using the ControlP5 library.
Fragment Shader:
Still very much a WIP.
I'm currently trying to do all the various parts of the projects in separate sketches, before trying to stitch them together in some way.
a|x
@toneburst Ctrl+O will format Code :)
https://en.wikipedia.org/wiki/Markdown
.PDE file?
Thanks @nabr. It's more nicely-formatted now :)
a|x
I've made some progress.
This nicely applies a bloom effect to the output of the conway shader.
My hope is to replace the negate shader, from the PostFX CustomShaderEffect example with my own, but for the moment, I'm just using the negate shader from the example.
Unfortunately, if I uncomment the line
.custom(negatePass)
I get an OpenGL error
ERROR: 0:12: Regular non-array variable 'vertColor' may not be redeclared
Is there a way to fix this, @cansik?
a|x
Oops, I did accidentally redeclare the varying 'vertColor' in negateFrag.glsl.
I think I was messing around with it, and didn't realise I'd saved it.
Now it works :) Time to try my own shader.
a|x
So, now I have:
And my shader:
Which doesn't work. The ppixels texture in the feedback shader seems to contain nothing, so changing the feedback amount just mixes between the output of the first pass and black, making the image darker. I'd intended to create a kind of 'trails' effect.
Does this mean I have to manage my own FBOs manually, in order to get feedback working?
I'm able to use ppixels for feedback, while I do the initial drawing of the Conway shader, before I apply the PostFX passes.
a|x
Aha.. if I comment out the line
supervisor.clearPass(pass);
in my feedbackPass class declaration, it works...
I've added a function to my feedbackPass class to allow me to change one of the shader uniforms from the draw loop in the main sketch. No idea if this is the correct way to do this, as I'm unfamiliar with Java (and OOP in general, to my shame).
FeedbackPass.pde:
And the main sketch .pde:
a|x
@cansik though I've sort of got feedback working, I'd really like to be able to access the previous frame's data from inside the custom pass class instance, so I'm not relying on ppixels.
You mentioned saving the texture for the next frame. That's exactly what I need to do :)
I guess I'd change the shader to something like:
Any tips on how I'd do that in my custom pass class definition?
I've added a new PGraphics, as I guess I'll need to render to this at some stage, and bind it to the old_texture uniform in my shader. Not sure how I'd go about that though.
Any tips for a processing noob gratefully accepted.
a|x
I have this updated custom pass definition:
I think I just need to work out how to copy the pass pixels to the background texture on line 38. Not sure how to go about that, though.
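One option I'm considering (no idea yet if it's idiomatic): since both are PGraphics, just draw one into the other instead of touching pixel arrays. Something like this, with oldTex and pass being the fields from my pass class:

```java
// copy this frame's pass result into the stored texture, staying on the GPU
oldTex.beginDraw();
oldTex.image(pass, 0, 0);  // pass is the PGraphics holding the current result
oldTex.endDraw();
```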
a|x
Too easy...
This seems to work :)
I have a feeling that setting 'oldTex' to 'pass' like this (line 45) may not be the fastest way to do it.
I tried arrayCopy, but got a NullPointerException.
a|x
Incidentally, is there a way to stop Markdown attempting to insert links whenever there's an "@Override" line in my code?
a|x
@toneburst @cansik Any chance you guys know what's happening here - https://forum.processing.org/two/discussion/22385/reaction-diffusion-using-glsl-works-different-from-normal
OK, next step: I need to get an image from a webcam into a shader.
It doesn't look like an instance of the Capture object can have a shader attached to it directly, in the same way a PGraphics instance can.
I guess this means I need to copy the pixels from the capture to a PGraphics instance. Sounds like this might be expensive, though. Do I literally have to loop through the whole pixel array of the Capture, copying to the PGraphics?
Maybe there's a better way to do the copy, all in one go?
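For reference, the one-go approach I'm imagining (assuming the standard Video library; Capture extends PImage, so image() can draw it directly):

```java
import processing.video.*;  // standard Processing Video library

Capture cam;
PGraphics pg;

void setup() {
  size(640, 480, P2D);
  cam = new Capture(this, 640, 480);
  cam.start();
  pg = createGraphics(640, 480, P2D);
}

void draw() {
  if (cam.available()) cam.read();
  // draw the capture frame into the PGraphics in a single call,
  // avoiding any per-pixel loop in the sketch
  pg.beginDraw();
  pg.image(cam, 0, 0, pg.width, pg.height);  // also rescales if sizes differ
  pg.endDraw();
  image(pg, 0, 0);
}
```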
a|x
Got image capture working :)
a|x
Still having issues with feedback within the custom shader passes, though. I'm going to post about that in a new thread.
a|x
@Lord_of_the_Galaxy I'm a complete Processing noob, I'm afraid. I've messed around with Grey Scott RD shaders in the past, but using Quartz Composer. I'm struggling with implementing feedback effects properly in Processing at the moment, so we're probably in similar positions.
Hope you get the problem sorted.
a|x
@toneburst
I went a few pages back in this thread. I don't know how you all live with this copy-paste attitude, then setting up a .pde for testing (10 minutes gone already).
It's 2017; can you zip it?
So I (or someone) can unzip it and start debugging? Sorry, it's just me, and I'm lazy as hell. I mean, if someone fears getting a virus or something: GitHub, MediaFire, Google Drive etc. now have shared folders, so users can preview the files before they get zipped.
Best
Good points. I'm new to this forum, so don't know how things are done here.
https://www.dropbox.com/s/fkgh0zu2yhrpxdi/Conway_Webcam.zip?dl=0
Here's a zip of the file.
a|x
It runs very slowly on my old MacBook Pro. I'm pretty sure it's the PGraphics copying required for texture feedback that's slowing things to a crawl. A very similar setup would run at 60fps in Quartz Composer, even on my ancient laptop.
a|x
Also, it doesn't handle different camera output formats well.
a|x
@toneburst
"Don't know how things are done here." You're the first one to zip it. I'm also new to this forum, and it makes me sick to copy-paste stuff until I get something running.
Thank you, man!
I mean, the users here have been doing it the other way for the past 10 years. A PHP or whatever script to zip up a source file would take 10 lines of code?
No problem! I like to keep things in separate files, which makes it more difficult to paste the whole thing into a post here. It's getting a bit long for that, anyway, as you observed ;)
a|x
I'm kinda thinking I should rewrite the whole thing to use native OpenGL textures and FBOs wherever possible. That way, maybe the whole thing can be kept on the GPU.
No real idea if this is possible, though.
a|x
I'm also quite interested in Geometry Shaders. I've seen examples of their use in Processing, but don't know how easy it is to achieve.
a|x
@toneburst
Yes, there's a bug in your code. @cansik knows a quick fix, I think.
It wasn't working on my PC; I changed a few lines to make it work.
https://processing.org:8443/tutorials/video/
You have different declarations, like P2D and then P3D. I found it's best, while debugging your sketch, to use literal numbers in size(number, number, P3D)
and then createGraphics(number, number), instead of width and height. Those values can be evaluated during startup and "miss" the setup; the default is 100x100 px, so you would run a createGraphics, or a box(size) object, at a lower resolution, etc.
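A tiny illustration of what I mean (the same literal numbers in both calls, so the sizes can't drift apart):

```java
PGraphics pg;

void setup() {
  size(800, 600, P3D);                 // literal numbers in size()
  pg = createGraphics(800, 600, P3D);  // matching literals, not width/height
}

void draw() {
  background(0);
  image(pg, 0, 0);
}
```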
But it already looks crazy and fun. Keep digging!
@nabr Hmm.. thanks for that.
I'm not sure it's improved things on my system, though it's quite a cool glitch effect. The camera feed stutters and 'sticks' in quite a nice way, though I think it might give me a bit of a headache if I looked at it for too long.
It's cool there's a way to force the captured image to a particular size, though. That's what I was looking for, in fact.
I'll have a look tomorrow, and see if it works better on my (much more recent) work machine.
Thanks again!
a|x
@toneburst
yes,
so only this part works, but there's something wrong with the "filter". Run FX gives me the conwayPass only; on/off switches between the cam output and the Conway shader.
@nabr thanks. It's for a student at my work, in fact. She has an idea for an installation for her final project, and I'm trying to implement it in Processing, since that's what her group are taught.
a|x
@toneburst
The feedback looks great! What's the goal here: to make the Conway shader work on top of the video output?
@nabr thanks re. the feedback! I got a bit sidetracked into messing around with that for a while. Still have a little idea I'd like to integrate into it.
Am I right in thinking that copying the pixels in the custom PostFX pass definition is a performance bottleneck? I can't imagine it's done on the GPU.
a|x