I have a question about something we're all familiar with: anti-aliasing.
Specifically post-processing anti-aliasing, not render-time MSAA or anything like that.
What I mean is just a flat picture of pixels (or texels), no vertices or anything, with the same function running on every one of them.
Each fragment has to know its position on screen, use that information to map to a color in some "shared storage", and also be able to look up the colors of neighbouring pixels from that storage. Very importantly, this storage is persistent, or at least there is some way for the shader to pick up next frame where it left off, maybe by writing back over the texture, array, buffer or whatever. I'm very new to GLSL, as you might understand.
So the output on screen gets blurrier and blurrier as time goes by, rather than being a static, single-pass processed picture, as it would be if the shader started over from the same unprocessed input every frame.
And not because you're doing incrementally more work/passes every frame, but because what the shader is working on is "persistent" even after it's shown on screen.
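To make it concrete, here is a rough sketch of the kind of fragment shader I imagine, assuming the application side keeps two textures and "ping-pongs" them: render into texture A while sampling texture B, then swap, so last frame's output becomes this frame's input. All the names (uPrev, uTexelSize, vUV) are made up by me:

```glsl
#version 330 core

uniform sampler2D uPrev;   // last frame's processed output, fed back by the app
uniform vec2 uTexelSize;   // 1.0 / resolution, to step to neighbouring pixels
in vec2 vUV;               // this fragment's position on screen, in [0,1]
out vec4 fragColor;

void main() {
    // the fragment's own previous color from the "shared storage"
    vec4 center = texture(uPrev, vUV);

    // look up the four direct neighbours from the same persistent texture
    vec4 sum = texture(uPrev, vUV + vec2( uTexelSize.x, 0.0))
             + texture(uPrev, vUV + vec2(-uTexelSize.x, 0.0))
             + texture(uPrev, vUV + vec2(0.0,  uTexelSize.y))
             + texture(uPrev, vUV + vec2(0.0, -uTexelSize.y));

    // nudge towards the neighbour average; since this output is sampled
    // as uPrev next frame, the picture blurs a little more every frame
    fragColor = mix(center, sum * 0.25, 0.2);
}
```

If I understand correctly, the persistence itself would live outside GLSL, in how the application wires up the framebuffers, but the shader is what reads and rewrites the storage each frame.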
So why would I want to do this?
Well, it's not the most exciting thing, but it's something I could learn from and tweak into more exciting things if it works.
And if it's impossible to do this with GLSL, then I can set aside any thoughts about endeavours in it.