
  • Shadertoy shader not rendering until screen window is resized if surface.setResizable(true)

    Interesting, thanks for testing. The mouseY does behave a bit differently than on Shadertoy's website, even though it's clamped. On Shadertoy's site the Y values change, and the view starts rotating vertically, about a third of the way down; in Processing the rotation doesn't start until roughly two thirds of the way to the bottom.

    For me:

    Win 10, 64 bit

    DEVICE ... AMD Mobility Radeon HD 5800 Series

    GLSL ..... #version 440 core / 4.40.0

    GL ....... 4.5

  • Shadertoy shader not rendering until screen window is resized if surface.setResizable(true)

    I'm trying to figure out DwShadertoy's implementation of Shadertoy shaders, but there are two big issues I've run into. The first is that the window won't render until it is resized when setResizable is true. The second is that the mouse Y coordinates are not mapped correctly: mouseY seems to be shifted two thirds of the screen down, and trying to compensate for the shift doesn't seem to have any effect.

    See the example script "WetStone" included in pixelflow for reference.

     if (mousePressed) {
       toy.set_iMouse(mouseX, height-1-mouseY, mouseX, height-1-mouseY);
     }
    

    This is the default in all the example scripts and I'm trying to figure out why that is and why it doesn't work.
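The reason for the height-1-mouseY in those examples: Processing's mouseY is measured downward from the top-left corner, while Shadertoy's iMouse.y is measured upward from the bottom-left, so the value has to be mirrored against the sketch height. A standalone sketch of just that mapping (the class and method names are illustrative, not from the library):

```java
// Illustrative helper: map Processing's top-left-origin mouseY to
// Shadertoy's bottom-left-origin iMouse.y convention.
public class MouseMap {

    static int toShadertoyY(int height, int mouseY) {
        // Pixel row 0 (top in Processing) becomes row height-1 in Shadertoy.
        return height - 1 - mouseY;
    }

    public static void main(String[] args) {
        System.out.println(toShadertoyY(720, 0));   // top of a 720px window -> 719
        System.out.println(toShadertoyY(720, 719)); // bottom -> 0
    }
}
```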

    /**
     * 
     * PixelFlow | Copyright (C) 2017 Thomas Diewald - www.thomasdiewald.com
     * 
     * https://github.com/diwi/PixelFlow.git
     * 
     * A Processing/Java library for high performance GPU-Computing.
     * MIT License: https://opensource.org/licenses/MIT
     * 
     */
    
    
    
    import com.thomasdiewald.pixelflow.java.DwPixelFlow;
    import com.thomasdiewald.pixelflow.java.imageprocessing.DwShadertoy;
    
    //
    // Shadertoy Demo:   https://www.shadertoy.com/view/ldSSzV
    // Shadertoy Author: https://www.shadertoy.com/user/TDM
    //
    
    DwPixelFlow context;
    DwShadertoy toy;
    
    public void settings() {
      size(1280, 720, P2D);
      smooth(0);
    }
    
    public void setup() {
      surface.setResizable(true);
    
      context = new DwPixelFlow(this);
      context.print();
      context.printGL();
    
      toy = new DwShadertoy(context, "data/WetStone.frag");
    
      frameRate(60);
    }
    
    public void draw() {
      if (mousePressed) {
        toy.set_iMouse(mouseX, height-1-mouseY, mouseX, height-1-mouseY);
      }
    
      toy.apply(g);
    
      String txt_fps = String.format(getClass().getSimpleName() + "   [size %d/%d]   [frame %d]   [fps %6.2f]", width, height, frameCount, frameRate);
      surface.setTitle(txt_fps);
    }
    
  • Implementing a gooey effect with a shader

    Maybe I should have been more clear. My question is specific to this application, not GLSL in general. I've been looking at GLSL on shadertoy and the syntax is quite different especially at the top of the files when things are declared and initialized.

  • How to port Shadertoy multipass GLSL shader

    Does the PixelFlow library's Shadertoy support work on Android?

  • How to port shaders from shaderfog, shadertoy to Processing?

    As a very confused, complete OpenGL beginner who is struggling to get started, I have a few questions.

    1. Is it possible to use GLSL directly in processing or do you need to port to PShader?

    2. Is there any tutorial for how to port shaders to PShader or use publicly available shaders? I want to start learning opengl shaders, but I don't know where to start since it seems like everywhere I can find examples, the language or implementation is different from what processing uses.

    3. How do I take a shader from Shadertoy and play around with it in Processing?

    4. Is there anybody making easy-to-follow tutorials for shaders, the way Dan Shiffman does?

    5. How universal are shaders? For example, if I learn how to write shaders in Processing, can I take them and use them with openFrameworks?

  • Some ShaderToy shaders converted for Processing

    Yeah, nice!

    You could tweak it a bit, if you like to.
    Shadertoy uses WebGL 2.0, which means #version 300 es.
    Processing defaults to GLSL #version 120, or #version 100 es on GLES.

    That means gl_FragColor is deprecated there. It works as-is, which is great, and I don't think there is any performance loss. But just to be nice to the compiler, the best way would be to prepend a string with the version directive and the main function before every shader, like Shadertoy does.
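As a sketch of what such a prepended header plus wrapper could look like for a GLSL ES 3.00 target (the uniforms shown are the usual Shadertoy ones, and the mainImage body is only a placeholder):

```glsl
#version 300 es
precision mediump float;

uniform vec3  iResolution;
uniform float iTime;

// gl_FragColor is gone in #version 300 es; declare an output instead.
out vec4 fragColor;

void mainImage(out vec4 c, in vec2 fragCoord) {
    c = vec4(fragCoord / iResolution.xy, 0.5 + 0.5 * sin(iTime), 1.0);
}

void main() {
    mainImage(fragColor, gl_FragCoord.xy);
}
```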

    Second, if you are not going to use a 3D shape like a cube or a sphere,
    you don't really need a post-processing step like Processing's filter() method. That reads all the pixels from the framebuffer --> draws a rect --> sends it through the shader to the color buffer. Calling shader() and rect() directly will be a performance boost.

    @cansik has a nice lib for this kind of stuff; consider making a pull request:
    https://github.com/cansik/processing-postfx
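A minimal sketch of that direct shader() + rect() approach (the shader filename here is illustrative):

```processing
PShader fx;

void setup() {
  size(640, 360, P2D);
  fx = loadShader("effect.glsl"); // hypothetical fragment shader
  noStroke();
}

void draw() {
  // Instead of filter(fx), which first copies the framebuffer to a
  // texture, bind the shader and draw one full-window rect directly.
  shader(fx);
  rect(0, 0, width, height);
  resetShader();
}
```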

  • Some ShaderToy shaders converted for Processing

    Hi all

    Sharing some of the shaders I have converted from ShaderToy to work with Processing 3... Will be adding more as I do them.

    Currently there is:

    • 3 types of VHS effect
    • Glitch
    • Sobel edge detection with neon outline
    • Convert different colours to different ASCII characters

    https://github.com/georgehenryrowe/ShadersForProcessing3

    :)

  • PShader Library Example, Processing 3, "Cannot compile fragment shader..."

    @kfrajer
    https://processing.org/reference/PShader.html
    is related to the Processing examples; if you are unhappy, make a pull request on GitHub :)

    Processing's filter() is a shader. You render your scene (the so-called framebuffer) to a texture, then you map this texture to your window as a (full-window) quad, and on this texture you do your image-processing steps.

    In the case of a blur filter (there are many solutions, but the most common one is): you create a convolution kernel and, for each pixel, sample and weight its neighbours according to that kernel.
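For example, one common kernel is the 3x3 box blur. A minimal fragment-shader sketch, assuming the texture and texOffset uniforms that Processing supplies to filter shaders:

```glsl
uniform sampler2D texture;  // the rendered scene, bound by filter()
uniform vec2 texOffset;     // (1/width, 1/height), set by Processing

varying vec4 vertTexCoord;

void main() {
    // 3x3 box blur: average the pixel with its 8 neighbours.
    vec4 sum = vec4(0.0);
    for (int i = -1; i <= 1; i++) {
        for (int j = -1; j <= 1; j++) {
            sum += texture2D(texture, vertTexCoord.st + vec2(i, j) * texOffset);
        }
    }
    gl_FragColor = sum / 9.0;
}
```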

    @mnoble
    Yes, a shader is a whole new world when you come from a Java/CPU background, but only at first sight; it's another programming language, and underneath you have to do the same math.

    On the CPU side, JOGL is the Java binding that sends data from the CPU to the GPU.
    On the GPU side, GLSL (the OpenGL Shading Language) is the language that takes that data and instructs the GPU to perform specific tasks.

    https://en.wikipedia.org/wiki/Graphics_pipeline#Shader
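For instance, the CPU-to-GPU handoff in a Processing sketch is just setting uniforms on a PShader (the shader file and uniform names here are illustrative):

```processing
PShader sh;

void setup() {
  size(640, 360, P2D);
  sh = loadShader("frag.glsl"); // hypothetical fragment shader
  noStroke();
}

void draw() {
  // CPU side (Java/JOGL underneath): upload data as uniforms...
  sh.set("u_time", millis() / 1000.0f);
  sh.set("u_mouse", (float) mouseX, (float) mouseY);
  // ...GPU side (GLSL): the shader reads them while coloring every pixel.
  shader(sh);
  rect(0, 0, width, height);
}
```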

    " I want to blur moving images "

    sounds like you are searching for
    https://www.google.de/search?q=Motion+Blur

  • Contributing P5js

    @kfrajer did a little java-shadertoy "getting started" - hello world for you :)

    PImage img;
    
    void setup() {
      size(640, 360);
      img = createImage(width, height, ARGB);
    }
    
    void draw() {
      img.loadPixels();
      // fill every pixel with an animated RGB gradient
      for (int x = img.pixels.length; x --> 0; ) {
        img.pixels[x] = color(-(254.99f - x % img.width),
                              254.99f - x / img.height * .5,
                              254.99f * (sin(frameCount / 2 * .1f) * 0.5 + 0.5),
                              255.0f);
      }
      img.updatePixels();
      image(img, 0, 0);
    }
    
  • How to port Shadertoy multipass GLSL shader

    Maybe it can be useful... try the PixelFlow library; it implements Shadertoy.

  • gpu fft/histogram with shader

    Yes, they render it to a texture and read it back with texture(iChannel0, vec2(iuv.x, 0.1)).x; the x (red) channel holds the JavaScript AudioBuffer data.

    It should look similar to this:

    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 1, 1, 0, gl.RGB, gl.UNSIGNED_BYTE, new Uint8Array([audioValue, 0, 0])); // r = audio sample, g = 0, b = 0
    

    Without any knowledge of JOGL it could be hard. I have some knowledge, but I'm out of time; also, I have never done it, but I'm going to soon, and I will report back if I make some progress.

    Otherwise, look at the source code of Shadertoy; it's very descriptive.

  • gpu fft/histogram with shader

    Hi guys... I'm working on this shader... it generates a real-time FFT/histogram of an audio (image, video) input.

    Now the problem: it works, but I created it on Shadertoy, and while trying to import it into Processing I've hit a little problem. I have no idea how Shadertoy passes the audio/image input to the GLSL texture function (iChannel0 ...). In other words: how do I create the appropriate input to generate the same effect I generate here: https://www.shadertoy.com/

    To try it, you can copy and paste the following code into any of the examples that the site displays and then press the play button at the bottom left. Then click the iChannel0 box, select Music, and choose one of the proposed tracks.

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    
        vec2 uv = fragCoord.xy / iResolution.xy;
    
        vec2 res = floor(400.0*vec2(10.15, iResolution.y/iResolution.x));
    
        vec3 col = vec3(0.);
    
        vec2 iuv = floor( uv * res )/res;
    
        float fft = texture(iChannel0, vec2(iuv.x, 0.1)).x; 
        fft *= fft;
    
        if(iuv.y<fft) {
            col = vec3(255.,255.,255.-iuv.y*255.);
        }
    
        fragColor = vec4(col/255.0, 1.0);
    }
    

    below the code for the implementation in processing:

    import ddf.minim.*;
    import com.thomasdiewald.pixelflow.java.DwPixelFlow;
    import com.thomasdiewald.pixelflow.java.imageprocessing.DwShadertoy;
    
    Minim minim;
    AudioInput in;
    DwPixelFlow context;
    DwShadertoy toy;
    PGraphics2D pg_src;
    
    void settings() {
      size(1024, 820, P2D);
      smooth(0);
    }
    void setup() {
      surface.setResizable(true);
    
      minim = new Minim(this);
      in = minim.getLineIn();
    
      context = new DwPixelFlow(this);
      context.print();
      context.printGL();
    
      toy = new DwShadertoy(context, "fft.frag");
      pg_src = (PGraphics2D) createGraphics(width, height, P2D);
    
      pg_src.smooth(0);
    
      println(PGraphicsOpenGL.OPENGL_VENDOR);
      println(PGraphicsOpenGL.OPENGL_RENDERER);
    }
    
    void draw() {
      pg_src.beginDraw();
      pg_src.background(0);
      pg_src.stroke(255);
      //code to convert audio input to a correct input for the function :toy.set_iChannel(0, pg_src);
      pg_src.endDraw();
    
      toy.set_iChannel(0, pg_src);
      toy.apply(this.g);
    
      String txt_fps = String.format(getClass().getSimpleName()+ "   [size %d/%d]   [frame %d]   [fps %6.2f]", width, height, frameCount, frameRate);
      surface.setTitle(txt_fps);
    }
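The missing piece in draw() above is filling pg_src so that the shader's texture(iChannel0, vec2(iuv.x, 0.1)).x lookup returns the spectrum. Shadertoy stores the spectrum in the red channel of a short texture, so one (untested) approach is to write each normalized FFT band into the red channel of a pixel row before handing pg_src to set_iChannel. The packing itself, sketched in plain Java with illustrative names:

```java
// Illustrative: pack normalized FFT band magnitudes (0..1) into ARGB
// pixel ints, one band per pixel, magnitude in the red channel --
// mirroring how Shadertoy's audio texture is read via .x in GLSL.
public class FftPack {

    static int[] packBands(float[] bands) {
        int[] pixels = new int[bands.length];
        for (int i = 0; i < bands.length; i++) {
            float clamped = Math.min(1f, Math.max(0f, bands[i]));
            int r = Math.round(clamped * 255f);
            pixels[i] = 0xFF000000 | (r << 16); // opaque alpha, red = magnitude
        }
        return pixels;
    }

    public static void main(String[] args) {
        int[] px = packBands(new float[] {0f, 0.5f, 1f});
        System.out.printf("%08x %08x %08x%n", px[0], px[1], px[2]);
        // ff000000 ff800000 ffff0000
    }
}
```

In the sketch, such a row could be copied into pg_src.pixels (followed by updatePixels()) instead of the background/stroke placeholder.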
    

    and the GLSL code for the fft.frag file (it is the same as before, but I added the uniforms that Shadertoy declares automatically, plus what the PixelFlow library needs to communicate with fft.frag):

    #version 150
    
    #define SAMPLER0 sampler2D // sampler2D, sampler3D, samplerCube
    #define SAMPLER1 sampler2D // sampler2D, sampler3D, samplerCube
    #define SAMPLER2 sampler2D // sampler2D, sampler3D, samplerCube
    #define SAMPLER3 sampler2D // sampler2D, sampler3D, samplerCube
    
    uniform SAMPLER0 iChannel0; // image/buffer/sound    Sampler for input textures 0
    uniform SAMPLER1 iChannel1; // image/buffer/sound    Sampler for input textures 1
    uniform SAMPLER2 iChannel2; // image/buffer/sound    Sampler for input textures 2
    uniform SAMPLER3 iChannel3; // image/buffer/sound    Sampler for input textures 3
    
    uniform vec3  iResolution;           // image/buffer          The viewport resolution (z is pixel aspect ratio, usually 1.0)
    uniform float iTime;                 // image/sound/buffer    Current time in seconds
    uniform float iTimeDelta;            // image/buffer          Time it takes to render a frame, in seconds
    uniform int   iFrame;                // image/buffer          Current frame
    uniform float iFrameRate;            // image/buffer          Number of frames rendered per second
    uniform vec4  iMouse;                // image/buffer          xy = current pixel coords (if LMB is down). zw = click pixel
    uniform vec4  iDate;                 // image/buffer/sound    Year, month, day, time in seconds in .xyzw
    uniform float iSampleRate;           // image/buffer/sound    The sound sample rate (typically 44100)
    uniform float iChannelTime[4];       // image/buffer          Time for channel (if video or sound), in seconds
    uniform vec3  iChannelResolution[4]; // image/buffer/sound    Input texture resolution for each channel
    
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 uv = fragCoord.xy / iResolution.xy;
        vec2 res = floor( 1000.0*vec2(1.0, iResolution.y/iResolution.x) );
    
        vec3 col = vec3(0.);
    
        vec2 iuv = floor( uv * res )/res;
    
        float f = 1.11-abs(fract(uv.x * res.x));
        float g = 1.11-abs(fract(uv.y * res.y));
    
        float fft = texture(iChannel0, vec2(iuv.x, 0.2)).x; 
        fft = 1.*fft*fft;
    
        if(iuv.y<fft) {
            col = vec3(74.0,82.0,4.0);
        }
    
        fragColor = vec4(col/255.0, 1.0);
    }
    
  • Frosted Glass (Blurry Glass) on Processing

    Could you copy an area of the screen into a small rectangular PGraphics and then run the shader on the PGraphics?

    I believe there are some "frosted glass" shaders listed on shadertoy -- but I am no shader expert.

  • Resources for learning glsl / shaders

    @Per

    Hello, yes, no problem! You are welcome. To make myself clear, because I have the feeling I should: there are
    many shading languages out in the wild.
    3D modeling software like Blender uses OpenGL,
    3ds Max uses a DirectX variant, iOS has Metal,
    Windows has HLSL,
    OpenGL uses GLSL,
    and there are Vulkan, SPIR-V, OpenCL, etc...

    The point is that the underlying concept is always the same (yes, MATH); it is just the language that changes.

    So in Processing a vector of 3 floats is new PVector(0, 0, 0); in GLSL it is vec3(0., 0., 0.);

    So there is absolutely nothing special about GLSL! It is just how history evolved. You can do exactly the same operations in Processing on the CPU; the only difference is that shaders are faster. But the concept is the same!

    If you have already read The Book of Shaders, you have a good understanding of the language; what you need now is the knowledge of how to communicate between the Processing CPU side and the GLSL GPU side. So what are you asking for?

    How to draw a line in a fragment shader? Then go on Shadertoy, but those shaders are limited to the world of the fragment shader... and in 2017 there are far more ways to solve a problem. Shadertoy and The Book of Shaders are playgrounds for enthusiasts: can I implement a whole GUI only in the fragment shader, or a camera system, and so on? And again, it is just the language; people took CPU concepts and translated them for the GPU fragment shader.

    So if you are asking how the communication with the GPU works, because you want to precompute a line, add a camera, handle user input, etc., then there is no way around learning at least JOGL.

    Are there good JOGL books out there? I don't know;
    I just want to point this out.

    This is the repo of the guy who implemented shaders in Processing, I think: https://github.com/codeanticode/pshader-experiments

    Good luck.

  • Resources for learning glsl / shaders

    Thanks for your input. I'm browsing through Shadertoy a lot and modifying code; that's great. I'm looking for something to read in parallel with this, on the web or in a book.

    Thanks once again!

  • Resources for learning glsl / shaders

    Hello

    Yes, Processing is using JOGL. And I'm pretty sure it supports this, at least on Windows.

    The extension starts with OpenGL 4.3, which is the last version available on Macs (on Windows you can use 4.5): https://www.khronos.org/opengl/wiki/Compute_Shader (On the other hand, it also runs on the Java Virtual Machine; I'm not sure here.)

    Just in case, some history: back when OpenGL was mostly used by scientists and computer-science students, the OpenGL community (not the Khronos club like now, also thanks to Apple and Steve) had to ask the major graphics-card companies, who usually owned the license to the newest tech, whether OpenGL could use e.g. the shadow-map source code, or AA, whatever (back then NVIDIA was only one of many manufacturers, and the newest game in the store was 34 MB big). So they had to make an extension, and it still looks like this: https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_compute_shader.txt

    Back then there were no shaders, only extensions, and most of the work was done by the CPU, so the programmer would code something like OpenGL.getMeTheExtension("Shadowmap") and tweak some parameters, like in Processing on the CPU.

    My point: The Book of Shaders is great! But it relies on WebGL 1.0 (2007!) and uses the very first shader version, #version 100... so yes, the math is the same, but most of the stuff can now be done in a much more performant and easier way.

    If I am thinking about an art installation (trillions of particles), I want to get the stuff out as fast as possible; it's different if you are a cross-platform programmer who also wants to support some "older" smartphones.

    If you really want to get into shader programming, you have to go through the history of computer science. And yes, do some raw OpenGL programming.

    And second, just grab a shader from Shadertoy and start experimenting with it; that is what everyone does.

    Good luck, and keep us posted!

  • Resources for learning glsl / shaders

    Also note that the PixelFlow library now includes a direct wrapper for Shadertoy demos:

  • (SOLVED)glitch shader, it compiles but doesn't work!!

    If you are working with a ShaderToy demo, then the PixelFlow library has a wrapper for them so that (I believe, untested) you don't even need to adapt them:

    https://forum.processing.org/two/discussion/comment/106411/#Comment_106411

  • (SOLVED)glitch shader, it compiles but doesn't work!!

    Hi guys... I'm working on this glitch shader I found on this website: https://www.shadertoy.com/view/ls3Xzf

    Now I am editing part of the code to make it compile in Processing.

    It actually compiles, but it doesn't work.

    processing code:

    PShader glitch;
    PImage img, glitchImg;
    
    void setup() {
      size(540, 540, P3D);
      img=loadImage("pietroGogh.jpg");
      glitchImg=loadImage("glitch.jpg");
      glitch = loadShader("glitch2.glsl");
      stroke(255);
      background(0);
      glitch.set("iResolution", new PVector(800., 600., 0.0) );
    }
    
    void draw() {
      strokeWeight(1);
      //glitch.set("iGlobalTime", random(0, 60.0));
      glitch.set("iTime", millis());
      if (random(0.0, 1.0) < 0.4) {
        shader(glitch);
      }
      image(img, 0, 0);
    }
    

    glsl code:

    uniform sampler2D texture;
    uniform vec2 iResolution;
    uniform float iTime;
    //varying vec4 vertTexCoord;
    
    
    float rand () {
        return fract(sin(iTime)*1e4);
    }
    
    void main()
    {
        vec2 uv = gl_FragCoord.xy / iResolution.xy;
    
        vec2 uvR = uv;
        vec2 uvB = uv;
    
        uvR.x = uv.x * 1.0 - rand() * 0.02 * 0.8;
        uvB.y = uv.y * 1.0 + rand() * 0.02 * 0.8;
    
        // 
        if(uv.y < rand() && uv.y > rand() -0.1 && sin(iTime) < 0.0)
        {
            uv.x = (uv + 0.02 * rand()).x;
        }
    
        vec4 c;
        c.r = texture(texture, uvR).r;
        c.g = texture(texture, uv).g;
        c.b = texture(texture, uvB).b;
    
    
        float scanline = sin( uv.y * 800.0 * rand())/30.0; 
        c *= 1.0 - scanline; 
    
        //vignette
        float vegDist = length(( 0.5 , 0.5 ) - uv);
        c *= 1.0 - vegDist * 0.6;
    
        gl_FragColor = c;
    }
    

    Does anyone have any idea why this happens? Sorry if there are trivial mistakes, but I'm quite new to shader languages. Thank you all!!