gpu fft/histogram with shader

edited November 2017 in GLSL / Shaders

Hi guys, I'm working on a shader that generates a real-time FFT/histogram of an audio (or image/video) input.

The problem: it works, but I created it on ShaderToy, and while importing it into Processing I hit a snag. I have no idea how ShaderToy passes the audio/image input to the GLSL `texture()` function (iChannel0, ...). In other words: how do I create the appropriate input so I can reproduce the effect I get here:

To try it, copy and paste the following code into any of the examples the site displays and press the play button at the bottom left. Then click the iChannel0 box, select Music, and choose one of the proposed tracks.

void mainImage(out vec4 fragColor, in vec2 fragCoord) {

    vec2 uv = fragCoord.xy / iResolution.xy;

    // resolution of the histogram grid (columns x rows)
    vec2 res = floor(400.0*vec2(10.15, iResolution.y/iResolution.x));

    vec3 col = vec3(0.);

    // snap uv to the cell grid, so every pixel in a column samples the same FFT bin
    vec2 iuv = floor( uv * res )/res;

    float fft = texture(iChannel0, vec2(iuv.x, 0.1)).x;
    fft *= fft;

    if (iuv.y < fft) {
        col = vec3(255., 255., 255. - iuv.y*255.);
    }

    fragColor = vec4(col/255.0, 1.0);
}
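To see what the `floor(uv * res) / res` trick in the shader is doing, here is a quick plain-Java sanity check of that quantization (a standalone helper for illustration, not part of the shader): every uv inside the same grid cell snaps to the cell's lower edge, which is why a whole column of pixels reads the same FFT bin.

```java
public class BinQuantize {
    // Mirrors the shader's iuv = floor(uv * res) / res:
    // all uv values inside one cell collapse to the same coordinate,
    // so the whole column samples a single FFT bin.
    static float quantize(float uv, float res) {
        return (float) Math.floor(uv * res) / res;
    }

    public static void main(String[] args) {
        float res = 400.0f;
        // two x positions inside the same cell map to the same bin coordinate
        System.out.println(quantize(0.5012f, res)); // 0.5
        System.out.println(quantize(0.5020f, res)); // 0.5
        // the next cell starts one 1/res step later
        System.out.println(quantize(0.5030f, res)); // 0.5025
    }
}
```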

Below is the code for the Processing implementation:

import ddf.minim.*;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.imageprocessing.DwShadertoy;
import processing.opengl.PGraphics2D;

Minim minim;
AudioInput in;
DwPixelFlow context;
DwShadertoy toy;
PGraphics2D pg_src;

void settings() {
  size(1024, 820, P2D);
}

void setup() {
  minim = new Minim(this);
  in = minim.getLineIn();

  context = new DwPixelFlow(this);

  toy = new DwShadertoy(context, "fft.frag");
  pg_src = (PGraphics2D) createGraphics(width, height, P2D);
}

void draw() {
  // TODO: convert the audio input into a texture that can be passed to
  // toy.set_iChannel(0, pg_src);

  toy.set_iChannel(0, pg_src);
  toy.apply(this.g); // render the shader to the sketch canvas

  String txt_fps = String.format(getClass().getSimpleName() + "   [size %d/%d]   [frame %d]   [fps %6.2f]", width, height, frameCount, frameRate);
  surface.setTitle(txt_fps);
}
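The missing step in draw() is turning the audio into pixels the shader can sample. One way (a sketch of the idea, not a tested solution): run Minim's `ddf.minim.analysis.FFT` on the line-in buffer each frame, map the band magnitudes to grayscale, and write them into one scanline of `pg_src`. The core mapping, as a plain-Java helper; `scale` is a made-up gain you would tune by ear, since Minim's raw band magnitudes are not normalized to 0..1:

```java
public class SpectrumRow {
    // Maps FFT band magnitudes to opaque grayscale ARGB pixels,
    // one texel per band, clamped to the 0..255 byte range.
    static int[] spectrumToRow(float[] bands, float scale) {
        int[] row = new int[bands.length];
        for (int i = 0; i < bands.length; i++) {
            int v = (int) (bands[i] * scale * 255.0f);
            v = Math.max(0, Math.min(255, v));              // clamp to a byte
            row[i] = 0xFF000000 | (v << 16) | (v << 8) | v; // opaque gray ARGB
        }
        return row;
    }
}
```

In the sketch you would then call `fft.forward(in.mix)`, copy `fft.getBand(i)` for each band into a `float[]`, convert it with `spectrumToRow(...)`, and write the result into a row of `pg_src.pixels` (between `loadPixels()` and `updatePixels()`) so that the shader's `texture(iChannel0, vec2(iuv.x, 0.1))` lookup lands on it.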

And the GLSL code for the fft.frag file (the same as before, but I added the uniforms that ShaderToy declares automatically, plus some PixelFlow library plumbing so the sketch can communicate with fft.frag):

#version 150

#define SAMPLER0 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER1 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER2 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER3 sampler2D // sampler2D, sampler3D, samplerCube

uniform SAMPLER0 iChannel0; // image/buffer/sound    Sampler for input textures 0
uniform SAMPLER1 iChannel1; // image/buffer/sound    Sampler for input textures 1
uniform SAMPLER2 iChannel2; // image/buffer/sound    Sampler for input textures 2
uniform SAMPLER3 iChannel3; // image/buffer/sound    Sampler for input textures 3

uniform vec3  iResolution;           // image/buffer          The viewport resolution (z is pixel aspect ratio, usually 1.0)
uniform float iTime;                 // image/sound/buffer    Current time in seconds
uniform float iTimeDelta;            // image/buffer          Time it takes to render a frame, in seconds
uniform int   iFrame;                // image/buffer          Current frame
uniform float iFrameRate;            // image/buffer          Number of frames rendered per second
uniform vec4  iMouse;                // image/buffer          xy = current pixel coords (if LMB is down). zw = click pixel
uniform vec4  iDate;                 // image/buffer/sound    Year, month, day, time in seconds in .xyzw
uniform float iSampleRate;           // image/buffer/sound    The sound sample rate (typically 44100)
uniform float iChannelTime[4];       // image/buffer          Time for channel (if video or sound), in seconds
uniform vec3  iChannelResolution[4]; // image/buffer/sound    Input texture resolution for each channel

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec2 res = floor( 1000.0*vec2(1.0, iResolution.y/iResolution.x) );

    vec3 col = vec3(0.);

    vec2 iuv = floor( uv * res )/res;

    float f = 1.11 - abs(fract(uv.x * res.x));
    float g = 1.11 - abs(fract(uv.y * res.y));

    float fft = texture(iChannel0, vec2(iuv.x, 0.2)).x;
    fft = fft*fft;

    if (iuv.y < fft) {
        col = vec3(74.0, 82.0, 4.0);
    }

    fragColor = vec4(col/255.0, 1.0);
}


  • Yes, they render the audio into a texture: `texture(iChannel0, vec2(iuv.x, 0.1)).x` reads the red channel, which carries the JavaScript AudioBuffer data.

    It should look similar to this (the audio samples uploaded into the red channel of a byte texture, with the other channels unused):

    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, width, 1, 0, gl.RGB, gl.UNSIGNED_BYTE, audioBufferBytes);
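    For reference, ShaderToy feeds its music channel from a Web Audio AnalyserNode, whose `getByteFrequencyData()` maps magnitudes in decibels linearly from `[minDecibels, maxDecibels]` (defaults -100 and -30 in the Web Audio spec) to bytes 0..255. A rough Java equivalent of that byte conversion, useful if you want Processing-side values to line up with what the shader expects; the function name and structure are my own:

    ```java
    public class AudioTexture {
        // Rough equivalent of AnalyserNode.getByteFrequencyData():
        // a dB magnitude is mapped linearly from [minDb, maxDb] to [0, 255]
        // and clamped at both ends.
        static int dbToByte(float db, float minDb, float maxDb) {
            float t = (db - minDb) / (maxDb - minDb);
            int v = (int) (t * 255.0f);
            return Math.max(0, Math.min(255, v));
        }

        public static void main(String[] args) {
            System.out.println(dbToByte(-100f, -100f, -30f)); // silence -> 0
            System.out.println(dbToByte(-30f, -100f, -30f));  // full scale -> 255
        }
    }
    ```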

    Without any knowledge of JOGL it could be hard. I have some knowledge, but I'm out of time. I've never done it either, but I'm going to soon, and I'll report back if I make some progress.

    Otherwise, look at the source code of ShaderToy; it's very descriptive.

  • So yeah, I'm looking into it. It shouldn't be that hard, apart from me searching the Java docs for hours.

    I'm also not sure whether what I can offer will match your needs and expectations.

    I code in Processing for various reasons, including the experience; otherwise I work in native OpenGL/C++, and also WebGL. I just find myself copy-pasting while learning, and here I often have a hard time figuring out what is going on. No pain, no gain.

    Here is how to load a texture, and sound is very similar.

    If you like, we can collaborate:

    • How would you get a buffer from an AudioInput?
    • Will you use a mic as input, or a *.wav file?
    • Are you on Mac or PC?
    • Do you need support for low-end smartphones?

    e.g. you could search for a "load buffer from audio file" equivalent.

    I'm still out of time; maybe on the weekend I'll get a working .pde together.
