Data packing into a Texture

edited April 2014 in GLSL / Shaders

I need to send some data to my fragment shader, and it seems the best way to do this is by packing it into a texture and then unpacking it in the shader. Normally this is done with a floating-point texture, but it seems Processing doesn't expose floating-point textures.

Does anyone have a good resource / basic version on how to pack data into a texture and unpack it in a shader?

Thanks

Answers

  • What kind of data do you need to send? There is also the set() command.

  • Thanks clankill3r, I need to send x,y position data. I have been using the set() command to send two arrays (one for x, one for y), but I can only have a maximum of 60 elements in each array. It seems this is a standard limit on uniform array size (depending on hardware), and if you need to send more data you pack it into a texture somehow.

    Any advice?

  • I think you would get a quicker answer asking on a different forum with more focus on shaders. If you do, I would love to see the answer.

    You can also set a texture with set(). Note that shaders work with normalized values (0.0 to 1.0).

  • Sorry for the delay, I think I got this sorted actually! I'll try to dig it out and post it...

  • Cool. This forum could use some nice shader examples.

  • OK, so I'm not entirely sure I got this 100% sorted, but I think that is down to my lack of knowledge about shader languages and how to properly unpack the data. I definitely managed to pack the data, send it, and get hold of it in the shaders!

    I used the ToonShader example as the starting point for my sketch. I have a series of points with x,y positions, and I pack those into a texture/PImage by assigning each point to a pixel: the x goes into the red channel and the y into the green channel (blue is unused). The pixel array effectively becomes an array of x,y values.

    However, you need to remap the coordinates: your width and height must be mapped into the 0-255 range to fit the RGB scheme of a pixel, because each channel is only 8 bits and can't hold a value above 255. Confusingly, in the shader that 0-255 range arrives as 0.0-1.0 (samplers return normalized values), so when you unpack the data you multiply it back by the width or height of your sketch. I am still not sure I have the proper way of unpacking and using the data yet.
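The pack/unpack round trip described here can be sketched in plain Java (standalone, outside Processing; the helper names are mine, but the channel arithmetic mirrors what Processing's color(), red() and green() do, and the shader-side divide-by-255 happens in the sampler):

```java
public class PackDemo {
    static final int W = 1280, H = 720; // sketch dimensions

    // Pack an (x, y) position into one 8-bit-per-channel ARGB pixel:
    // x -> red, y -> green, blue unused. Each channel holds 0-255.
    static int pack(float x, float y) {
        int r = Math.round(x / W * 255.0f);
        int g = Math.round(y / H * 255.0f);
        return (0xFF << 24) | (r << 16) | (g << 8); // alpha = 255, blue = 0
    }

    // Unpack the way the shader does: the sampler returns each channel
    // normalized to 0.0-1.0, so multiply back by the sketch size.
    static float unpackX(int argb) {
        return (((argb >> 16) & 0xFF) / 255.0f) * W;
    }

    static float unpackY(int argb) {
        return (((argb >> 8) & 0xFF) / 255.0f) * H;
    }

    public static void main(String[] args) {
        int px = pack(640.0f, 360.0f);
        System.out.println(unpackX(px)); // close to 640, but quantized
        System.out.println(unpackY(px)); // close to 360
        // The cost of 8 bits per channel: only 256 distinct x values,
        // so positions snap to roughly W/255 (about 5 px) steps.
    }
}
```

This also shows why the recovered positions are slightly off: the quantization to 256 steps loses sub-pixel precision.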

    I won't go through all the stuff written in the shaders; there are far better, more knowledgeable explanations out there than I could write, and I have learnt from and used them.

    In the end I created a moving blobby mass similar to a Voronoi pattern (but not quite), which I really like. There does seem to be a glitch or two when each point resets itself, but I can confirm that it runs waaaaay faster as a shader: the regular version could only run 6 points at 640x480 at 60fps, compared to 128 points at 1280x720 at 60fps, or at 1920x1080 at 30fps, with the shader. To run at that size without shaders means 1fps, and maybe not even that!

    I'm sure there are better examples out there of passing data in textures to a vertex or fragment shader (I think mine is a bit intense for a shader, as I've heard loops and if/else statements slow them down a lot), but this is what I used it for, and I am pleased with the results in speeding up my sketch and impressed by the power of shaders.

    Below is the code if it is of any use to anyone, or if anyone has any improvements. All feedback welcomed:

    Processing Code:

    PShader myShader;
    float sfx, sfy;   // sketch dimensions, passed to the shader as uniforms
    PImage tex;       // 128x1 data texture: one packed point per pixel

    void setup() {
      size(1280, 720, P3D);
      noStroke();
      fill(255);
      sfx = float(width);
      sfy = float(height);
      myShader = loadShader("ShaderFrag.glsl", "ShaderVert.glsl");

      // Pack a random (x, y) per point: x -> red, y -> green, blue unused.
      // Coordinates are remapped from sketch space into the 0-255 range.
      tex = createImage(128, 1, ARGB);
      tex.loadPixels();
      for (int p = 0; p < tex.pixels.length; p++) {
        tex.pixels[p] = color(random(width)/width*255.0, random(height)/height*255.0, 0, 255.0);
      }
      tex.updatePixels();

      // Debug: the blue channel is unused, so this should print zeros.
      for (int r = 0; r < tex.pixels.length; r++) {
        println(blue(tex.pixels[r]));
      }
    }

    void draw() {
      shader(myShader);
      myShader.set("sfx", sfx);
      myShader.set("sfy", sfy);
      myShader.set("tex", tex);

      background(0);
      directionalLight(204, 204, 204, 0, 0, -1);

      // Full-screen rect for the fragment shader to render onto.
      rect(0, 0, width, height);

      // Animate: step each point's packed x by one channel unit,
      // wrapping back to the left edge once it passes 255.
      tex.loadPixels();
      for (int p = 0; p < tex.pixels.length; p++) {
        float r = red(tex.pixels[p]) + 1.0;
        float g = green(tex.pixels[p]);
        float b = blue(tex.pixels[p]);
        if (r > 255.0) {
          r = 0.0;
          //g = random(height)/height*255.0;
        }
        tex.pixels[p] = color(r, g, b);
      }
      tex.updatePixels();

      println(frameRate);
    }
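The per-frame update in draw() above amounts to the following, sketched in plain Java without Processing's color helpers (the bit-shifting is my assumption about what color()/red()/green() do internally with an ARGB int):

```java
public class DriftDemo {
    // One animation step for a packed point, mirroring the draw() loop:
    // advance the red (x) channel by 1 and wrap once it passes 255.
    static int step(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        r = r + 1;
        if (r > 255) r = 0; // point re-enters at the left edge
        return (0xFF << 24) | (r << 16) | (g << 8);
    }

    public static void main(String[] args) {
        int p = (0xFF << 24) | (254 << 16) | (100 << 8);
        p = step(p); // red: 254 -> 255
        p = step(p); // red: 255 -> wraps to 0
        System.out.println((p >> 16) & 0xFF); // prints 0
        System.out.println((p >> 8) & 0xFF);  // green untouched: prints 100
    }
}
```

Because x only takes 256 distinct values, each step moves a point by roughly width/255 pixels, which may explain the visible glitch when points reset.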
    

    Vertex Shader:

    #define PROCESSING_LIGHT_SHADER

    // modelview, normalMatrix and normal are left over from the
    // ToonShader example; only transform is actually used here.
    uniform mat4 modelview;
    uniform mat4 transform;
    uniform mat3 normalMatrix;

    attribute vec4 vertex;
    attribute vec3 normal;

    void main() {
      // Vertex in clip coordinates
      gl_Position = transform * vertex;
    }
    

    Fragment Shader:

    #ifdef GL_ES
    precision mediump float;
    precision mediump int;
    #endif

    uniform float sfx;      // sketch width
    uniform float sfy;      // sketch height

    uniform sampler2D tex;  // 128x1 texture of packed point positions

    void main() {
      float minDist = 10000.0;

      // Walk every texel of the 128x1 data texture and find the
      // distance to the nearest packed point. (Sampling at
      // (x + 0.5)/128.0 would hit texel centers exactly.)
      for (float y = 0.0; y < 1.0; y++) {
        for (float x = 0.0; x < 128.0; x++) {
          vec2 position = vec2(x/128.0, y/1.0);
          vec4 col = texture2D(tex, position);
          // Channels arrive normalized to 0.0-1.0, so scale back
          // up to sketch coordinates.
          vec2 v = vec2(col.r*sfx, col.g*sfy);
          float d = distance(v, gl_FragCoord.xy);
          if (d < minDist) {
            minDist = d;
          }
        }
      }

      // Shade each fragment by its distance to the nearest point.
      float c = minDist/255.0;
      gl_FragColor = vec4(c, c*0.5, 0.0, 1.0);
    }
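One common refinement (not from this thread, just a standard trick) addresses the 8-bit quantization: spread each coordinate across two channels, giving 65536 steps instead of 256. A minimal Java sketch of the idea, with x using red as the high byte and green as the low byte:

```java
public class Pack16Demo {
    // Pack x into two 8-bit channels: red = high byte, green = low byte.
    static int packX16(float x, float maxX) {
        int v = Math.round(x / maxX * 65535.0f);
        int hi = (v >> 8) & 0xFF;
        int lo = v & 0xFF;
        return (0xFF << 24) | (hi << 16) | (lo << 8);
    }

    // The matching GLSL unpack would be roughly:
    //   float x = (col.r * 255.0 * 256.0 + col.g * 255.0) / 65535.0 * sfx;
    static float unpackX16(int argb, float maxX) {
        float hi = ((argb >> 16) & 0xFF) / 255.0f;
        float lo = ((argb >> 8) & 0xFF) / 255.0f;
        float v = hi * 255.0f * 256.0f + lo * 255.0f;
        return v / 65535.0f * maxX;
    }

    public static void main(String[] args) {
        int px = packX16(640.0f, 1280.0f);
        System.out.println(unpackX16(px, 1280.0f)); // ~640, sub-pixel accurate
    }
}
```

The trade-off is that each point now needs two channels per coordinate, so a single RGBA pixel holds one 16-bit x and one 16-bit y instead of leaving channels spare.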
    
  • edited August 2014

    Sounds cool. I will look into it when I have time.

    You could look at this (not for the physics but how to use OpenCL):

    https://github.com/codeanticode/clphysics

    CLPhysics is a library for the Processing programming language and environment that allows you to run physics simulations, such as particle systems under gravitational attraction, with OpenCL. It is based on the traer.physics library by Jeffrey Traer Bernstein.

    I think that could be a more proper way.

  • OK thanks, I'll try and take a look soon.
