samplerCube and ambient reflection

edited July 2015 in GLSL / Shaders

Hello,

I'm trying to make a GLSL shader to do reflection in a skybox. I created my skybox in Processing with a textured cube, but I can't find a way to set a samplerCube uniform in my shader.

Any idea how to do this?

Thanks :)

Answers

  • Can nobody help me, or just give me a way to do this?

  • edited September 2014

    You're lucky! I just figured out how to do this today. I am using a cubemap shader for it.

    First go to the DomeProjection example and copy its "cubemapfrag.glsl" and "cubemapvert.glsl" files to your sketch's data folder.

    The following code creates the necessary cubemap shader object. Make sure the cubemapName string variable corresponds to the directory where your skybox textures are located. They need to be named "posx.jpg", "negx.jpg", "posy.jpg", "negy.jpg", "posz.jpg", "negz.jpg" for this to work.

      PGL pgl = beginPGL();
      // create the OpenGL-based cubeMap
      IntBuffer envMapTextureID = IntBuffer.allocate(1);
      pgl.genTextures(1, envMapTextureID);
      pgl.activeTexture(PGL.TEXTURE1);
      pgl.enable(PGL.TEXTURE_CUBE_MAP);  
      pgl.bindTexture(PGL.TEXTURE_CUBE_MAP, envMapTextureID.get(0));
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_S, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_T, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_R, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MIN_FILTER, PGL.LINEAR);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MAG_FILTER, PGL.LINEAR);
    
    
      //Load in textures
      String cubemapName = "cubemap_texture";
    
      String[] textureNames = { "posx.jpg", "negx.jpg", "posy.jpg", "negy.jpg", "posz.jpg", "negz.jpg" };
      PImage[] textures = new PImage[textureNames.length];
      for (int i=0; i<textures.length; i++) {
        textures[i] = loadImage("cubemaps/" + cubemapName + "/" + textureNames[i]);
    
        //Uncomment this for smoother reflections. This downsamples the textures
       // textures[i].resize(20,20);
      }
    
      // put the textures in the cubeMap
      for (int i=0; i<textures.length; i++) {
        int w = textures[i].width;
        int h = textures[i].height;
        textures[i].loadPixels();
        int[] pix = textures[i].pixels;
        pgl.texImage2D(PGL.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, PGL.RGBA, w, h, 0, PGL.RGBA, PGL.UNSIGNED_BYTE, java.nio.IntBuffer.wrap(pix));
      }
    
      endPGL();
    
      // Load cubemap shader.
      PShader cubemapShader = loadShader("cubemapfrag.glsl", "cubemapvert.glsl");
      cubemapShader.set("cubemap", 1);
    

    Once you have created your shader you can call it in the draw function as follows:

    void draw() {
      background(0);
      shader(cubemapShader);
      //mesh is a PShape object
      shape(mesh);
      resetShader();
    }
    

    This will work on any geometry, including complex models generated with Toxiclibs or Hemesh, as long as they are converted to a PShape object. I think the Processing 3D geometry primitives will work as well.
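
    For example (an untested sketch), a built-in primitive wrapped as a PShape should be usable the same way:

    PShape mesh = createShape(SPHERE, 100); // built-in primitive as a PShape
    shader(cubemapShader);
    shape(mesh);
    resetShader();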

    Make sure smooth() is not called in setup(); somehow this causes the shader to not display the mesh.

    You can get some really cool metal/reflection effects if you experiment with the textures. This site has some pretty sweet skybox/cubemap textures: http://www.humus.name/index.php?page=Textures

    Good luck!

  • edited April 2017

    Thank you very much for your help, that sort of works.

    I now have to understand this code, because I'm really new to OpenGL.

    BTW, I have inverted colors and an inverted reflection; if you know why, I'm interested.


  • Just some information about the issues noted in my previous post:

    - Inverted colors: careful, Processing works in ARGB and OpenGL in RGBA, so you have to convert your texture.
    - Inverted reflection: Processing's origin is top/left while OpenGL's origin is bottom/left, so you have to transform your texture or invert the way you read your texture buffer in the fragment shader file.

  • Great! Didn't know this. Do you have the code that does the inversion/transform?

  • ARGB to RGBA (before buffering the texture in OpenGL):

      int[] pix = textures[i].pixels;
      int[] rgbaPixels = new int[pix.length];
      for (int j = 0; j < pix.length; j++) {
        int pixel = pix[j];
        rgbaPixels[j] = 0xFF000000 | ((pixel & 0xFF) << 16) | ((pixel & 0xFF0000) >> 16) | (pixel & 0x0000FF00);
      }

    To flip the texture on Y, in the fragment shader you just have to invert the Y component of the reflection vector.
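
    For instance, a minimal sketch of that in the fragment shader (using the same uniform/varying names as the cubemap shader discussed here):

      vec3 r = vec3(reflectDir.x, -reflectDir.y, reflectDir.z); // negate Y before sampling
      gl_FragColor = textureCube(cubemap, r);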

  • edited September 2014

    Faster alternative for aRGB[] to RGBa[] conversion: =:)

    /**
     * aRGB to RGBa (v1.1)
     * by GoToLoop (2014/Sep)
     *
     * forum.processing.org/two/discussion/7039/
     * samplercube-and-ambient-reflection
     */
    
    static final int toRGBa(int[] argb)[] {
      return toRGBa(argb, new int[argb.length]);
    }
    
    static final int toRGBa(int[] argb, int[] rgba)[] {
      int i = 0;
      for (int p: argb)  rgba[i++] = p<<010 | p>>>030;
      return rgba;
    }
    
    void setup() {
      size(300, 200, JAVA2D);
      noLoop();
      background(#2080A0);
    
      loadPixels();
      color[] rgba = toRGBa(pixels);
    
      println(hex(pixels[0]) + " -> " + hex(rgba[0]));
    }
    
  • Nice bit of bit shifting, GoToLoop. You could combine the two methods; this way the rgba array can be re-used once created.

    // Returns an array of colours in argb format
    // The array is created if rgba is null or of different length
    int[] toRGBa(int[] argb, int[] rgba) {
      if(rgba == null ||  rgba.length != argb.length)
        rgba = new int[argb.length];
      int i = 0;
      for (int p: argb)  rgba[i++] = p<<8 | p>>24;
      return rgba;
    }
    
  • edited September 2014

    1st of all, v1.0 was bugged. Bitshift operator >> fills up vacant bits on the left w/ the negative bit (MSB)! :-&
    Replaced >> w/ >>> in v1.1. I wonder what other programming languages w/o >>> can do about it! 8-}
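
    A quick sketch of the difference:

    int p = 0xFF000000;     // a pixel w/ only the alpha byte set (MSB = 1)
    println(hex(p >> 24));  // FFFFFFFF -> the sign bit smeared into the vacant bits
    println(hex(p >>> 24)); // 000000FF -> vacant bits zero-filled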

    You could combine the 2 methods.

    It forces the caller to use a spurious null as the 2nd argument every time: toRGBa(pixels, null); I-)
    The method w/ 1 argument is the main entry, while the 1 w/ 2 parameters is its specialized overloaded form.

    In the 2nd form, we can pass the source as the destination array too, modifying it in situ:

    loadPixels();
    toRGBa(pixels, pixels);
    updatePixels();
    

    And let's say we've got a bunch of 800x600 PImage objects.
    Rather than letting toRGBa() auto instantiate a new array for each call, we create our own int[],
    and pass it around as 2nd argument, so it's reused for all of them!
    Zero trash left to be collected by Java's GC later on! \m/
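
    A minimal sketch of that reuse pattern (assuming some already-loaded, same-sized PImage[] imgs):

    int[] shared = new int[800 * 600];         // 1 buffer shared by all of them
    for (PImage img : imgs) {
      img.loadPixels();
      int[] rgba = toRGBa(img.pixels, shared); // reused; no garbage per call
      // ... upload or process rgba here ...
    }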

    This way the rgba can be re-used once created.

    I don't get what you mean by that. Both forms return a ready-to-use int[] array! o->

  • edited September 2014

    @quark, as a compromise between your unified toRGBa() and my demand for no spurious 2nd null argument, I've come up w/ a varargs-based method re-writing: ;))

    /**
     * aRGB to RGBa II (v1.12)
     * by GoToLoop (2014/Sep)
     *
     * forum.processing.org/two/discussion/7039/
     * samplercube-and-ambient-reflection
     */
    
    static final int toRGBa(int[]... dots)[] {
      if (dots == null)  return null;
    
      int argb[] = dots[0]
        , rgba[] = dots.length > 1? dots[1] : null
        , i = 0;
    
      if (rgba == null || rgba.length != argb.length)
        rgba = new int[argb.length];
    
      for (int p: argb)  rgba[i++] = p<<010 | p>>>030;
      return rgba;
    }
    
    void setup() {
      size(300, 200, JAVA2D);
      noLoop();
      background(#2080A0);
    
      loadPixels();
      print(hex(pixels[0]) + " -> ");
    
      color[] rgba = toRGBa(pixels);
      println(hex(rgba[0]));
    
      //toRGBa(pixels, pixels);
      //updatePixels();
      //println(hex(pixels[0]));
    }
    
  • I never thought of the >>> either :\">

    I also didn't explain how the array gets reused:

    // Declare an int array reference
    int[] rgba = null;
    
    // The first time we call the method it creates the array rgba
    // after that rgba is reused unless the argb array changes size
    rgba = toRGBa(argb, rgba);
    
    
    // Returns an array of colours in argb format
    // The array is created if rgba is null or of different length
    int[] toRGBa(int[] argb, int[] rgba) {
      if(rgba == null ||  rgba.length != argb.length)
        rgba = new int[argb.length];
      int i = 0;
      for (int p: argb)  rgba[i++] = p<<8 | p>>>24;
      return rgba;
    }  
    

    Ooops ... I think I have gone OTT on this :-B

    This only needs to be done once per image, and unless the images are the same size the rgba array is never reused, so perhaps this is all we need:

    // Returns an array of colours in rgba format
    int[] toRGBa(int[] argb) {
      int[]  rgba = new int[argb.length];
      int i = 0;
      for (int p: argb)  rgba[i++] = p<<8 | p>>>24;
      return rgba;
    }  
    

    Sigh...

  • edited September 2014

    Ooops ... I think I have gone OTT on this.

    As explained before, I've split toRGBa() into 2 methods in order to have regular & specialized versions! :-@
    I've taken advantage of Java's overloading parameter signature, so it transparently invokes the most appropriate method, depending on whether the invoker wishes to pass his/her own target array or not! >:/
    Anyways, that doesn't matter anymore since you've forced me to write a unified varargs almighty version! \m/

  • I didn't force you to do anything :(

  • I was just kidding! I even suffixed it w/ emoticon: \m/

  • Hello,

    I have another issue with my skybox reflection: when I move the camera, it seems that the skybox follows the camera rotation. If anyone can help me, that would be great, because I haven't found the solution in a week of searching. Here is the code:

    import peasy.*;
    import java.nio.IntBuffer;

    IntBuffer envMapTextureID;
    PShader cubemapShader;
    PeasyCam cam;
    PImage tex;

    void setup() {
      size(800, 640, P3D);
      frameRate(60);
      tex = loadImage("city.jpg");

      PGL pgl = beginPGL();
      // create the OpenGL-based cubeMap
      envMapTextureID = IntBuffer.allocate(1);
      pgl.genTextures(1, envMapTextureID);
      pgl.activeTexture(PGL.TEXTURE1);
      pgl.enable(PGL.TEXTURE_CUBE_MAP);
      pgl.bindTexture(PGL.TEXTURE_CUBE_MAP, envMapTextureID.get(0));
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_S, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_T, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_R, PGL.CLAMP_TO_EDGE);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MIN_FILTER, PGL.LINEAR);
      pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MAG_FILTER, PGL.LINEAR);

      String[] textureNames = { "posx.jpg", "negx.jpg", "posy.jpg", "negy.jpg", "posz.jpg", "negz.jpg" };
      PImage[] textures = new PImage[textureNames.length];
      for (int i=0; i<textures.length; i++) {
        textures[i] = loadImage(textureNames[i]);
        //Uncomment this for smoother reflections. This downsamples the textures
        //textures[i].resize(20,20);
      }

      // put the textures in the cubeMap, converting ARGB -> RGBA on the way
      for (int i=0; i<textures.length; i++) {
        int w = textures[i].width;
        int h = textures[i].height;
        textures[i].loadPixels();
        int[] pix = textures[i].pixels;
        int[] rgbaPixels = new int[pix.length];
        for (int j = 0; j < pix.length; j++) {
          int pixel = pix[j];
          rgbaPixels[j] = 0xFF000000 | ((pixel & 0xFF) << 16) | ((pixel & 0xFF0000) >> 16) | (pixel & 0x0000FF00);
        }
        pgl.texImage2D(PGL.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, PGL.RGBA, w, h, 0, PGL.RGBA, PGL.UNSIGNED_BYTE, java.nio.IntBuffer.wrap(rgbaPixels));
      }

      endPGL();

      // Load cubemap shader.
      cubemapShader = loadShader("cubemapfrag.glsl", "cubemapvert.glsl");
      cubemapShader.set("cubemap", 1);

      cam = new PeasyCam(this, width/2.0, height/2.0, 0, 180);
    }

    void draw() {
      background(0);
      pushMatrix();
      translate(width/2.0, height/2.0, 0);
      scale(1000);
      noStroke();
      TexturedCube(tex);
      popMatrix();
      PVector vCam = new PVector(cam.getPosition()[0], cam.getPosition()[1], cam.getPosition()[2]);

      pushMatrix();
      shader(cubemapShader);
      translate(width/2.0, height/2.0, 0);
      box(50);
      //sphere(50);
      resetShader();
      popMatrix();
    }

    void TexturedCube(PImage tex) {
      beginShape(QUADS);
      texture(tex);

      // +Z "front" face
      normal(0, 0, 0);
      vertex(-1, -1,  1, 1024, 1024);
      vertex( 1, -1,  1, 2048, 1024);
      vertex( 1,  1,  1, 2048, 2048);
      vertex(-1,  1,  1, 1024, 2048);

      // -Z "back" face
      vertex( 1, -1, -1, 3072, 1026);
      vertex(-1, -1, -1, 4096, 1026);
      vertex(-1,  1, -1, 4096, 2046);
      vertex( 1,  1, -1, 3072, 2046);

      // +Y "bottom" face
      vertex(-1,  1,  1, 1026, 2048);
      vertex( 1,  1,  1, 2046, 2048);
      vertex( 1,  1, -1, 2046, 3072);
      vertex(-1,  1, -1, 1026, 3072);

      // -Y "top" face
      vertex(-1, -1, -1, 1026, 0);
      vertex( 1, -1, -1, 2046, 0);
      vertex( 1, -1,  1, 2046, 1024);
      vertex(-1, -1,  1, 1026, 1024);

      // +X "right" face
      vertex( 1, -1,  1, 2048, 1026);
      vertex( 1, -1, -1, 3072, 1026);
      vertex( 1,  1, -1, 3072, 2046);
      vertex( 1,  1,  1, 2048, 2046);

      // -X "left" face
      vertex(-1, -1, -1,    0, 1026);
      vertex(-1, -1,  1, 1024, 1026);
      vertex(-1,  1,  1, 1024, 2046);
      vertex(-1,  1, -1,    0, 2046);

      endShape();
    }


    Vertex

    uniform mat4 transform;
    uniform mat4 modelview;
    uniform mat3 normalMatrix;

    attribute vec4 vertex;
    attribute vec3 normal;

    varying vec3 reflectDir;

    void main() {
      gl_Position = transform * vertex; // transform is the modelviewprojection matrix

      vec3 eyeNormal = normalMatrix * normal;

      // modelview is used here instead of mvp, otherwise the vertex normals pick up
      // the perspective induced by the camera and the reflection is wrong
      vec4 vertexPos = modelview * vertex;

      vec3 vertexEyePos = vec3(vertexPos.x, vertexPos.y, vertexPos.z);

      reflectDir = reflect(vertexEyePos, eyeNormal);
    }


    Fragment

    uniform samplerCube cubemap;

    varying vec3 reflectDir;

    void main() {
      vec3 refle = vec3(reflectDir.x, -reflectDir.y, reflectDir.z);
      gl_FragColor = textureCube(cubemap, refle);
    }

  • Maybe @kosowski or @codeanticode have an idea? :)

  • For information, I found a solution by passing the camera matrix to the shader. Here is the code if anybody is interested: https://github.com/P0ulp/cubemapGL/tree/master

    If my code is ugly, feel free to correct me :)
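
    For reference, the gist of it is something like this (a sketch; the exact uniform name in the repo may differ, and see the transpose caveat discussed further down):

    PGraphics3D g3 = (PGraphics3D) g;
    cubemapShader.set("cameraMatrix", g3.camera, false); // false = send the full 4x4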

  • edited March 2015

    You don't need to create the face images beforehand... you can extract them from the skymap image directly. Thus you can use any cross-layout texture with this demo:

    // top-left corners of the 6 faces in a 4x3 horizontal-cross layout
    // (posx, negx, posy, negy, posz, negz order)
    int[] ix = { tex.width/2, 0, tex.width/4, tex.width/4, tex.width/4, 3*tex.width/4 };
    int[] iy = { tex.height/3, tex.height/3, 0, 2*tex.height/3, tex.height/3, tex.height/3 };
    int iw = tex.width/4;
    int ih = tex.height/3;

    PImage[] textures = new PImage[6];
    for (int i=0; i<textures.length; i++) {
      textures[i] = tex.get(ix[i], iy[i], iw, ih);
    }

  • Hello, after digging into this subject I have one question about my code, because it works in a strange way.

    In theory, to do reflection in the shader, I have to compute the reflect vector in camera space (to avoid passing the camera position to the shader) and then convert this reflect vector to world space to sample the right texel from the OpenGL cubemap texture (see http://antongerdelan.net/opengl/cubemaps.html).

    The issue I have is that my code (https://github.com/P0ulp/cubemapGL/) works when I convert the reflect vector with the modelview matrix, but it doesn't with modelviewInv, and according to my (poor) knowledge of shaders, to convert a vector from eye space to world space I need the modelviewInv matrix, don't I?
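
    In vertex-shader terms, the two variants I'm comparing look roughly like this (a sketch following the naming in my shaders):

    vec3 reflectEye = reflect(vertexEyePos, eyeNormal);        // reflection in eye space
    reflectDir = vec3(modelview * vec4(reflectEye, 0.0));      // this one works...
    //reflectDir = vec3(modelviewInv * vec4(reflectEye, 0.0)); // ...this one, per the tutorial, doesn't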

    So is it a bug in Processing? Can I have an explanation?

    Cheers,

  • Can anybody help me? A specialist like @codeanticode? :)

  • @p0ulp, I think the problem is due to the fact that, by default, the modelview is just the identity matrix because the vertex positions are already transformed. This is done because Processing batches vertices together to make rendering faster.

    There is some discussion in this issue: https://github.com/processing/processing/issues/2904

    You can disable the batching using hint(DISABLE_OPTIMIZED_STROKE), which results in the modelview matrix (and modelviewInv) being computed as one would normally expect.
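
    For example (a sketch):

    void setup() {
      size(800, 640, P3D);
      hint(DISABLE_OPTIMIZED_STROKE); // vertices are no longer pre-transformed, so the
                                      // modelview/modelviewInv uniforms behave as expected
      // ... rest of the setup ...
    }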

  • Thank you very much, Andre, for taking the time to answer me. About the github issue, I know it pretty well because I'm the creator of that github thread :)

    I tried using hint(DISABLE_OPTIMIZED_STROKE); but I get exactly the same issue: when I use the modelviewInv matrix to take my reflected vector from view space to world space, the x-axis seems inverted, and when I use the modelview it works. According to the tutorial (http://antongerdelan.net/opengl/cubemaps.html) that's not the right behavior (but I'm not an expert ...): modelviewInv is the matrix I have to use to go from view space to world space.

    Another strange behavior is that the matrix I get with "PGraphics3D g3 = (PGraphics3D)g; matCam = g3.modelview;" is not the same one I get in the shader through the uniform variable; since I'm new to shaders and OpenGL, I don't know if it's a bug or a total misunderstanding on my part.

    If you can shed some light into that shader / OpenGL darkness, it would be really great :)

    PS: I've committed the new version of my code, with the hint, to the same repo as in my last message.

  • Sorry to harass you, @codeanticode, but if you could have another look at my code and check that this is not an issue in OpenGL (so that it would be an issue in my understanding and my code), it would be really nice! :)

  • edited September 2015

    Hi @p0ulp, I set out to do the same project and would like to share some things I've figured out that are not mentioned in this post. I also read your github issue thread and the code you uploaded, which are linked in the previous posts.

    I did some tests and some digging in the Processing source code and found out that PMatrix3D stores matrices in row-major order, while OpenGL uses matrices in column-major order. The set() method in PShader that takes in a PMatrix3D doesn't transpose the matrix before sending it to the shader. This is the relevant part:

    public class PShader implements PConstants {
      // ...
      public void set(String name, PMatrix3D mat, boolean use3x3) {
        if (use3x3) {
          float[] matv = { mat.m00, mat.m01, mat.m02,
               mat.m10, mat.m11, mat.m12,
               mat.m20, mat.m21, mat.m22 };
          setUniformImpl(name, UniformValue.MAT3, matv);
        } else {
          float[] matv = { mat.m00, mat.m01, mat.m02, mat.m03,
               mat.m10, mat.m11, mat.m12, mat.m13,
               mat.m20, mat.m21, mat.m22, mat.m23,
               mat.m30, mat.m31, mat.m32, mat.m33 };
          setUniformImpl(name, UniformValue.MAT4, matv);
        }
      }
      // ...
    }
    

    This means that this code that was suggested to pass the modelviewInv matrix is not entirely correct:

    shader(cubemapShader);
    cubemapShader.set("modelviewInv", ((PGraphicsOpenGL) g).modelviewInv);
    

    You'd need to do something like this instead:

    shader(cubemapShader);
    PMatrix3D mvInv = ((PGraphicsOpenGL) g).modelviewInv.get();
    mvInv.transpose();  
    cubemapShader.set("modelviewInv", mvInv);  
    

    Of course there's probably a more efficient way to do that as you can read the matrix elements directly and maybe skip some steps.
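
    For example, one such shortcut (a sketch) builds the transposed matrix directly, skipping the extra get() and transpose() calls:

    PMatrix3D mv = ((PGraphicsOpenGL) g).modelviewInv;
    // pass mv's columns as rows, i.e. its transpose
    cubemapShader.set("modelviewInv", new PMatrix3D(
      mv.m00, mv.m10, mv.m20, mv.m30,
      mv.m01, mv.m11, mv.m21, mv.m31,
      mv.m02, mv.m12, mv.m22, mv.m32,
      mv.m03, mv.m13, mv.m23, mv.m33));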

    If you use hint(DISABLE_OPTIMIZED_STROKE), you can easily test that this works because either of these two lines in your vertex shader should behave the same way:

    gl_Position = modelviewInv * modelview * transform * vertex;  
    gl_Position = transform * vertex;
    

    Finally, about the reflection method itself:

    In most of the tutorials I read, as opposed to the one you referenced above, they use the reflection vector directly to sample the cubeMap texture. They don't multiply by the inverse of the view matrix at the end to "convert to world space".

    The way I see it this makes sense because the reflection vector we get from the reflect() operation extends from the reflecting object itself, and not from the position of the camera, so you don't need to do an inverse view operation to get back to world space.

    Anyway, if you still want to try that method you'll need the inverse of the view matrix and not the modelviewInv, which is a different thing. There's a cameraInv matrix available that you can get the same way as the modelviewInv. I suspect that one works as the view matrix but I couldn't say. If you try it let us know. Good luck.
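
    If you do try it, the vertex shader change would be roughly this (a sketch reusing the names from your vertex shader; cameraInv is assumed to be passed in, transposed, the same way as modelviewInv above):

    uniform mat4 cameraInv; // assumed to be set from ((PGraphicsOpenGL) g).cameraInv

    vec3 reflectEye = reflect(vertexEyePos, eyeNormal);
    reflectDir = vec3(cameraInv * vec4(reflectEye, 0.0)); // w = 0: rotation only, no translation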

  • Hi Felzam,

    And thank you very much for your reply. It opened up new ways of digging for me. But to be honest, I'm a bit confused about how Processing manages the projection / model / view matrix pipeline ... it's not clear at all.

    I made some tests as you advised, on a really simple sketch, and it seems to work, as you can see here: https://github.com/P0ulp/glTest

     
    gl_Position = modelviewInv * modelview * transform * vertex;  
    gl_Position = transform * vertex; 

    Behave the same way in hint(DISABLE_OPTIMIZED_STROKE) mode. And in hint(DISABLE_OPTIMIZED_STROKE) mode, I have to transpose my matrix before sending it to the shader, unlike when I'm not in that mode. So your note seems right :)

    In my reflection example,

     
    gl_Position = modelviewInv * modelview * transform * vertex;  
    gl_Position = transform * vertex; 

    Behave the same way in hint(DISABLE_OPTIMIZED_STROKE) mode too. But my reflection doesn't work at all with this transposed matrix in hint(DISABLE_OPTIMIZED_STROKE) mode.

    So I haven't found a solution to get this working ...

    About the method itself, it seems unavoidable to give the shader some information about the camera, because if you move the camera, the incident vector (the line between the eye and the vertex) isn't computed correctly, is it?

  • Hi @p0ulp,

    I don't think you need the camera position for this method. When you convert the position to eye coordinates (pos = modelview * vertex), that means you now have a vector that extends from the camera position to the object itself. That vector is exactly the same as the incident vector, which you use directly in the reflect() and refract() functions.

    Like I mentioned before, most tutorials for this that I've seen don't use the inverse view matrix at any point. The tutorial you linked to does, so if you want to use this method try sending the cameraInv matrix and using that instead of the modelviewInv matrix.

    For me it worked by using the position vector and the normal vector in eye coordinates directly without any other operations.

  • Hi Felzam,

    And thank you again for the time you've taken to help me! In fact, you're right: if I use the vertex in eye coordinates in the reflect() function, I get the right incident vector, but then the reflected vector I get from reflect() is in eye coordinates too.

    So when I move the camera, it's as if I move the cubemap too, and the reflection is wrong: do you know, or have you already seen, what I mean?

    So if you have any clue, I will really appreciate it :)
