(SOLVED) LowLevelGL & texture, it doesn't work and I really don't know why

edited April 2015 in GLSL / Shaders

Hello!

I would like to pass a PImage to a "low-level" shader, but it doesn't work...

Because it's tedious to test shader code, I put my Processing project in a zip and uploaded it here:

beginfill.com/processing/LowLevelGL_test.zip

I took the code from the LowLevelGL example, added a FloatBuffer holding the texture UVs, and used PShader.set() to send a PImage to the shader.

Here is the code (if you want to see it before getting the zip)

The Processing code:

// Draws a triangle using low-level OpenGL calls.
import java.nio.*;

PGL pgl;
PShader sh;

int vertLoc;
int colorLoc;
int uvLoc;

float[] vertices;
float[] colors;
float[] uv;

FloatBuffer vertData;
FloatBuffer colorData;
FloatBuffer uvData;

PImage img;

void setup() {
  size(640, 360, P3D);

  img = loadImage("img.jpg");

  // Loads the custom vertex/fragment shader pair.
  sh = loadShader("frag.glsl", "vert.glsl");

  vertices = new float[12];
  vertData = allocateDirectFloatBuffer(12);

  colors = new float[12];
  colorData = allocateDirectFloatBuffer(12);

  uv = new float[6];
  uvData = allocateDirectFloatBuffer(6);
}

void draw() {
  background(0);

  // The geometric transformations will be automatically passed 
  // to the shader.
  rotate(frameCount * 0.01, width, height, 0);

  updateGeometry();

  sh.set("texture",img);

  pgl = beginPGL();
  sh.bind();

  vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
  colorLoc = pgl.getAttribLocation(sh.glProgram, "color");
  uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");


  pgl.enableVertexAttribArray(vertLoc);
  pgl.enableVertexAttribArray(colorLoc);
  pgl.enableVertexAttribArray(uvLoc);

  pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
  pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
  pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);

  pgl.drawArrays(PGL.TRIANGLES, 0, 3);

  pgl.disableVertexAttribArray(vertLoc);
  pgl.disableVertexAttribArray(colorLoc);
  pgl.disableVertexAttribArray(uvLoc);
  sh.unbind();  

  endPGL();
}

void updateGeometry() {
  // Vertex 1
  vertices[0] = 0;
  vertices[1] = 0;
  vertices[2] = 0;
  vertices[3] = 1;
  colors[0] = 1;
  colors[1] = 0;
  colors[2] = 0;
  colors[3] = 1;
  uv[0] = 0;
  uv[1] = 0;

  // Corner 2
  vertices[4] = width/2;
  vertices[5] = height;
  vertices[6] = 0;
  vertices[7] = 1;
  colors[4] = 0;
  colors[5] = 1;
  colors[6] = 0;
  colors[7] = 1;
  uv[2] = 1;
  uv[3] = 0;

  // Corner 3
  vertices[8] = width;
  vertices[9] = 0;
  vertices[10] = 0;
  vertices[11] = 1;
  colors[8] = 0;
  colors[9] = 0;
  colors[10] = 1;
  colors[11] = 1;
  uv[4] = 0;
  uv[5] = 1;

  vertData.rewind();
  vertData.put(vertices);
  vertData.position(0);

  colorData.rewind();
  colorData.put(colors);
  colorData.position(0);  

  uvData.rewind();
  uvData.put(uv);
  uvData.position(0);  
}

FloatBuffer allocateDirectFloatBuffer(int n) {
  return ByteBuffer.allocateDirect(n * Float.SIZE/8).order(ByteOrder.nativeOrder()).asFloatBuffer();
}
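
A quick way to sanity-check that allocation helper outside the sketch is plain Java (the class name here is mine, not part of the project): Float.SIZE/8 is 4 bytes per float, and the buffer must be direct and in native byte order for the NIO path of the GL bindings.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferCheck {
    // same helper as in the sketch: n floats, direct, native byte order
    static FloatBuffer allocateDirectFloatBuffer(int n) {
        return ByteBuffer.allocateDirect(n * Float.SIZE / 8)
                         .order(ByteOrder.nativeOrder())
                         .asFloatBuffer();
    }

    public static void main(String[] args) {
        FloatBuffer b = allocateDirectFloatBuffer(4);
        b.rewind();
        b.put(new float[] {0f, 1f, 2f, 3f});
        b.position(0); // reset so GL would read from the start, as in updateGeometry()
        System.out.println(b.capacity()); // 4
        System.out.println(b.isDirect()); // true
        System.out.println(b.get(2));     // 2.0
    }
}
```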

The vertex shader

uniform mat4 transform;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 uv;

varying vec4 vertColor;
varying vec2 textureUv;

void main() {
  gl_Position = transform * vertex;    
  vertColor = color;
  textureUv = uv;
}

The fragment shader

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D texture;
varying vec4 vertColor;
varying vec2 textureUv;


void main() {
  gl_FragColor = texture2D(texture,textureUv) * vertColor;
}

Processing crashes when I set the PImage on the shader. If I remove that line, my code runs without any error (but the triangle is black, because there is no texture attached to the shader).

I looked into the PShader class to try to see what's wrong, but I really don't understand: it's very straightforward, it should work... I send a PImage to the PShader, and I'm absolutely sure my PImage is not null when I pass it, but when PShader tries to use the PImage with PGL, it finds a null object...

Any help is welcome! I've been on this for maybe 6-7 hours now...

Thanks!

Answers

  • edited April 2015 Answer ✓

    I finally found the solution :D

This is weird and I don't understand why it works like that (in the end I was trying everything, without any logic...).

You have to set the texture on the shader inside the beginPGL()/endPGL() block, and AFTER the call to PShader.bind().

It's very weird, because the code inside PShader.bind() checks the uniforms that were set BEFORE (?!) and sends them to the GPU.

Calling PShader.set("texture", myPImage) alone should just keep the reference until the next call to PShader.bind(); and at the end of PShader.bind(), every reference is destroyed... It's really, really weird...

    Anyway it works :)

    Here is the working code

    void draw() {
      background(255);
    
      // The geometric transformations will be automatically passed 
      // to the shader.
      rotate(frameCount * 0.01, width, height, 0);
    
      updateGeometry();
    
    
    
      pgl = beginPGL();
      sh.bind();
    
      sh.set("image",img);  // nb: the sampler uniform is named "image" here, not "texture"
    
      vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
      colorLoc = pgl.getAttribLocation(sh.glProgram, "color");
      uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");
    
      pgl.enableVertexAttribArray(vertLoc);
      pgl.enableVertexAttribArray(colorLoc);
      pgl.enableVertexAttribArray(uvLoc);
    
      pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
      pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
      pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);
    
      pgl.drawArrays(PGL.TRIANGLES, 0, 3);
    
      pgl.disableVertexAttribArray(vertLoc);
      pgl.disableVertexAttribArray(colorLoc);
      pgl.disableVertexAttribArray(uvLoc);
      sh.unbind();  
    
      endPGL();
    }
    

I looked at the PShader code one last time before posting my message, but no, I still don't understand what's happening here...

    ...Anyway it works :D

EDIT: it's not so weird that it works, because as I said, PShader.set() stores the variable until the next call to PShader.bind(). So even if PShader.bind() destroys the reference, my texture will be applied during the next call. But why it doesn't work for the first pass of PShader.bind(), I really have no idea. I tried adding a delay before the first call, to be sure the PImage was loaded, but it doesn't change anything. It's weird!

  • edited April 2015

    just one last example, with everything you might need with LowLevelGL (texture / vertex buffer object / buffer draw type)

    import java.nio.*;
    
    PGL pgl;
    PShader sh;
    PImage img0;
    PImage img1;
    int vertLoc;
    int colorLoc;
    int uvLoc;
    int data0Loc;
    int data1Loc;
    
    float[] vertices;
    float[] colors;
    float[] uv;
    float[] vertexDatas;
    
    FloatBuffer vertData;
    FloatBuffer colorData;
    FloatBuffer uvData;
    FloatBuffer vbo;
    int vertexDataId;
    
    int count = 0;
    
    boolean started = false;
    
    void setup() {
      size(640, 360, P3D);
    
      img0 = loadImage("01.jpg");
      img1 = loadImage("02.jpg");
    
      sh = loadShader("frag.glsl", "vert.glsl");
    
      vertices = new float[12];
      vertData = allocateDirectFloatBuffer(12);
    
      colors = new float[12];
      colorData = allocateDirectFloatBuffer(12);
    
      uv = new float[6];
      uvData = allocateDirectFloatBuffer(6);
    
      vertexDatas = new float[6];
      vbo = allocateDirectFloatBuffer(6);
    
      pgl = beginPGL();
      IntBuffer index = allocateDirectIntBuffer(1);
      pgl.genBuffers(1,index);
      vertexDataId = index.get(0);
      pgl.bindBuffer(PGL.ARRAY_BUFFER, vertexDataId);
      pgl.bufferData(PGL.ARRAY_BUFFER, 6 * Float.SIZE/8, vbo, PGL.STATIC_DRAW); 
      endPGL();
    
    }
    
    void draw() {
    
      background(255);
    
      rotate(frameCount * 0.01, width, height, 0);
    
      updateGeometry();
    
    
    
      pgl = beginPGL();
      sh.bind();
    
      sh.set("img0",img0);
      sh.set("img1",img1);  
    
      vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
      colorLoc = pgl.getAttribLocation(sh.glProgram, "color");
      uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");
      data0Loc = pgl.getAttribLocation(sh.glProgram, "vData0");
      data1Loc = pgl.getAttribLocation(sh.glProgram, "vData1");
    
    
      pgl.enableVertexAttribArray(vertLoc);
      pgl.enableVertexAttribArray(colorLoc);
      pgl.enableVertexAttribArray(uvLoc);
    
    
      pgl.enableVertexAttribArray(data0Loc);
      pgl.enableVertexAttribArray(data1Loc);
    
      pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
      pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
      pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);
    
    
      // interleaved buffer: [data0, data1] per vertex, so the stride is 2 floats
      // and the buffer position supplies the offset of data1
      pgl.vertexAttribPointer(data0Loc, 1, PGL.FLOAT, false, 2*Float.SIZE/8, vbo);
      vbo.position(1);
      pgl.vertexAttribPointer(data1Loc, 1, PGL.FLOAT, false, 2*Float.SIZE/8, vbo);
      vbo.position(0);
    
      pgl.drawArrays(PGL.TRIANGLES, 0, 3);
    
    
      pgl.bindBuffer(PGL.ARRAY_BUFFER,0);
    
      pgl.disableVertexAttribArray(vertLoc);
      pgl.disableVertexAttribArray(colorLoc);
      pgl.disableVertexAttribArray(uvLoc);
      pgl.disableVertexAttribArray(data0Loc);
      pgl.disableVertexAttribArray(data1Loc);
      sh.unbind();  
    
      endPGL();
    }
    
    void updateGeometry() {
    
      float data0 = 1;
      float data1 = 0.2;
    
      // Vertex 1
      vertices[0] = 0;
      vertices[1] = 0;
      vertices[2] = 0;
      vertices[3] = 1;
      colors[0] = 1;
      colors[1] = 0;
      colors[2] = 0;
      colors[3] = 1;
      uv[0] = 1;
      uv[1] = 1;
      vertexDatas[0] = data0;
      vertexDatas[1] = data1;
    
      // Corner 2
      vertices[4] = width/2;
      vertices[5] = height;
      vertices[6] = 0;
      vertices[7] = 1;
      colors[4] = 0;
      colors[5] = 1;
      colors[6] = 0;
      colors[7] = 1;
      uv[2] = 1;
      uv[3] = 0;
      vertexDatas[2] = data0;
      vertexDatas[3] = data1;
    
      // Corner 3
      vertices[8] = width;
      vertices[9] = 0;
      vertices[10] = 0;
      vertices[11] = 1;
      colors[8] = 0;
      colors[9] = 0;
      colors[10] = 1;
      colors[11] = 1;
      uv[4] = 0;
      uv[5] = 1;
      vertexDatas[4] = data0;
      vertexDatas[5] = data1;
    
      vertData.rewind();
      vertData.put(vertices);
      vertData.position(0);
    
      colorData.rewind();
      colorData.put(colors);
      colorData.position(0);  
    
      uvData.rewind();
      uvData.put(uv);
      uvData.position(0);  
    
      vbo.rewind();
      vbo.put(vertexDatas);
      vbo.position(0);  
    }
    IntBuffer allocateDirectIntBuffer(int n) {
      return ByteBuffer.allocateDirect(n * Integer.SIZE/8).order(ByteOrder.nativeOrder()).asIntBuffer();
    }
    FloatBuffer allocateDirectFloatBuffer(int n) {
      return ByteBuffer.allocateDirect(n * Float.SIZE/8).order(ByteOrder.nativeOrder()).asFloatBuffer();
    }
    

    The vertexShader

    uniform mat4 transform;
    
    attribute vec4 vertex;
    attribute vec4 color;
    attribute vec2 uv;
    attribute float vData0;
    attribute float vData1;
    
    varying vec4 vertColor;
    varying vec2 vertUv;
    
    void main() {
      gl_Position = transform * vertex ;
      gl_Position.xyz *= vData0;  
      vertColor = color * vec4(vData1,vData1,vData1,vData1);
      vertUv = uv;
    }
    

    and the fragmentShader

    #ifdef GL_ES
    precision mediump float;
    precision mediump int;
    #endif
    
    uniform sampler2D img0;
    uniform sampler2D img1;
    varying vec4 vertColor;
    varying vec2 vertUv;
    
    void main() {
      vec4 p0 = texture2D(img0,vertUv);
      vec4 p1 = texture2D(img1,vertUv);
      gl_FragColor = (p1 - vertColor) * p0;
      gl_FragColor.a = 1.0;
    }
    

    The whole code draws a single triangle on the screen :D Nice, isn't it?

    EDIT: For a mysterious reason, the first buffer cannot contain more than 4 values per vertex.

  • edited April 2015

    Same thing, but with an index buffer.

    // Draws a triangle using low-level OpenGL calls.
    import java.nio.*;
    
    PGL pgl;
    PShader sh;
    PImage img0;
    PImage img1;
    int vertLoc;
    int colorLoc;
    int uvLoc;
    int data0Loc;
    int data1Loc;
    
    float[] vertices;
    float[] colors;
    float[] uv;
    float[] vertexDatas;
    
    short[] triangleIndexs;
    
    FloatBuffer vertData;
    FloatBuffer colorData;
    FloatBuffer uvData;
    FloatBuffer vbo;
    ShortBuffer indexData;
    int vertexDataId;
    int indexDataId;
    
    int count = 0;
    int nb = 3;
    boolean started = false;
    
    void setup() {
      size(800, 600, P3D);
    
      img0 = loadImage("01.jpg");
      img1 = loadImage("02.jpg");
      // Loads the custom vertex/fragment shader pair.
      sh = loadShader("frag.glsl", "vert.glsl");
    
    
    
      vertices = new float[12*nb];
      vertData = allocateDirectFloatBuffer(12*nb);
    
      colors = new float[12*nb];
      colorData = allocateDirectFloatBuffer(12*nb);
    
      uv = new float[6*nb];
      uvData = allocateDirectFloatBuffer(6*nb);
    
      vertexDatas = new float[6*nb];
      vbo = allocateDirectFloatBuffer(6*nb);
    
    
    
    
    
      triangleIndexs = new short[3*nb];
    
      indexData = allocateDirectShortBuffer(3*nb);
    
      int i = 0;
      triangleIndexs[i++] = 1;
      triangleIndexs[i++] = 2;
      triangleIndexs[i++] = 0;
    
    
    
      triangleIndexs[i++] = 6;
      triangleIndexs[i++] = 7;
      triangleIndexs[i++] = 8;
    
      triangleIndexs[i++] = 3;
      triangleIndexs[i++] = 4;
      triangleIndexs[i++] = 5;
    
      addTriangle(200,200);
      addTriangle(300,300);
      addTriangle(400,400);
    
      vertData.rewind();
      vertData.put(vertices);
      vertData.position(0);
    
      colorData.rewind();
      colorData.put(colors);
      colorData.position(0);  
    
      uvData.rewind();
      uvData.put(uv);
      uvData.position(0);  
    
      vbo.rewind();
      vbo.put(vertexDatas);
      vbo.position(0);  
    
      indexData.rewind();
      indexData.put(triangleIndexs);
      indexData.position(0);  
    
    
      pgl = beginPGL();
      IntBuffer index = allocateDirectIntBuffer(1);
      pgl.genBuffers(1,index);
      vertexDataId = index.get(0);
      pgl.bindBuffer(PGL.ARRAY_BUFFER, vertexDataId);
      pgl.bufferData(PGL.ARRAY_BUFFER, 6*nb * Float.SIZE/8, vbo, PGL.STATIC_DRAW);
    
      index = allocateDirectIntBuffer(1);
      pgl.genBuffers(1,index);
      indexDataId = index.get(0);
      pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, indexDataId);
      pgl.bufferData(PGL.ELEMENT_ARRAY_BUFFER, 3*nb  * Short.SIZE/8, indexData, PGL.STATIC_DRAW);
      pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, 0);
    
      endPGL();
    
    
    }
    
    
    
    
    
    void draw() {
    
    
      if(started == false){
         //updateGeometry();
         started =true;
    
         pgl = beginPGL();
         sh.bind();
          vertLoc = pgl.getAttribLocation(sh.glProgram, "v");
          colorLoc = pgl.getAttribLocation(sh.glProgram, "c");
          uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");
          data0Loc = pgl.getAttribLocation(sh.glProgram, "vData0");
          data1Loc = pgl.getAttribLocation(sh.glProgram, "vData1"); 
          sh.unbind();
         endPGL();
         return;
      }
    
    
      background(255);
    
      // The geometric transformations will be automatically passed 
      // to the shader.
      //rotate(frameCount * 0.01, width, height, 0);
    
    
    
    
    
      pgl = beginPGL();
      sh.bind();
    
      sh.set("img0",img0);
      sh.set("img1",img1);  
      sh.set("random",random(1));
    
    
    
      pgl.enableVertexAttribArray(vertLoc);
      pgl.enableVertexAttribArray(colorLoc);
      pgl.enableVertexAttribArray(uvLoc);
    
      pgl.enableVertexAttribArray(data0Loc);
      pgl.enableVertexAttribArray(data1Loc);
    
      pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
      pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
      pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);
    
    
      // interleaved buffer: [data0, data1] per vertex, so the stride is 2 floats
      // and the buffer position supplies the offset of data1
      pgl.vertexAttribPointer(data0Loc, 1, PGL.FLOAT, false, 2*Float.SIZE/8, vbo);
      vbo.position(1);
      pgl.vertexAttribPointer(data1Loc, 1, PGL.FLOAT, false, 2*Float.SIZE/8, vbo);
      vbo.position(0);
    
     //                               3*2 --> 3 vertices per triangle  X  2 triangles
     //                                       I draw only the first & the third triangle to see
     //                                       if the index buffer works as expected, and it does
      pgl.drawElements(PGL.TRIANGLES, 3*2, PGL.UNSIGNED_SHORT, indexData);
    
      pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, 0);
      pgl.bindBuffer(PGL.ARRAY_BUFFER,0);
    
      pgl.disableVertexAttribArray(vertLoc);
      pgl.disableVertexAttribArray(colorLoc);
      pgl.disableVertexAttribArray(uvLoc);
      pgl.disableVertexAttribArray(data0Loc);
      pgl.disableVertexAttribArray(data1Loc);
      sh.unbind();  
    
      endPGL();
    }
    
    
    int numTriangle= 0;
    short id3 = 0;
    void addTriangle(float px,float py){
      //short id3 = (short)(numTriangle * 3);
      int id4 = numTriangle * 12;
      int id2 = numTriangle * 6;
      numTriangle++;
    
      float data0 = 1;
      float data1 = 0.2;
      /*
      //it should work like that, but for my tests, 
      //I define the buffer at the beginning
      triangleIndexs[id3] = id3++;
      triangleIndexs[id3] = id3++;
      triangleIndexs[id3] = id3++;
      */
      // Vertex 1
      vertices[id4+0] = px+0;
      vertices[id4+1] = py+0;
      vertices[id4+2] = 0;
      vertices[id4+3] = 1;
      colors[id4+0] = 1;
      colors[id4+1] = 0;
      colors[id4+2] = 0;
      colors[id4+3] = 1;
      uv[id2+0] = 1;
      uv[id2+1] = 1;
      vertexDatas[id2+0] = 1;
      vertexDatas[id2+1] = 1;
    
      // Corner 2
      vertices[id4+4] = px+100/2;
      vertices[id4+5] = py+100;
      vertices[id4+6] = 0;
      vertices[id4+7] = 1;
      colors[id4+4] = 0;
      colors[id4+5] = 1;
      colors[id4+6] = 0;
      colors[id4+7] = 1;
      uv[id2+2] = 1;
      uv[id2+3] = 0;
      vertexDatas[id2+2] = 1;
      vertexDatas[id2+3] = 1;
    
      // Corner 3
      vertices[id4+ 8] = px+100;
      vertices[id4+9] = py+0;
      vertices[id4+10] = 0;
      vertices[id4+11] = 1;
      colors[id4+8] = 0;
      colors[id4+9] = 0;
      colors[id4+10] = 1;
      colors[id4+11] = 1;
      uv[id2+4] = 0;
      uv[id2+5] = 1;
      vertexDatas[id2+4] = 1;
      vertexDatas[id2+5] = 1;
    
    
    }
    
    ShortBuffer allocateDirectShortBuffer(int n) {
      return ByteBuffer.allocateDirect(n * Short.SIZE/8).order(ByteOrder.nativeOrder()).asShortBuffer();
    }
    IntBuffer allocateDirectIntBuffer(int n) {
      return ByteBuffer.allocateDirect(n * Integer.SIZE/8).order(ByteOrder.nativeOrder()).asIntBuffer();
    }
    FloatBuffer allocateDirectFloatBuffer(int n) {
      return ByteBuffer.allocateDirect(n * Float.SIZE/8).order(ByteOrder.nativeOrder()).asFloatBuffer();
    }
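
    To see what drawElements() actually does with the index buffer, the lookup can be emulated in plain Java (class and method names here are mine): each index selects a whole vertex record, so shared corners only have to be stored once.

```java
import java.util.Arrays;

public class IndexDemo {
    // Emulate drawElements: expand indexed vertices into a flat triangle list.
    // comps = number of floats per vertex record.
    static float[] expand(float[] verts, int comps, short[] indices) {
        float[] out = new float[indices.length * comps];
        for (int i = 0; i < indices.length; i++)
            System.arraycopy(verts, indices[i] * comps, out, i * comps, comps);
        return out;
    }

    public static void main(String[] args) {
        // 4 unique 2D vertices forming a quad...
        float[] verts = {0,0,  1,0,  1,1,  0,1};
        // ...drawn as two triangles sharing the 0-2 diagonal
        short[] indices = {0, 1, 2,  0, 2, 3};
        float[] tris = expand(verts, 2, indices);
        System.out.println(verts.length); // 8 floats stored
        System.out.println(tris.length);  // 12 floats drawn
        System.out.println(Arrays.toString(tris));
    }
}
```

    The saving grows with the record size: with 4-component positions plus colors and uv, as in the sketches above, each shared vertex is 10 floats you don't duplicate.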
    
  • Nice! Thanks @tlecoz, having an alternative to PShape has been waiting at the bottom of my todo list for a long time.

  • Ahah, I've been looking for this since last November :)

    I insist on that point : "For a mysterious reason, the first buffer cannot contain more than 4 values by vertex."

    Actually, you can put as many values per vertex as you want in the first buffer, but only the first 4 are available in the shader. I don't know why; it's as if the other values were set to 0.0.

    At the beginning I thought the first buffer had to contain the vertex position XYZW, but no: it also works if I put the color in the first buffer and the position in another...

    Anyway, everything works as expected for the other buffers.
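
    My guess at the "4 values per vertex" limit (an assumption, I haven't verified it against the sketch): a single GL vertex attribute carries at most 4 components (the size argument of vertexAttribPointer only accepts 1 to 4), so to expose more than 4 floats per vertex from one buffer you have to declare extra attributes over the same buffer, giving each one the byte stride of the whole per-vertex record plus its own offset. The addressing can be emulated in plain Java (names are mine):

```java
import java.util.Arrays;

public class StrideDemo {
    // Emulate glVertexAttribPointer addressing on a float[] "buffer":
    // offFloats = attribute offset in floats, strideFloats = floats per vertex record.
    static float[] attrib(float[] buf, int offFloats, int strideFloats,
                          int size, int vertexCount) {
        float[] out = new float[vertexCount * size];
        for (int v = 0; v < vertexCount; v++)
            for (int c = 0; c < size; c++)
                out[v * size + c] = buf[v * strideFloats + offFloats + c];
        return out;
    }

    public static void main(String[] args) {
        // 3 vertices, 6 floats each: x,y,z,w then data0,data1
        float[] buf = {
            0, 0, 0, 1,  1, 0.2f,
            1, 0, 0, 1,  1, 0.2f,
            0, 1, 0, 1,  1, 0.2f,
        };
        // position: 4 components at offset 0; the two extras: 1 component each
        float[] data0 = attrib(buf, 4, 6, 1, 3);
        float[] data1 = attrib(buf, 5, 6, 1, 3);
        System.out.println(Arrays.toString(data0)); // [1.0, 1.0, 1.0]
        System.out.println(Arrays.toString(data1)); // [0.2, 0.2, 0.2]
    }
}
```

    With stride 0, GL assumes the attribute is tightly packed, which reads the wrong floats as soon as two attributes share one buffer.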

  • @tlecoz I'm missing something about "For a mysterious reason, the first buffer cannot contain more than 4 values by vertex."

    In this thread @codeanticode uses just one buffer for vertex and color attributes, thus 7 values per vertex.

    What is "vertexDatas" for? Is it just to illustrate how to pass additional attributes? It's funny that when removing them from the vert shader and not enabling them, thus less info per vertex, the frame rate decreases by 10 fps.

  • @Kosowki: thanks for the link! In his code, codeanticode binds the buffer before the calls to vertexAttribPointer(). That's actually the correct way to proceed, but I don't know why it didn't work when I tried it. I supposed that maybe Processing did it automatically, but from what I see in codeanticode's code, it doesn't...

    It sounds like I have to do some more tests... But I can't do it right now: I'm preparing my first photo exhibition (next week) and I'm very busy at the moment...

    "What is "vertexDatas" for? Is it just to illustrate how to pass additional attributes?"

    Exactly :)

  • I made a quick test using @codeanticode's example, using one buffer to store the vertices, normals and uv coordinates, in order to compare with PShape.

    I'm rendering 3000 copies of the same 3D model for a flocking simulation. In this particular scenario, performance did not increase; on the contrary, the framerate drops by around 2 fps using low-level GL calls. Interestingly, reducing the poly count of the model from 200 to 100 vertices did not make any difference. I guess the limit here is the number of drawArrays calls per frame.

    Reading about the topic, the solution seems to be geometry instancing, so you only have to update the transformation matrix of each object. Has anybody looked into this? One more thing moving onto my TODO list.

  • edited April 2015

    Hello! I think you're right when you say the problem comes from the number of drawArrays calls. You have to do what PShape does --> store every copy in the same buffer (but maybe that's already what you are doing(?))

    I actually looked deeply into this, but there are several possibilities and each of them takes a long time to explain...

    With my current version of "PShape", I can move 12,000 planes, each with its own motion (updated in Java), at 60 FPS, rendered in a single drawArrays call. I never tried with a more complex shape, so I don't know if my approach is relevant to what you want to do...

    I think it should work too, but my current framework was written before I tried to connect it to the shader side, so I'm doing a lot of useless computation that could be done on the GPU side. Ideally, I would do this:

    1) create different buffers:

    - X,Y,Z (the position of the center of each model)
    
    - vertexX,vertexY,vertexZ (the position of each vertex; this buffer should never be updated)
    
    - scaleX,scaleY,scaleZ (the scale of the models)
    
    - rotationX,rotationY,rotationZ (the rotation of the models)
    
    - U,V (the uv of the models)
    
    - and 1 indexBuffer
    

    2) in Java, for each frame, compute the center position of each model relative to the center of its parent

    3) z-sort the center positions and re-order the indexBuffer only (doing that re-orders every "model buffer" on the shader side)

    4) send all the data to the GPU

    5) in the shader code, if you know how, recreate a matrix from the different buffers, apply it to the vertex, then add the position of the center (if you know how to do it, please tell me!)

    If you don't know how to do it, compute the position of each vertex relative to the center of the model. It's the exact same calculation as in step 2:

    vec3 getTransformPositionWithoutMatrix(vec3 _pos,vec3 _rotation){
             //ROTATIONS
             float sx = sin(_rotation.x);
             float cx = cos(_rotation.x);
             float sy = sin(_rotation.y);
             float cy = cos(_rotation.y);
             float sz = sin(_rotation.z);
             float cz = cos(_rotation.z);
    
    
             //ROTATION X
             float xy = cx * _pos.y - sx * _pos.z;
             float xz = sx * _pos.y + cx * _pos.z;
    
             //ROTATION Y
             float yz = cy * xz - sy * _pos.x;
             float yx = sy * xz + cy * _pos.x;
    
             //ROTATION Z
             float zx = cz*yx - sz*xy;
             float zy = sz*yx + cz*xy;
    
             _pos.x = zx;
             _pos.y = zy;
             _pos.z = yz;
    
             return _pos;
         }
    

    "That's all" :)
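
    If you want to check the rotation order outside a shader, here is the same function transcribed to plain Java (class and method names are mine):

```java
public class RotDemo {
    // Port of getTransformPositionWithoutMatrix: rotate around X, then Y, then Z.
    static float[] rotate(float[] p, float rx, float ry, float rz) {
        float sx = (float) Math.sin(rx), cx = (float) Math.cos(rx);
        float sy = (float) Math.sin(ry), cy = (float) Math.cos(ry);
        float sz = (float) Math.sin(rz), cz = (float) Math.cos(rz);
        float xy = cx * p[1] - sx * p[2];  // rotation around X
        float xz = sx * p[1] + cx * p[2];
        float yz = cy * xz - sy * p[0];    // rotation around Y
        float yx = sy * xz + cy * p[0];
        float zx = cz * yx - sz * xy;      // rotation around Z
        float zy = sz * yx + cz * xy;
        return new float[] {zx, zy, yz};
    }

    public static void main(String[] args) {
        // rotating (1,0,0) by 90 degrees around Z should give ~(0, 1, 0)
        float[] r = rotate(new float[] {1, 0, 0}, 0, 0, (float) (Math.PI / 2));
        System.out.printf("%.3f %.3f %.3f%n", r[0], r[1], r[2]);
    }
}
```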

    EDIT: it may sound over-complex, but it's actually much simpler than how matrix computation works (on the Java side; on the GPU side matrices are native, so it's different)

    EDIT: What I said is not relevant, because I just realized that I'm still using the Processing pipeline instead of my custom pipeline right now, so I don't know exactly how it will react. My 12,000 planes don't use LowLevelGL for the moment, and my current version should be slower than the original Processing version because of the extra computation I put inside (z-sorting, colorTransform, mouse events, ...). Sorry to be so unclear...

    I'm still too busy to work on it at the moment, but I will do it next week (that was my intention even before you posted your message, actually)
