Hello !
I would like to set a PImage on a "low-level" shader but it doesn't work...
Because it's tedious to test shader code, I put my Processing project in a zip and uploaded it here:
beginfill.com/processing/LowLevelGL_test.zip
I took the code from the LowLevelGL example, added a FloatBuffer with the texture UVs, and used PShader.set() to send a PImage to the shader.
Here is the code (if you want to see it before getting the zip).
The Processing code:
// Draws a triangle using low-level OpenGL calls.
import java.nio.*;
PGL pgl;
PShader sh;
int vertLoc;
int colorLoc;
int uvLoc;
float[] vertices;
float[] colors;
float[] uv;
FloatBuffer vertData;
FloatBuffer colorData;
FloatBuffer uvData;
PImage img;
void setup() {
size(640, 360, P3D);
img = loadImage("img.jpg");
// Loads a shader to render geometry w/out
// textures and lights.
sh = loadShader("frag.glsl", "vert.glsl");
vertices = new float[12];
vertData = allocateDirectFloatBuffer(12);
colors = new float[12];
colorData = allocateDirectFloatBuffer(12);
uv = new float[6];
uvData = allocateDirectFloatBuffer(6);
}
void draw() {
background(0);
// The geometric transformations will be automatically passed
// to the shader.
rotate(frameCount * 0.01, width, height, 0);
updateGeometry();
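// pass the PImage to the shader's "texture" sampler uniform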
sh.set("texture",img);
pgl = beginPGL();
sh.bind();
vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
colorLoc = pgl.getAttribLocation(sh.glProgram, "color");
uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");
pgl.enableVertexAttribArray(vertLoc);
pgl.enableVertexAttribArray(colorLoc);
pgl.enableVertexAttribArray(uvLoc);
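// with no vertex buffer object bound, the last argument is a client-side Buffer
// whose contents are sent to the GPU at draw time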
pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);
pgl.drawArrays(PGL.TRIANGLES, 0, 3);
pgl.disableVertexAttribArray(vertLoc);
pgl.disableVertexAttribArray(colorLoc);
pgl.disableVertexAttribArray(uvLoc);
sh.unbind();
endPGL();
}
void updateGeometry() {
// Vertex 1
vertices[0] = 0;
vertices[1] = 0;
vertices[2] = 0;
vertices[3] = 1;
colors[0] = 1;
colors[1] = 0;
colors[2] = 0;
colors[3] = 1;
uv[0] = 0;
uv[1] = 0;
// Corner 2
vertices[4] = width/2;
vertices[5] = height;
vertices[6] = 0;
vertices[7] = 1;
colors[4] = 0;
colors[5] = 1;
colors[6] = 0;
colors[7] = 1;
uv[2] = 1;
uv[3] = 0;
// Corner 3
vertices[8] = width;
vertices[9] = 0;
vertices[10] = 0;
vertices[11] = 1;
colors[8] = 0;
colors[9] = 0;
colors[10] = 1;
colors[11] = 1;
uv[4] = 0;
uv[5] = 1;
vertData.rewind();
vertData.put(vertices);
vertData.position(0);
colorData.rewind();
colorData.put(colors);
colorData.position(0);
uvData.rewind();
uvData.put(uv);
uvData.position(0);
}
FloatBuffer allocateDirectFloatBuffer(int n) {
return ByteBuffer.allocateDirect(n * Float.SIZE/8).order(ByteOrder.nativeOrder()).asFloatBuffer();
}
The vertex shader
uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color;
attribute vec2 uv;
varying vec4 vertColor;
varying vec2 textureUv;
void main() {
gl_Position = transform * vertex;
vertColor = color;
textureUv = uv;
}
The fragment shader
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform sampler2D texture;
varying vec4 vertColor;
varying vec2 textureUv;
void main() {
gl_FragColor = texture2D(texture,textureUv) * vertColor;
}
Processing crashes when I set the PImage on the shader. If I remove that line, my code runs without any error (but the triangle is black because there is no texture attached to the shader).
I looked into the PShader class to try to see what's wrong, but I really don't understand, because it's very straightforward and it should work... I send a PImage to the PShader and I'm absolutely sure that my PImage is not null when I send it, but when PShader tries to use the PImage with PGL, it finds a null object...
Any help is welcome! I've been on this for maybe 6-7 hours now...
Thanks !
Answers
I finally found the solution :D
This is weird and I don't understand why it works like that (I tried everything, without any logic, at the end...).
You have to set the texture on the shader inside beginPGL()/endPGL(), and AFTER the call to PShader.bind().
It's very weird, because the code inside PShader.bind() checks the variables that have been set BEFORE (?!) and sends them to the GPU.
Doing just PShader.set("texture", myPImage) should only keep the reference until the next call to PShader.bind(), and at the end of PShader.bind() every reference is destroyed... It's really, really weird...
Anyway it works :)
Here is the working code
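In short, only draw() changes: sh.set() moves inside beginPGL()/endPGL(), right after sh.bind() (same variable names as in the first post):

void draw() {
  background(0);
  rotate(frameCount * 0.01, width, height, 0);
  updateGeometry();
  pgl = beginPGL();
  sh.bind();
  sh.set("texture", img);  // moved here: inside beginPGL()/endPGL(), AFTER bind()
  vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
  colorLoc = pgl.getAttribLocation(sh.glProgram, "color");
  uvLoc = pgl.getAttribLocation(sh.glProgram, "uv");
  pgl.enableVertexAttribArray(vertLoc);
  pgl.enableVertexAttribArray(colorLoc);
  pgl.enableVertexAttribArray(uvLoc);
  pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, vertData);
  pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, colorData);
  pgl.vertexAttribPointer(uvLoc, 2, PGL.FLOAT, false, 0, uvData);
  pgl.drawArrays(PGL.TRIANGLES, 0, 3);
  pgl.disableVertexAttribArray(vertLoc);
  pgl.disableVertexAttribArray(colorLoc);
  pgl.disableVertexAttribArray(uvLoc);
  sh.unbind();
  endPGL();
}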
I looked at the PShader code one last time before posting my message, but no, I still don't understand what's happening here...
...Anyway it works :D
EDIT: it's not so weird that it works, because as I said, PShader.set() stores the variable until the next call to PShader.bind(). So even if PShader.bind() destroys the reference, my texture will be applied during the next call. But why it doesn't work for the first pass of PShader.bind(), I really have no idea. I tried adding a delay before the first call to be sure the PImage was loaded, but it doesn't change anything. It's weird!
Just one last example, with everything you may need with LowLevelGL (texture / vertexBufferObject / buffer draw type):
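The Processing side boils down to something like this (a sketch reusing the globals from the first post; the names vboIds and vboInit are mine, and only the "vertex" attribute is shown, the color and uv buffers work the same way):

IntBuffer vboIds = IntBuffer.allocate(1);
boolean vboInit = false;

void drawVBO() {
  updateGeometry();
  pgl = beginPGL();
  sh.bind();
  sh.set("texture", img);       // texture: set after bind(), as explained above
  if (!vboInit) {
    pgl.genBuffers(1, vboIds);  // create the buffer object once
    vboInit = true;
  }
  vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");
  pgl.enableVertexAttribArray(vertLoc);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, vboIds.get(0));
  // upload with a draw-type hint: STATIC_DRAW if the data never changes,
  // DYNAMIC_DRAW if you re-upload it every frame
  pgl.bufferData(PGL.ARRAY_BUFFER, 12 * Float.SIZE/8, vertData, PGL.STATIC_DRAW);
  // with a VBO bound, the last argument is a byte offset, not a java Buffer
  pgl.vertexAttribPointer(vertLoc, 4, PGL.FLOAT, false, 0, 0);
  pgl.drawArrays(PGL.TRIANGLES, 0, 3);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, 0);
  pgl.disableVertexAttribArray(vertLoc);
  sh.unbind();
  endPGL();
}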
The vertexShader
and the fragmentShader
The whole code draws a single triangle on the screen :D Nice, isn't it?
EDIT: For a mysterious reason, the first buffer cannot contain more than 4 values per vertex.
Same thing with an indexBuffer.
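For reference, drawing through an indexBuffer with PGL boils down to something like this (a sketch; indexIds and indexData are my names, not from the zip):

IntBuffer indexIds = IntBuffer.allocate(1);
ShortBuffer indexData = ByteBuffer.allocateDirect(3 * Short.SIZE/8)
                                  .order(ByteOrder.nativeOrder()).asShortBuffer();

// once, e.g. on the first frame, inside beginPGL()/endPGL():
indexData.put(new short[] { 0, 1, 2 });   // one triangle
indexData.rewind();
pgl.genBuffers(1, indexIds);
pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, indexIds.get(0));
pgl.bufferData(PGL.ELEMENT_ARRAY_BUFFER, 3 * Short.SIZE/8, indexData, PGL.STATIC_DRAW);

// every frame, after the attribute setup, instead of drawArrays():
pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, indexIds.get(0));
// with an element buffer bound, the last argument is a byte offset into it
pgl.drawElements(PGL.TRIANGLES, 3, PGL.UNSIGNED_SHORT, 0);
pgl.bindBuffer(PGL.ELEMENT_ARRAY_BUFFER, 0);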
Nice! Thanks @tlecoz, having an alternative to PShape has been waiting at the bottom of my todo list for a long time.
Ahah, I've been looking for this since last November :)
I insist on that point: "For a mysterious reason, the first buffer cannot contain more than 4 values per vertex."
Actually, you can put as many values per vertex as you want in the first buffer, but only the first 4 values are available in the shader. I don't know why; it's as if the other values were set to 0.0.
At first I thought the first buffer had to contain the vertex position XYZW, but no, it works if I put the color in the first buffer and the position in another...
Anyway, everything works as expected for the other buffers.
@tlecoz I'm missing something about "For a mysterious reason, the first buffer cannot contain more than 4 values per vertex."
In this thread @codeanticode uses just one buffer for vertex and color attributes, thus 7 values per vertex.
What is "vertexDatas" for? Is it just for illustrating how to pass additional attributes? It's funny that when removing them from the vert shader and not enabling them, thus less info per vertex, the frame rate decreases by 10 fps.
@Kosowki: thanks for the link! In his code, codeanticode binds the buffer before the calls to vertexAttribPointer. That's actually the correct way to proceed, but I don't know why it didn't work when I tried it; I supposed that maybe Processing did that automatically, but from what I see in codeanticode's code, it doesn't...
Sounds like I have to do some more tests... But I can't do it right now; I'm preparing my first photo exhibition (next week) and I'm very busy at the moment...
"What is "vertexDatas" for? Is just for ilustrating how to pass additional attributes?"
Exactly :)
Made a quick test using @codeanticode's example, using one buffer to store vertices, normals and UV coordinates, in order to compare with PShape.
I'm rendering 3000 copies of the same 3D model for a flocking simulation. In this particular scenario performance did not increase; moreover, the framerate drops around 2 fps using low-level GL calls. Interestingly, reducing the poly count in the model from 200 to 100 vertices did not make any difference. I guess the limit here is the number of drawArrays calls per frame.
Reading about the topic, the solution for this seems to be geometry instancing, so you only have to update the transformation matrix for each object. Has anybody looked into this? One more thing moving into my TODO list.
Hello !
I think you're right when you say the problem comes from the number of drawArrays calls. You have to do what PShape does: store every copy in the same buffer (but maybe that's what you are already doing?). I looked deeply into this, actually, but there are different possibilities and each of them is very long to explain... With my current version of "PShape", I can move 12,000 planes, each with its own motion (updated in Java), at 60 FPS, rendered in a single call to drawArrays. I never tried with more complex shapes, so I don't know if the way I proceed is relevant to what you want to do... I think it should work too, but my current framework was written before trying to connect it to the shader side, so I'm doing a lot of useless calculation that could be done on the GPU side. Ideally, I would do this:
1) Create the different buffers.
2) In Java, for each frame, compute the center position of each model relative to the center of its parent.
3) Z-sort the center positions and re-order the indexBuffer only (doing that will re-order every "model buffer" in the shader code). A rough sketch of this re-ordering is just below.
4) Send all the data to the GPU.
5) In the shader code, if you know how, recreate a matrix object from the different buffers, apply it to the vertex, then add the position of the center (if you know how to do it, please tell me!). If you don't know how, compute the position of each vertex relative to the center of the model; it's the exact same calculation used in step 2.
"That's all" :)
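A rough sketch of the step-3 re-ordering, just to illustrate the idea (every name here is hypothetical, it isn't code from my framework):

// 'copies' models share the same local index pattern 'template'; each copy uses
// 'vertsPerCopy' consecutive vertices in the big vertex buffer, and 'centerZ[i]'
// is the depth of the center of copy i computed in step 2
void reorderIndices(final float[] centerZ, int copies, int vertsPerCopy,
                    short[] template, ShortBuffer indexData) {
  Integer[] order = new Integer[copies];
  for (int i = 0; i < copies; i++) order[i] = i;
  // back to front: farthest centers drawn first (adapt the comparison to your camera)
  java.util.Arrays.sort(order, new java.util.Comparator<Integer>() {
    public int compare(Integer a, Integer b) {
      return Float.compare(centerZ[b], centerZ[a]);
    }
  });
  indexData.rewind();
  for (int k = 0; k < copies; k++) {
    int base = order[k] * vertsPerCopy;
    for (int j = 0; j < template.length; j++) {
      indexData.put((short) (template[j] + base));
    }
  }
  indexData.rewind();  // ready to upload to the ELEMENT_ARRAY_BUFFER (step 4)
}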
EDIT: it may sound over-complex, but actually it's much simpler than how matrix computation works (on the Java side; on the GPU side matrices are native, so it's different).
EDIT: What I said is not really relevant, because I just realized that I'm still using the Processing pipeline instead of my custom pipeline right now, so I don't know exactly how it will react. My 12,000 planes are not using LowLevelGL for the moment, and my current version should be slower than the original Processing version because of the extra computation I put inside (z-sorting, colorTransform, mouseEvents, ...). Sorry to be so unclear...
I'm still too busy to work on it for the moment, but I will do it next week (that was my intention even before you posted your message, actually).