I need to send some data to my fragment shader, and it seems the best way to do this is by packing it into a texture and then unpacking it in the shader. Normally this is done with a floating point texture, but it seems Processing doesn't have floating point textures.
Does anyone have a good resource / basic version on how to pack data into a texture and unpack it in a shader?
Thanks
Answers
What kind of data do you need to send? There is also the set command.
Thanks clankill3r, I need to send x,y position data. I have been using the set command to send two arrays (one for x, one for y), but I can only have a maximum of 60 elements in each array. That seems to be a standard limit on uniform arrays (depending on hardware), and if you need to send more data you apparently pack it into a texture somehow. Roughly what I've been doing is sketched below.
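A minimal version of my current approach, assuming the fragment shader declares uniform float xPos[60]; and uniform float yPos[60]; (the uniform names and shader filename here are placeholders):

```java
PShader pointShader;
float[] xPos = new float[60];  // hits the ~60-element ceiling mentioned above
float[] yPos = new float[60];

void setup() {
  size(640, 480, P2D);
  noStroke();
  pointShader = loadShader("points.glsl");  // placeholder filename
}

void draw() {
  for (int i = 0; i < xPos.length; i++) {
    xPos[i] = random(width);
    yPos[i] = random(height);
  }
  pointShader.set("xPos", xPos);  // set() accepts float[] for uniform float arrays
  pointShader.set("yPos", yPos);
  shader(pointShader);
  rect(0, 0, width, height);  // full-screen quad for the fragment shader to draw on
}
```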
Any advice?
I think you'd get a quicker answer asking on a different forum with more of a focus on shaders. If you do, I would love to see the answer.
You can also set a texture with set(). The shaders work with normalized values.
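Something like this, assuming the fragment shader declares a sampler2D uniform named posTexture (the names here are just examples):

```java
PImage posTex = createImage(128, 1, RGB);
// ... write your packed values into posTex.pixels ...
myShader.set("posTexture", posTex);  // bound as a texture; sampled as 0.0-1.0 in the shader
```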
Sorry for the delay, I think I got this sorted actually! I'll try to dig it out and post it...
Cool. This forum could use some nice shader examples.
OK, so I'm not entirely sure I got this 100% sorted, but I think that's down to my lack of knowledge about shader languages and how to properly unpack the data. I definitely managed to pack the data, send it, and get hold of it in the shaders!
I used the ToonShader example as the starting point for my sketch. I have a series of points with x,y positions, and I pack those into a texture/PImage by assigning each point to a pixel: the x goes into the red value and the y into the green value (the blue value isn't used). So the pixel array doubles as an array for storing the x,y values. However, you need to map the x,y values to a new range: your width and height have to be mapped to 0-255 to fit in the RGB scheme of a pixel (it doesn't seem you can 'overload' an RGB component and give it a value of more than 255). Confusingly, in the shader the 0-255 values arrive as 0-1, so when you unpack the data you need to multiply by the width or height of your sketch. I'm still not sure I have the proper way of unpacking and using the data yet.
I won't go through all the stuff written in the shaders; there are far better, more knowledgeable explanations out there than I could write, and those are what I learned from and used.
In the end I created a moving blobby mass similar to voronoi patterns (but not quite), which I really like. There does seem to be a glitch or two when each point resets itself, but I can confirm that it runs waaaaay faster as a shader: the regular version could only run 6 points at 640x480 at 60fps, compared to 128 points at 1280x720 at 60fps, or 1920x1080 at 30fps, with the shader. Running at that size without shaders means 1fps, and maybe not even that!
I'm sure there are better examples out there of passing data in textures to a vertex or fragment shader (I think mine is a bit intense for a shader, as I've heard loops and if/else statements slow them down a lot), but this is what I used it for, and I'm pleased with the results in speeding up my sketch and impressed by the power of shaders.
Below is the code if it is of any use to anyone, or in case anyone has any improvements. All feedback welcome:
Processing Code
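A stripped-down sketch of the packing step described above (the shader filenames and uniform names like posTexture, numPoints, and resolution are illustrative):

```java
PShader blobShader;
PImage posTex;
int numPoints = 128;
float[] px = new float[numPoints];
float[] py = new float[numPoints];

void setup() {
  size(1280, 720, P2D);
  noStroke();
  blobShader = loadShader("blobFrag.glsl", "blobVert.glsl");
  posTex = createImage(numPoints, 1, RGB);  // one pixel per point
  for (int i = 0; i < numPoints; i++) {
    px[i] = random(width);
    py[i] = random(height);
  }
}

void draw() {
  // move the points a little each frame
  for (int i = 0; i < numPoints; i++) {
    px[i] = (px[i] + random(-2, 2) + width) % width;
    py[i] = (py[i] + random(-2, 2) + height) % height;
  }
  // pack positions into the texture: x mapped to 0-255 in red, y in green
  posTex.loadPixels();
  for (int i = 0; i < numPoints; i++) {
    int r = (int) map(px[i], 0, width, 0, 255);
    int g = (int) map(py[i], 0, height, 0, 255);
    posTex.pixels[i] = color(r, g, 0);  // blue channel unused
  }
  posTex.updatePixels();

  blobShader.set("posTexture", posTex);
  blobShader.set("numPoints", numPoints);
  blobShader.set("resolution", (float) width, (float) height);
  shader(blobShader);
  rect(0, 0, width, height);  // full-screen quad the fragment shader draws on
}
```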
Vertex Shader:
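A minimal pass-through vertex shader (a sketch; Processing supplies the transform matrix and vertex attribute under exactly these names):

```glsl
uniform mat4 transform;   // Processing's combined model-view-projection matrix

attribute vec4 vertex;    // supplied by Processing for each vertex

void main() {
  gl_Position = transform * vertex;
}
```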
Fragment Shader:
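And a sketch of the unpacking side (uniform names match the Processing sketch above; the loop bound is a compile-time constant because GLSL ES requires one):

```glsl
#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D posTexture;  // one pixel per point: x packed in red, y in green
uniform int numPoints;
uniform vec2 resolution;       // sketch width/height, to undo the 0-1 normalisation

void main() {
  float minDist = 1e6;
  for (int i = 0; i < 128; i++) {       // constant bound; bail out at numPoints
    if (i >= numPoints) break;
    // sample the centre of pixel i in the 1-pixel-high position texture;
    // sampling at pixel centres avoids blending from the default linear filtering,
    // and texture2D() returns the packed 0-255 values as 0.0-1.0
    vec2 enc = texture2D(posTexture, vec2((float(i) + 0.5) / float(numPoints), 0.5)).rg;
    vec2 p = enc * resolution;          // back to sketch coordinates
    // note: gl_FragCoord.y runs bottom-up, the opposite of Processing's y axis
    minDist = min(minDist, distance(gl_FragCoord.xy, p));
  }
  // shade by distance to the nearest point for the blobby, voronoi-ish look
  float b = 1.0 - clamp(minDist / 100.0, 0.0, 1.0);
  gl_FragColor = vec4(vec3(b), 1.0);
}
```

One consequence of packing into 8-bit channels is that positions are quantised to 256 levels per axis; spreading each value across two channels would give more precision, at the cost of a slightly more involved unpack.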
Sounds cool. I will look into it when I have time.
You could look at this (not for the physics but how to use OpenCL):
https://github.com/codeanticode/clphysics
I think that could be a cleaner approach.
OK thanks, I'll try and take a look soon.