I made a tower defense game for a game jam in Processing 3.
The pathfinding for enemies is done entirely in a shader on the GPU. I don't have access to an AMD machine, but I'm getting reports of weird game-breaking behavior on AMD cards.
In one instance, I isolated a portion of my code for benchmarking down to just a .loadPixels() call on a relatively small PImage. This code runs fast on my machine, well over 500 fps, yet it slows a decently powerful AMD card to an unplayable 15 fps. The loadPixels() call takes 2-3 milliseconds on my machine, but 60+ on an AMD machine.
I've noticed many unexpected differences between graphics cards in how they handle shaders, even among NVidia cards.
For example, if a frag shader doesn't set gl_FragColor, my GTX 1070 treats the output as transparent, while my friend's GTX 580 seems to set it to a random-ish color.
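To illustrate (a minimal sketch, not my actual shader; the varying name is made up), a fragment shader with a branch that can skip the write shows the difference:

```glsl
#ifdef GL_ES
precision mediump float;
#endif

varying vec4 vertColor;

void main() {
  if (vertColor.a > 0.5) {
    gl_FragColor = vertColor;
  }
  // No else branch: on this path gl_FragColor is never written, so the
  // result is undefined and each driver/card is free to output anything
  // (transparent, a stale value, garbage). The fix is to write it on
  // every path, e.g.:
  // else { gl_FragColor = vec4(0.0); }
}
```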
Any hints on how I can make my game behave consistently across all platforms?
Answers
Hello
NVidia plays it loose with the specification : )
For example, in pathDirection.glsl:
+ non-constant global initializer not allowed: vec2 nv = (vec2(0.0,n));
+ no texelFetch() (it needs #version 130 or later)
+ if (enemyLoc.x > 1/256.0): implicit int to float? No. Write 1.0/256.0.
etc. etc.
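Strict-mode-friendly rewrites of those lines might look like this (a sketch only; the uniform names are made up, not taken from your shader):

```glsl
uniform sampler2D tex;
uniform vec2 texSize;     // texture dimensions in texels
uniform float n;
uniform vec2 enemyLoc;
uniform ivec2 coord;      // integer texel coordinate

// Declare globally, but initialize inside main():
// `vec2 nv = vec2(0.0, n);` at global scope is a non-constant
// initializer and is rejected by strict compilers.
vec2 nv;

void main() {
  nv = vec2(0.0, n);

  // texelFetch() needs #version 130+; on older targets emulate it
  // with texture2D() sampling at the texel center:
  vec4 c = texture2D(tex, (vec2(coord) + 0.5) / texSize);

  // No implicit int-to-float conversion: 1/256.0 mixes int and
  // float, so spell out 1.0/256.0.
  if (enemyLoc.x > 1.0 / 256.0) {
    c = vec4(nv, 0.0, 1.0);
  }

  gl_FragColor = c;  // write gl_FragColor on every path
}
```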
Use some kind of validator / GLSL debugger, e.g.
https://github.com/KhronosGroup/glslang/releases
http://www.nvidia.com/object/nsight.html (more opengl)
https://www.khronos.org/opengl/wiki/Debugging_Tools
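With the glslang release above, running the reference compiler over a shader reports the same strict errors an AMD driver would (the -S flag forces the stage when the file extension doesn't indicate it):

```sh
# Validate the fragment shader with the Khronos reference compiler
glslangValidator -S frag pathDirection.glsl
```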
quick overview:
https://github.com/mattdesl/lwjgl-basics/wiki/GLSL-Versions
Good Luck