NVidia/AMD differences?

edited April 2018 in GLSL / Shaders

I made a tower defense game for a game jam in Processing 3.


The pathfinding for enemies is done entirely in a shader on the GPU. I don't have access to an AMD machine, but I'm getting reports of weird, game-breaking behavior on AMD cards.

In one instance, I isolated a portion of my code for benchmarking down to just a .loadPixels() call on a relatively small PImage. That code runs fast on my machine, well over 500 fps, yet it slows a decently powerful AMD card to an unplayable 15 fps. The loadPixels() call takes 2-3 milliseconds on my machine, but 60+ on an AMD machine.
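Here's a minimal sketch of the kind of timing I'm doing (the image here is a placeholder, not my actual game asset):

```processing
// Minimal timing sketch (placeholder image, not the real game code).
PImage img;

void setup() {
  size(400, 400, P2D);
  img = createImage(256, 256, ARGB);  // small PImage, stand-in for the real one
}

void draw() {
  long start = System.nanoTime();
  img.loadPixels();                   // the call being benchmarked
  long elapsed = System.nanoTime() - start;
  println("loadPixels: " + (elapsed / 1e6) + " ms, frameRate: " + frameRate);
}
```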

I've noticed many unexpected differences in how graphics cards handle shaders, even among NVidia cards. For example, if a frag shader doesn't write gl_FragColor, my GTX 1070 renders the fragment as transparent, while my friend's GTX 580 seems to produce a random-ish color. (As far as I can tell, the value is simply undefined in that case, so both behaviors are "allowed.")
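To be safe, I now try to write gl_FragColor unconditionally on every path. A minimal version of what I mean (the uniform/varying names are illustrative, not from my actual shader):

```glsl
#version 120

uniform sampler2D texMap;    // illustrative names, not my actual uniforms
varying vec2 vertTexCoord;

void main() {
  vec4 col = texture2D(texMap, vertTexCoord);
  // Assign gl_FragColor on every code path; leaving it unwritten
  // is undefined behavior and renders differently across cards.
  gl_FragColor = (col.a > 0.0) ? col : vec4(0.0);
}
```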

Any hints on how I can make my game behave consistently across all platforms?
