The original Adalight sketch captures the screen (or parts of it) and sends it to an LED array via Arduino.
I modified the sketch so that I could send the whole screen to my 32 x 16 array and it worked at around 22 fps.
I suspected the screen capture was the bottleneck, so my plan was to use the Syphon framework and send a 32×16 image to this sketch via Syphon from a VJing app.
So... eventually (I wasn't really a Processing user until earlier today) I have achieved this, but the framerate is actually *lower* than it was with screen capture, at about 15 fps! I can accept that going from the GL world back to the CPU isn't the smartest/quickest thing to do, but surely with such a tiny amount of data it doesn't clog things up that much?
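For what it's worth, here's a quick back-of-envelope check of just how tiny the data really is, and what the serial link itself can sustain. This is my own arithmetic, not from the sketch: I'm assuming the standard Adalight protocol (a 6-byte header — "Ada", LED count hi/lo, checksum — followed by 3 RGB bytes per LED) and 8N1 serial framing (~10 bits on the wire per byte).

```java
// Back-of-envelope: bytes per frame for a 32x16 Adalight stream,
// and the frame rate a given serial baud rate can sustain.
public class AdalightBandwidth {
    // Assumed: Adalight's 6-byte frame header ("Ada", count hi/lo, checksum)
    static final int HEADER_BYTES = 6;

    static int frameBytes(int width, int height) {
        return width * height * 3 + HEADER_BYTES; // 3 bytes (RGB) per LED
    }

    // Assumed 8N1 framing: ~10 bits per payload byte on the wire
    static double maxFps(int baud, int width, int height) {
        double bytesPerSecond = baud / 10.0;
        return bytesPerSecond / frameBytes(width, height);
    }

    public static void main(String[] args) {
        System.out.println("Frame size: " + frameBytes(32, 16) + " bytes"); // 1542
        System.out.printf("Max fps @115200 baud: %.1f%n", maxFps(115200, 32, 16));
        System.out.printf("Max fps @500000 baud: %.1f%n", maxFps(500000, 32, 16));
    }
}
```

So a full frame is only about 1.5 KB — at 15 fps that's roughly 23 KB/s of pixel data, which really is nothing. (Note that at Adalight's default 115200 baud the serial link alone would cap well under 22 fps for 512 LEDs, so presumably a higher baud rate is in use here.)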
Here is the relevant part of the code, which is largely copied and pasted — only the bits relating to the pixels array are really mine. Can anyone see any obvious mistakes?