Blending very large images: performance, memory
in Core Library Questions • 9 months ago
I need to alpha blend 2700x1600 images at runtime. Not too surprisingly, I'm having trouble keeping my framerate up. I've tried doing the blending on an offscreen buffer, but that hasn't seemed to help much, if at all. Are there any general strategies for working with images this large that might apply to this situation? Something that leverages the graphics card as much as possible, perhaps?
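For what it's worth, the slowdown shows up directly in Processing's frameRate; I've just been bracketing the blend with millis() to confirm that's where the time goes (a trivial check, nothing fancier; the actual blend code is at the end of the post):

void draw() {
  int start = millis();

  // ... blend the two images into the offscreen buffer (see the code at the end of the post) ...

  // frameRate is Processing's running average of frames per second
  println("blend took " + (millis() - start) + " ms at " + nf(frameRate, 0, 1) + " fps");
}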
I'm also having a bit of trouble managing memory; each of these images uncompressed is ~17MB (2700x1600x4 bytes), and there are a total of ~60 images I'll be blending (not all simultaneously!). I have strategies in mind for the memory issue that are outside the scope of this question, but I include it here in case there is a clever way to balance memory usage between the computer's memory and the graphics card.
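For scale, the back-of-the-envelope math with those same numbers (just a static sketch, uncompressed ARGB):

// 2700 x 1600 pixels at 4 bytes each, times ~60 images
long perImage  = 2700L * 1600L * 4L;   // 17,280,000 bytes, ~17 MB per image
long allImages = perImage * 60L;       // ~1,036,800,000 bytes, ~1 GB if all 60 stayed resident
println(perImage + " bytes per image, " + allImages + " bytes total");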
Based on my digging into the source, it does seem that Processing (PGraphicsOpenGL specifically) uses JOGL internally for PApplet.image(), so that optimization, at least, is already happening.
Advice in raw JOGL is not as nice as Processing code, but I can probably thrash my way through it if there's a JOGL route.
Basic strategy (very simplified code) as of right now is:
import processing.opengl.*;  // for the OPENGL renderer (needed on older Processing releases)

PImage a, b;
PGraphics buffer;

void setup() {
  size(2700, 1600, OPENGL);  // assuming the sketch is sized to match the images
  a = loadImage("imageA.png");
  b = loadImage("imageB.png");
  buffer = createGraphics(width, height, OPENGL);
}

void draw() {
  buffer.beginDraw();
  buffer.tint(255, 200);   // first image at ~78% alpha
  buffer.image(a, 0, 0);
  buffer.tint(255, 100);   // second image at ~39% alpha
  buffer.image(b, 0, 0);
  buffer.endDraw();
  image(buffer, 0, 0);
}
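One direction I've been poking at, though I'm not sure whether PGraphicsOpenGL already keeps textures resident between frames: drawing each image as a textured quad instead of going through image(), with tint() still supplying the per-image alpha. A rough sketch of what I mean (texture coordinates assume the default IMAGE textureMode; drawBlended is just my own helper name):

// Draw one image into the offscreen buffer as a textured, tinted quad.
void drawBlended(PGraphics pg, PImage img, float alpha) {
  pg.noStroke();
  pg.tint(255, alpha);           // per-layer alpha, same role as tint() above
  pg.beginShape(QUADS);
  pg.texture(img);               // bind the image as the texture for this shape
  pg.vertex(0,         0,          0,         0);
  pg.vertex(img.width, 0,          img.width, 0);
  pg.vertex(img.width, img.height, img.width, img.height);
  pg.vertex(0,         img.height, 0,         img.height);
  pg.endShape();
}

In draw() that would replace the two tint()/image() pairs with drawBlended(buffer, a, 200); and drawBlended(buffer, b, 100); between beginDraw() and endDraw(). No idea yet whether it actually avoids a per-frame texture upload, which is really the heart of my question.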