Performance Improvements for Pixel-level Rendering
Jul 20th, 2005, 8:02pm
 
Well, I was playing around with metaballs lately, and came up with the following (extremely simple) program:
http://www.rpi.edu/~laporj2/art/dynamic/processing/metaballs/original/

It works great and is pretty simple -- does exactly what I want it to. However, it's computationally expensive. It's just a tad slow on my machine (1.4 GHz Pentium M), and thus I expect it'd be slow on others' machines as well. So, I wanted to find some ways of speeding it up by reducing the amount of computation it has to do.

The first way I did so is simple -- just render every other pixel each frame (similar to how interlacing is done on a TV screen). I took the next step and made it render a checkerboard instead of horizontal lines, since A) each frame only renders 1/4 of the pixels (a 4x effective speed increase), but also B) it looks really nice.

http://www.rpi.edu/~laporj2/art/dynamic/processing/metaballs/dithered/

The main change here is:

Code:
// f is the frame counter; the starting offsets cycle through
// all four pixel phases over the course of four frames
for (int y = f % 2; y < 160; y += 2)
  for (int x = (f / 2) % 2; x < 160; x += 2)
    ... render ...

You'll note that I increment x and y each by 2 instead of 1, and that I vary the pixel they start on.
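In case it helps to see it in context, here's a minimal self-contained sketch of that four-phase interleave -- the field() test below is just a placeholder for the metaball evaluation (the real sketch sums the contribution of each ball), and the 160x160 size matches the demos above:

Code:
int f = 0;  // frame counter; picks which of the four pixel phases to draw

void setup() {
  size(160, 160);
}

void draw() {
  loadPixels();
  for (int y = f % 2; y < 160; y += 2) {
    for (int x = (f / 2) % 2; x < 160; x += 2) {
      pixels[y * 160 + x] = field(x, y) ? color(0) : color(255);
    }
  }
  updatePixels();
  f++;
}

// placeholder field test -- swap in the real metaball sum here
boolean field(int x, int y) {
  return dist(x, y, mouseX, mouseY) < 30;
}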

And while that's all well and good, I wanted a rendering mechanism that gives the "proper" result, without any artifacts (such as the blur caused by the dithering method). To this end, I made an adaptive rendering scheme -- check the corners of each 8x8 block: if they're all within the surface, fill the block black; if only some are, recursively check the 4 half-sized squares within it; if none are, skip it. This renders at almost exactly the same quality, with a significant drop in computing effort. The only downside is that small details (less than 8x8) are missed and ignored. You may see some of these when two blobs are close together but only barely touching -- the artifacts are indeed visible, but they're very minor.

http://www.rpi.edu/~laporj2/art/dynamic/processing/metaballs/adaptive/
(click the mouse to see the rendering pattern used)

Code:
void adapt(int x, int y, int r) {
  // sample the field at the four corners of this r-by-r block
  boolean a = fieldAt(x, y), b = fieldAt(x + r, y), c = fieldAt(x, y + r), d = fieldAt(x + r, y + r);

  if (a || b || c || d) {
    // all four corners inside: fill the whole block with a single rect
    if (a && b && c && d) rect(x, y, r, r);
    // partially inside: subdivide into four half-sized blocks
    else if (r > 1) {
      int hr = r / 2;
      adapt(x, y, hr);       adapt(x + hr, y, hr);
      adapt(x, y + hr, hr);  adapt(x + hr, y + hr, hr);
    }
  }
}
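
To show how the recursion gets kicked off, something along these lines (assuming a 160x160 canvas, with fieldAt() returning whether the metaball field at that point is above the threshold) covers the screen in 8x8 blocks each frame:

Code:
void draw() {
  background(255);
  noStroke();
  fill(0);
  // cover the canvas in 8x8 blocks and let adapt() subdivide where needed
  for (int y = 0; y < 160; y += 8)
    for (int x = 0; x < 160; x += 8)
      adapt(x, y, 8);
}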


This could be further improved by caching the pixels we've checked so far -- since there is a high degree of overlap across the blocks (neighbouring blocks share corner samples, and a parent's corners are reused by its children). I haven't tried this yet, but intend to do so soon.
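To give a rough idea of what I mean (again, untried, so treat it as a sketch): a 161x161 table of corner samples -- one extra row and column so corners on the far edges have a slot -- where -1 means "not computed yet", cleared at the start of each frame:

Code:
int[][] cache = new int[161][161];

boolean fieldAtCached(int x, int y) {
  if (cache[x][y] == -1) cache[x][y] = fieldAt(x, y) ? 1 : 0;  // 0 = outside, 1 = inside
  return cache[x][y] == 1;
}

void clearCache() {  // call once per frame, before the adapt() passes
  for (int y = 0; y < 161; y++)
    for (int x = 0; x < 161; x++)
      cache[x][y] = -1;
}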

The adaptive technique works very well when you have something that needs to be computed for each pixel, and each chunk of computation has a high overhead: raytracing and raycasting are the immediate examples that come to mind. So if anyone is trying to do such things, perhaps the technique can aid you.

I'm planning to attempt a particularly complex realtime 3D demo with metaballs, and so I'm finding that the more methods I can come up with to reduce the amount of computation I need, the better.