This is the archive Discourse for the Processing (ALPHA) software.
Please visit the new Processing forum for current information.

   Processing 1.0 _ALPHA_
   Programming Questions & Help
   Programs
(Moderators: fry, REAS)
   RAM vs tape
« Previous topic | Next topic »

Pages: 1 
   Author  Topic: RAM vs tape  (Read 389 times)
benelek

RAM vs tape
« on: Feb 19th, 2003, 3:47pm »

a general programming strategy question.
 
is there a good way to access only the relevant objects in some arrangement without scanning through all of them and comparing their eligibility?
 
for instance, suppose i have a whole lot of objects arranged in some random way on the screen, and i want to determine which one is closest to the cursor. is there a faster way to do it than manually scanning through each of them, computing its distance from the cursor and comparing results? (speed matters here because there may be a huge number of things arranged on the screen.)
 
this could also be applied to GUI methods: if i have a whole lot of menu items, is there a faster way to determine which one should be highlighted than comparing the mouse position against the rectangular bounds of each menu item?
 
thanks,
 
-jacob
 
fry


Re: RAM vs tape
« Reply #1 on: Feb 19th, 2003, 6:09pm »

spatial subdividing, octrees, and similar algorithms are all about this problem.
 
the general idea is that you break up your screen into regions, then figure out which objects fall in which region (which can be done in linear time). then you only need to compare against objects in regions that are themselves within a certain range of the mouse (or whatever you're measuring proximity to). this reduces the number of objects you have to examine with the slower per-object comparisons.
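a minimal sketch of that grid idea in plain java (the cell size, grid dimensions, and names like bucket and nearby are all illustrative, not from any particular library):

```java
// Sketch: bucket points into a coarse grid, then only search nearby cells.
// Cell size and grid dimensions are arbitrary choices for a 640x480 screen.
import java.util.ArrayList;
import java.util.List;

public class Grid {
    static final int CELL = 40;            // cell size in pixels
    static final int COLS = 16, ROWS = 12; // covers a 640x480 screen

    // cells[c][r] holds the indices of all points falling in that cell.
    @SuppressWarnings("unchecked")
    static List<Integer>[][] bucket(float[] x, float[] y) {
        List<Integer>[][] cells = new List[COLS][ROWS];
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                cells[c][r] = new ArrayList<>();
        // one linear pass assigns every point to its cell
        for (int i = 0; i < x.length; i++) {
            int c = Math.min(COLS - 1, Math.max(0, (int) (x[i] / CELL)));
            int r = Math.min(ROWS - 1, Math.max(0, (int) (y[i] / CELL)));
            cells[c][r].add(i);
        }
        return cells;
    }

    // candidate points in the 3x3 block of cells around (mx, my) --
    // only these need the slower per-object distance comparison.
    static List<Integer> nearby(List<Integer>[][] cells, float mx, float my) {
        int mc = (int) (mx / CELL), mr = (int) (my / CELL);
        List<Integer> out = new ArrayList<>();
        for (int c = mc - 1; c <= mc + 1; c++)
            for (int r = mr - 1; r <= mr + 1; r++)
                if (c >= 0 && c < COLS && r >= 0 && r < ROWS)
                    out.addAll(cells[c][r]);
        return out;
    }
}
```

note the tradeoff: if the objects move every frame you pay the linear re-bucketing pass each frame, but that's usually far cheaper than comparing every object against every query.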
 
this can be taken further by making the subdivision hierarchical, which is what structures like octrees do.
 
these sorts of methods are also used for things like astrophysics, where you're trying to simulate billions of objects as particles and figure out how they're affecting one another. the barnes-hut algorithm is a specific example; check google for the paper and the code.
 
in lots of cases that can be overkill... there are some simpler things you can do to prune the work down. for instance, in calculating distance, you lose time 1) traversing the list and 2) doing the square and square root computations for each object:
 
float distX = abs(mouseX - particleX[i]);
float distY = abs(mouseY - particleY[i]);
 
// this is the one that hurts most
float dist = sqrt(distX*distX + distY*distY);
 
so rather than doing that dist calculation, you can first see whether distX and distY are even within the range of plausibility. i.e. if you're wondering whether it's within a range of 20, you can just do this:
 
if ((mouseX - particleX[i] < -20) ||  
    (mouseX - particleX[i] > 20) ||
    (mouseY - particleY[i] > 20) ||
    (mouseY - particleY[i] < -20)) continue;
 
before doing any of the other calculations... this winds up being faster because no float variables are created or assigned; the 'if' runs entirely on temporary values inside the vm.
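putting the rejection test into a full nearest-object loop might look like the sketch below (the method name and range parameter are illustrative). one extra trick worth noting: for finding the *closest* object you never need sqrt at all, since comparing squared distances gives the same winner.

```java
// Sketch: find the point nearest to (mx, my), using the cheap
// bounding-box rejection test before any distance arithmetic.
public class Nearest {
    // returns the index of the nearest point within `range`, or -1 if none
    static int nearest(float[] px, float[] py, float mx, float my, float range) {
        int best = -1;
        float bestD = Float.MAX_VALUE;
        for (int i = 0; i < px.length; i++) {
            // cheap rejection first: skip anything outside the box entirely
            if (mx - px[i] < -range || mx - px[i] > range ||
                my - py[i] < -range || my - py[i] > range) continue;
            float dx = mx - px[i];
            float dy = my - py[i];
            // compare squared distances -- sqrt is never needed to rank them
            float d = dx * dx + dy * dy;
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }
}
```

with many objects spread across the screen, most iterations exit at the `continue` and never touch the multiplications.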
 