We closed this forum 18 June 2010. It has served us well since 2005 as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up and get a new user account. We're sorry about that inconvenience, but we think it's better in the long run. The content on this forum will remain online.
3D to screen transformation pipeline
Jul 15th, 2006, 8:39pm
 
Despite the many posts addressing various aspects of this, I have yet to see a clear and complete explanation of exactly how a 3D point is transformed into a 2D screen point plus a z-buffer value.  There seems to be some mystery about this, so I have gathered some information and tried to reverse-engineer the pipeline myself.

Here are my findings so far:

You can grab the projection and modelview matrices using g.projection and g.modelview respectively.  Use the form g.projection.m00, g.projection.m01, and so on to access the entries of those matrices.  Given a point V = (x, y, z, 1), we get a point V' = (x', y', z', w') by multiplying V' = P*M*V.  Then divide x', y', and z' by w' to get the normalized coordinates xx = x'/w', yy = y'/w', and zz = z'/w'.
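To make that step concrete, here is a minimal Java sketch of V' = P*M*V followed by the perspective divide.  The identity matrices are hypothetical stand-ins; in a real Processing sketch you would copy the values out of g.projection and g.modelview.

```java
// Sketch of the clip-space transform described above.
// Identity matrices stand in for g.projection / g.modelview.
public class ClipTransform {
    // Multiply a 4x4 row-major matrix by a 4-vector.
    static double[] mult(double[][] m, double[] v) {
        double[] out = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                out[i] += m[i][j] * v[j];
        return out;
    }

    // 4x4 identity, for illustration only.
    static double[][] identity() {
        double[][] m = new double[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1.0;
        return m;
    }

    public static void main(String[] args) {
        double[][] M = identity(), P = identity();
        double[] V = {1.0, 2.0, -3.0, 1.0};      // point (x, y, z, 1)
        double[] Vp = mult(P, mult(M, V));       // V' = P * M * V
        // Perspective divide: xx = x'/w', yy = y'/w', zz = z'/w'
        double xx = Vp[0] / Vp[3], yy = Vp[1] / Vp[3], zz = Vp[2] / Vp[3];
        System.out.println(xx + " " + yy + " " + zz); // 1.0 2.0 -3.0
    }
}
```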

The screen coordinates xs and ys (in pixels) are obtained by mapping xx and yy into the viewport.  In Processing, this means that (0, 0) is the top-left corner and the viewport dimensions are width x height.  You can think of xx and yy as lying in the range [-1, 1] when the point is on screen.  Thus, xs = (width/2) + xx*(width/2) and ys = (height/2) + yy*(height/2).
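Those two viewport formulas can be sketched directly; the 640x480 window size here is just an assumed example.

```java
// Viewport mapping from normalized coordinates in [-1, 1] to pixels,
// using the formulas above.  Window size is an assumption for the demo.
public class ViewportMap {
    static double toScreenX(double xx, int width)  { return width  / 2.0 + xx * (width  / 2.0); }
    static double toScreenY(double yy, int height) { return height / 2.0 + yy * (height / 2.0); }

    public static void main(String[] args) {
        int width = 640, height = 480;
        System.out.println(toScreenX(0.0, width));    // center column: 320.0
        System.out.println(toScreenX(-1.0, width));   // left edge: 0.0
        System.out.println(toScreenY(1.0, height));   // bottom edge: 480.0
    }
}
```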

In theory, zz should be the z-buffer value (stored at g.zbuffer[ys*width + xs]).  However, my observations suggest this isn't so.  In OpenGL, there is an additional step that maps z-buffer values into the range set by glDepthRange(near, far), roughly zs = (far+near)/2 + zz*(far-near)/2.  I tried to find the near and far values for Processing using g.cameraNear and g.cameraFar, but these don't match the results I get from screenZ() or from inspecting the projection matrix (see matrix R at http://fly.cc.fer.hr/~unreal/theredbook/appendixg.html).

Could someone please tell me the following information:
1) How EXACTLY is the zbuffer value in Processing computed?  Please list the full process without skipping a step.  My guess is that something happens to zz, but what?
2) If near and far are needed in this computation, how do we query the near and far clipping plane values?
Re: 3D to screen transformation pipeline
Reply #1 - Jul 16th, 2006, 6:18am
 
Okay, I figured it out by reading the source code here http://dev.processing.org/source/index.cgi/trunk/processing/core/PGraphics3.java?rev=2286&view=markup and checking the implementation of the screenZ() function.

zs = 1/2 + zz/2 = (zz+1)/2

In other words, zz is simply remapped from [-1, 1] to [0, 1]; the near and far clipping plane values don't enter into it.
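Putting the whole thread together, here is a sketch of the full pipeline ending in that depth remap.  The identity matrices are placeholders; in a real sketch the values come from g.projection and g.modelview, and this is a reconstruction of what screenX()/screenY()/screenZ() compute rather than the actual PGraphics3 code.

```java
// Full reconstruction of the pipeline discussed in this thread:
// V' = P*M*V, perspective divide, viewport map, zs = (zz + 1) / 2.
public class FullPipeline {
    // Multiply a 4x4 row-major matrix by a 4-vector.
    static double[] mult(double[][] m, double[] v) {
        double[] out = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                out[i] += m[i][j] * v[j];
        return out;
    }

    // Identity matrix, standing in for g.projection / g.modelview.
    static double[][] identity() {
        double[][] m = new double[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1.0;
        return m;
    }

    // Returns {xs, ys, zs} for a homogeneous point v = (x, y, z, 1).
    static double[] project(double[][] P, double[][] M, double[] v,
                            int width, int height) {
        double[] c = mult(P, mult(M, v));                 // clip space
        double xx = c[0] / c[3], yy = c[1] / c[3], zz = c[2] / c[3];
        double xs = width / 2.0 + xx * (width / 2.0);     // viewport x
        double ys = height / 2.0 + yy * (height / 2.0);   // viewport y
        double zs = (zz + 1.0) / 2.0;                     // depth, per screenZ()
        return new double[] { xs, ys, zs };
    }

    public static void main(String[] args) {
        double[] s = project(identity(), identity(),
                             new double[] {0.0, 0.0, 0.0, 1.0}, 640, 480);
        System.out.println(s[0] + " " + s[1] + " " + s[2]); // 320.0 240.0 0.5
    }
}
```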