We closed this forum 18 June 2010. It has served us well since 2005 as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up and get a new user account. We're sorry about that inconvenience, but we think it's better in the long run. The content on this forum will remain online.
Index › Programming Questions & Help › OpenGL and 3D Libraries › unprojecting images using depth buffer
unprojecting images using depth buffer (Read 531 times)
Nov 21st, 2008, 5:31pm
 
I'm trying to take an image that stores the depth buffer and recreate the 3D scene from it. Starting from this image:

http://binarymillenium.googlecode.com/svn-history/r341/trunk/processing/depthbuffer/frames/depth/depth10000.png

and from that producing this overhead view image:

http://binarymillenium.googlecode.com/svn-history/r343/trunk/processing/depthvis/frames/frame00001.jpg

My problem is that all the lines in that image should be parallel or orthogonal, but they all come out slightly skewed.

I set the perspective with the perspective() command:

perspective(PI*0.5, float(width)/float(height), near, far);
(and for simplicity width == height)

and then extract the depth with this and store it in a texture:

FloatBuffer fb = BufferUtil.newFloatBuffer(width*height);
// set up a FloatBuffer to receive the whole depth buffer

gl.glReadPixels(0, 0, width, height, GL.GL_DEPTH_COMPONENT, GL.GL_FLOAT, fb);
fb.rewind();
...

int ind1 = (height-j-1)*width+i;  // glReadPixels rows start at the bottom
float d = fb.get(ind1);

// glReadPixels returns window-space depth in [0,1]; remap to NDC [-1,1]
// before applying the linearization formula below
d = 2.0*d - 1.0;
d = -2*far*near/(d*(far-near) - (far+near));

float distf = 1.0 - (d/far);

int ind = j*width+i;
tx.pixels[ind] = color(distf*255);

I found the d = -2*far*near/(d*(far-near) - (far+near)) formula in the Wikipedia entry on Z-buffering; is that correct for Processing and OpenGL?
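For reference, here's a minimal sketch of that linearization as a standalone function (my own naming, not from the original sketch). The one caveat is that glReadPixels with GL_DEPTH_COMPONENT returns window-space depth in [0,1], so it has to be remapped to NDC [-1,1] before the Wikipedia formula applies; a quick sanity check is that the near plane should map back to `near` and the far plane to `far`:

```java
public class DepthLinearize {
    // Convert a window-space depth value d in [0,1] (as returned by
    // glReadPixels with GL_DEPTH_COMPONENT) back to eye-space distance.
    static float linearize(float d, float near, float far) {
        float zNdc = 2.0f * d - 1.0f;  // window [0,1] -> NDC [-1,1]
        return 2.0f * far * near / (far + near - zNdc * (far - near));
    }

    public static void main(String[] args) {
        float near = 1.0f, far = 100.0f;
        System.out.println(linearize(0.0f, near, far)); // prints 1.0 (near plane)
        System.out.println(linearize(1.0f, near, far)); // prints 100.0 (far plane)
    }
}
```

Note how nonlinear the mapping is: d = 0.5 already linearizes to only about 1.98 with these planes, so almost all depth precision sits near the camera.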

In the second application, where I take the depth buffer and transform it out of screen coordinates, I'm basically multiplying the depth by the screen coordinates to get back the original xyz.

The other key thing is that the display is twice as wide as it is tall, because the PI/2 fov produces a view that's twice as wide as it is deep (at depth z the frustum spans 2*z*tan(PI/4) = 2*z).

float yf = (float)ypixelindex/(float)tx.width - 0.5;  // centered, in [-0.5, 0.5]

float zc = 0.5 + d*zf;

int yc = (int)((1.0-d) * height);
int xc = (int)(width/2 + d * yf * width);

And the xc and yc become coordinates in the new image where the zc value is used as a color intensity.
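That per-pixel unprojection can be sketched as a small standalone function. This is my own formulation, not the code from the repo: it assumes fov = PI/2 and a square viewport as in the perspective() call above, so tan(fov/2) = 1 and the frustum half-width at eye-space depth z is just z:

```java
public class Unproject {
    // Hypothetical helper: map a pixel (i, j) plus linearized eye-space
    // depth zEye back to an eye-space point, assuming fov = PI/2 and a
    // square viewport (tan(fov/2) == 1, aspect == 1).
    static float[] unproject(int i, int j, int width, int height, float zEye) {
        float xNdc = 2.0f * i / width - 1.0f;   // pixel -> NDC [-1,1]
        float yNdc = 1.0f - 2.0f * j / height;  // flip: GL rows start at bottom
        // With tan(fov/2) = 1, the frustum half-extent at depth zEye is zEye,
        // so scaling the NDC coordinates by zEye recovers eye-space x/y.
        return new float[] { xNdc * zEye, yNdc * zEye, -zEye };
    }

    public static void main(String[] args) {
        float[] p = unproject(256, 256, 512, 512, 10.0f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // prints 0.0 0.0 -10.0
    }
}
```

The center pixel should always land on the -z axis regardless of depth, which makes a handy sanity check that the skew isn't coming from this stage.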

Can anyone spot anything obvious that I'm doing wrong?  I think it's something sort of subtle, otherwise my results wouldn't be nearly right.  Is there another way to extract depth that might avoid some problems?


Full source:
http://code.google.com/p/binarymillenium/source/browse/#svn/trunk/processing/depthbuffer

(the Saito OBJ loader is needed there)

http://code.google.com/p/binarymillenium/source/browse/#svn/trunk/processing/depthvis


Thanks,

bm