Creating a Depth of Field Shader - How to associate depth information with a PGraphics object?

edited January 2014 in GLSL / Shaders


I'm trying to recreate a depth of field shader loosely based on this file:

Depth of Field Shader

Looking at the uniforms, it receives two sampler2D handles to textures: one containing the rendered image and one containing depth information.

I've tried to implement this shader (with a simplified, non-bokeh blur kernel) using the sketch and fragment shader below. I'm using a PGraphics object to render the scene to a buffer, which gets passed (I think) to the shader as the sampler2D texture:


PShader depthOfFieldShader;
PGraphics source;
int dimensions;
PShape planeShape;

void setup() {
  size(1000, 600, P3D);
  depthOfFieldShader = loadShader("depthOfFieldShader.glsl");

  dimensions = 100;

  planeShape = createShape();
  planeShape.beginShape();
  planeShape.vertex(-dimensions, -dimensions);
  planeShape.vertex(dimensions, -dimensions);
  planeShape.vertex(dimensions, dimensions);
  planeShape.vertex(-dimensions, dimensions);
  planeShape.endShape(CLOSE);

  source = createGraphics(width, height, P3D);
}

void draw() {
  // Render the scene into the offscreen buffer
  source.beginDraw();
  source.background(0);
  source.translate(width/2, height/2);
  source.pointLight(255, 255, 255, 200, 100, 200);
  source.rotateX(frameCount / 30.0);
  source.rotateY(frameCount / 30.0);
  planeShape.setFill(0, color(0, 0, 255));
  planeShape.setFill(1, color(0, 0, 255));
  planeShape.setFill(2, color(0, 0, 255));
  planeShape.setFill(3, color(0, 0, 255));
  source.shape(planeShape);
  source.endDraw();

  // Apply the blur shader while drawing the buffer to the screen
  shader(depthOfFieldShader);
  image(source, 0, 0);
}



uniform sampler2D texture;

varying vec4 vertColor;

varying vec4 vertTexCoord;

void main(void) {

    float depth = gl_FragCoord.z / gl_FragCoord.w;

    float sampleOffset = depth / 100000.0;

    // 3x3 blur kernel sampled around the current texture coordinate
    vec4 col = vec4(0.0);
    col += texture2D(texture, vertTexCoord.st + vec2(-sampleOffset, -sampleOffset)) * 1.0;
    col += texture2D(texture, vertTexCoord.st + vec2( 0.0,          -sampleOffset)) * 2.0;
    col += texture2D(texture, vertTexCoord.st + vec2( sampleOffset, -sampleOffset)) * 1.0;

    col += texture2D(texture, vertTexCoord.st + vec2(-sampleOffset,  0.0)) * 2.0;
    col += texture2D(texture, vertTexCoord.st + vec2( 0.0,           0.0)) * 4.0;
    col += texture2D(texture, vertTexCoord.st + vec2( sampleOffset,  0.0)) * 2.0;

    col += texture2D(texture, vertTexCoord.st + vec2(-sampleOffset,  sampleOffset)) * 1.0;
    col += texture2D(texture, vertTexCoord.st + vec2( 0.0,           sampleOffset)) * 2.0;
    col += texture2D(texture, vertTexCoord.st + vec2( sampleOffset,  sampleOffset)) * 1.0;

    col /= 16.0;

    gl_FragColor = col;
}

As you can see, I'm trying to determine the depth using gl_FragCoord.z / gl_FragCoord.w, which I've used successfully before in a fogging shader. The problem is that the shader only runs when the source PGraphics buffer is displayed. Since that buffer is drawn as a flat image at a z depth of 0, all the depth information associated with the rotating square is lost and everything gets blurred equally.
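As a sanity check on the kernel itself (independent of the depth problem), the 3x3 weights above are a standard Gaussian approximation that sums to 16, which is why the shader divides by 16.0. A minimal CPU reference in plain Java (a hypothetical helper, not part of the sketch) shows the same weighted average:

```java
public class BlurKernel {
    // 3x3 approximation of a Gaussian; the weights sum to 16
    static final int[][] K = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};

    // Apply the kernel at (x, y) on a grayscale image, clamping at the borders
    static double sample(double[][] img, int x, int y) {
        double acc = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int cx = Math.min(Math.max(x + dx, 0), img[0].length - 1);
                int cy = Math.min(Math.max(y + dy, 0), img.length - 1);
                acc += img[cy][cx] * K[dy + 1][dx + 1];
            }
        }
        return acc / 16.0; // normalize so a constant image is unchanged
    }
}
```

Because the weights are normalized, a region of constant color passes through unchanged, and only edges get softened.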

So, my question is... is there any way to capture the depth information and pass it to the shader, using the depth buffer, a depth texture, or similar? I know this should be possible, as this (impressive) project claims to have used a modified version of the same shader in Processing.

Generating Utopia

Any help or pointers gratefully received



  • You could read the contents of the depth buffer using glReadPixels(), but you would need to do some additional low-level calls to properly get a hold of the framebuffer the PGraphics object renders to...

    Maybe one alternative could be to do a second render pass where you draw the same geometry (but without any lights) into another PGraphics, with a color shader that simply outputs the depth value:

    uniform float maxDepth;

    void main(void) {
      float depth = gl_FragCoord.z / gl_FragCoord.w;
      gl_FragColor = vec4(vec3(depth / maxDepth), 1.0);
    }
    Then, you could read this PGraphics with the depth information as a second sampler2D in your original shader and use it to control the blur algorithm.
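    Wiring this up on the sketch side could look roughly like the sketch below. This is only an outline under stated assumptions: `drawGeometry()` is a hypothetical helper standing in for your scene code, and the uniform name `depthTexture` must match whatever your blur shader declares.

```processing
PGraphics scene, depthPass;
PShader depthShader, blurShader;

void draw() {
  // First pass: same geometry, no lights, with the depth-writing shader
  depthPass.beginDraw();
  depthPass.shader(depthShader);
  depthShader.set("maxDepth", 1000.0);
  drawGeometry(depthPass);   // hypothetical helper drawing the scene
  depthPass.endDraw();

  // Second pass: the normal lit render
  scene.beginDraw();
  drawGeometry(scene);
  scene.endDraw();

  // Bind the depth map as a second sampler2D on the blur shader,
  // then draw the lit buffer through it
  blurShader.set("depthTexture", depthPass);
  shader(blurShader);
  image(scene, 0, 0);
}
```

    PShader.set() accepts a PImage (a PGraphics is one), which is how the depth pass reaches the blur shader as its second texture.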

  • edited January 2014

    @Poersch showed both solutions:

    Getting the depth buffer (only if anti-aliasing is not used)

    Shader to store depth information

    I tried both on Processing 1, and surprisingly the shader solution proved to be faster for HD resolutions.

  • @kosowski, thanks for pointing to these posts, I will look a bit more into the depth-buffer read approach.

  • Great @codeanticode. I used the solution posted here, but it looks like converting and copying each pixel is slow. Having easy access to the depth buffer would be great for adding post-processing effects.

    @mattg73 I used that shader, which was easy to adapt once you get the depth buffer. There are some screen captures on my site.

  • edited January 2014

    @kosowski, @codeanticode: Yes, if you have low polygon counts but a high-resolution canvas, the shader-based method will easily outperform my getDepthValue() function. But if you are dealing with a low resolution and very high polygon counts (a few million), it could be the other way around.

    Anyhow, both solutions are kind of a workaround. While the first is designed for single pixel access, the second requires you to render a scene twice. As @codeanticode mentioned, you could access the depth buffer with low level GL calls, but wouldn't it be great if there was a more "Processing way" of doing this?

    @mattg73: In case your project doesn't require semi-transparent textures/geometry, you could write a custom shader that uses your PGraphics' alpha channel as a depth buffer. It would only be an 8-bit, low-precision depth buffer, but it should work.
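    On the fragment side, the alpha-channel trick could look roughly like this. A sketch only, assuming an opaque scene, a `maxDepth` uniform as in the earlier depth shader, and Processing's `vertColor` varying carrying the lit color:

```glsl
uniform float maxDepth;

varying vec4 vertColor;

void main(void) {
  float depth = gl_FragCoord.z / gl_FragCoord.w;
  // RGB keeps the lit color; alpha carries normalized depth
  gl_FragColor = vec4(vertColor.rgb, clamp(depth / maxDepth, 0.0, 1.0));
}
```

    The blur pass would then read the alpha of its single input texture instead of sampling a second depth texture, at the cost of losing real transparency.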

    Edit: Looks great @kosowski!

  • @Poersch, I don't think there will be a "Processing way" of accessing the depth buffer other than via low-level GL calls, since no additional API is planned for 2.x. Reading the depth buffer is a low-level operation anyway; only a few advanced users (and library developers) would worry about it. See my comments on your other post.

  • edited January 2014

    In the past I have used the shader method to get a depth map as well. However, the problem with this method is, as mentioned, that it requires a second render pass. For this reason - and because I am interested in deferred shading - I have been thinking about Multiple Render Targets (MRT). I haven't tried implementing it yet though, so I don't know if this is currently easy to implement via an extension. Might be interesting to investigate though...

  • Thanks all for the suggestions, nice to have alternatives to try and based on such expert input!

    My scene has a high poly count and I'd like to achieve a full-screen display where possible, so I'm going to try both routes. It's encouraging to see the performance you achieved, @kosowski. Nice job combining those shaders!

    I'll share any successful results!

  • So I am basically on the same path as you @mattg73, only a few months behind :) Did you manage to get the shader from Martins Upitis to work in Processing? If so, care to share the results?
