stu_pidd_cow

Depth buffer isn't linear?


I've got a large environment that looks a bit like this:

 

[Screenshot: city1_zps3fd3271c.png — overview of the environment]

 

I'm trying to visualize the depth buffer using a pixel shader like this:

struct VS_OUTPUT
{
    float4 position : SV_Position;
    float2 texcoord : TEXCOORD;
    float3 normal : NORMAL;
    float4 worldposition : POSITION;
};

float4 main( in VS_OUTPUT input ) : SV_TARGET
{
    float shade = input.position.z / input.position.w;
    return float4(shade, shade, shade, 1.0f);
}

The result I get is this:

 

[Screenshot: city2_zpsc991d62a.png — the resulting depth visualization]

 

 

The depth appears to be very much non-linear. I was expecting something more along the lines of this:

 

[Screenshot: city3_zpse2b1203f.png — the expected, more linear-looking depth]

 

I assume I'm calculating the depth wrong, but every source I find says to do it this way.

 

Btw, I'm using D3D11 with VS2012.

 

On a related note: Is there a way to see an image of the depth buffer within VS2012's graphics debugger (diagnostics)?

 

Thanks.

 


EDIT: tl;dr version: I basically just tell you to do exactly what you're already doing. My bad! Sorry this post likely won't be much help to you.

 

An SV_Position input to a pixel shader is in viewport coordinates, so input.position.z is your NDC z coordinate transformed into viewport space. If we call the homogeneous clip-space output from your vertex shader output.position, then the viewport-space value your pixel shader receives (input.position.z) is:

input.position.z = Viewport.MinDepth + output.position.z / output.position.w * (Viewport.MaxDepth - Viewport.MinDepth);

If you want to recover homogeneous clip-space z then just solve for output.position.z:

input.position.z = Viewport.MinDepth + output.position.z / output.position.w * (Viewport.MaxDepth - Viewport.MinDepth);
(input.position.z - Viewport.MinDepth) = output.position.z / output.position.w * (Viewport.MaxDepth - Viewport.MinDepth);
(input.position.z - Viewport.MinDepth) / (Viewport.MaxDepth - Viewport.MinDepth) = output.position.z / output.position.w;
(input.position.z - Viewport.MinDepth) * output.position.w / (Viewport.MaxDepth - Viewport.MinDepth) = output.position.z

output.position.z = (input.position.z - Viewport.MinDepth) * output.position.w / (Viewport.MaxDepth - Viewport.MinDepth)

You'll notice that you need output.position.w. In D3D10/11 the w component of SV_Position in the pixel shader is actually 1.0f / output.position.w, so you can recover it from there.
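Putting that together, a minimal sketch of the recovery in the pixel shader (assuming the common case of Viewport.MinDepth = 0 and Viewport.MaxDepth = 1, so the viewport terms drop out):

float4 main( in VS_OUTPUT input ) : SV_TARGET
{
    // SV_Position.w in the pixel shader is 1 / (clip-space w)
    float clipW = 1.0f / input.position.w;
    // undo the perspective divide to get clip-space z back
    float clipZ = input.position.z * clipW;
    // clipZ still needs remapping to [0, 1] before it's useful as a gray level
    return float4(clipZ, clipZ, clipZ, 1.0f);
}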

 

Here's a good thread that's on topic: http://www.gamedev.net/topic/626831-pixel-shader-input-sv-position/

 

EDIT: fixed bit about getting output.position.w

Edited by Samith


One thing that might actually be helpful to you: the value that you get from input.position.z / input.position.w is the clip-space z coordinate prior to the perspective divide (assuming the usual MinDepth = 0, MaxDepth = 1 viewport). This isn't the handiest coordinate to use as-is: with the standard D3D projection it ranges over [0, FarClip] for geometry between the clip planes, so most of its values fall outside the [0, 1] range you're expecting based on the second image you posted. Fortunately, this clip-space z is linear in view-space z, so mapping it to [0, 1] is a simple operation: z = clipZ / FarClip gives 0 at the near plane and 1 at the far plane.
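As a sketch (FarClip is an assumed constant-buffer value holding the far clip distance, not something from the original code):

float4 main( in VS_OUTPUT input ) : SV_TARGET
{
    // z / w on SV_Position undoes the viewport transform and perspective divide,
    // giving back pre-divide clip-space z, which is linear in view-space z
    float clipZ = input.position.z / input.position.w;
    float shade = clipZ / FarClip; // 0 at the near plane, 1 at the far plane
    return float4(shade, shade, shade, 1.0f);
}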

Edited by Samith


I'm not so familiar with D3D so I may be wrong ...

 

Looking at the perspective projection matrix, I'd say that input.position's z and w are computed as

    zd := f / ( f - n ) * z - n * f / ( f - n )

    wd := z

where f and n are the distances of the far and near clipping planes. Then depth is computed by division as

    d := zd / wd = ( f - n * f / z ) / ( f - n )

so that

    d|z=n = 0

    d|z=f = +1

Obviously d is non-linear in z, with less resolution as z approaches f. This is the depth that gets stored in the depth buffer (leaving storage format adaptation aside).
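To get a feel for how strong the effect is, take for example n := 1 and f := 1000. Then d already reaches 0.5 at z = 2, and halfway into the scene

    d|z=500 = ( 1000 - 1000 / 500 ) / ( 1000 - 1 ) = 998 / 999 ≈ 0.999

so nearly the entire scene is squeezed into the near-white sliver at the top of the [0,1] range.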

 

So, with input.position.w being z, a linear distance in camera space is already available. The fragment will be clipped if z is not inside [n, f], and it should be displayed as a gray level inside the normalized range [0, 1], so a transformation is needed

   c := ( z - n ) / ( f - n )

which can be used as color

   ( c, c, c, 1 )

which is black when the distance equals the near clipping distance and white when it equals the far clipping distance (it is of course possible to use 1 - c instead if white should denote near distances).
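Note that by the time SV_Position arrives in the pixel shader its w has been inverted, so the camera-space z has to be recovered there as 1 / input.position.w. A sketch, with n and f assumed to come in via a constant buffer:

float4 main( in VS_OUTPUT input ) : SV_TARGET
{
    // SV_Position.w in the pixel shader is 1 / w_clip, and w_clip is camera-space z
    float z = 1.0f / input.position.w;
    float c = saturate( ( z - n ) / ( f - n ) ); // black at the near plane, white at the far plane
    return float4(c, c, c, 1.0f);
}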

 

However, a problem arises: this works well only when the near and far clipping planes are chosen with the scene in mind. E.g. the 1st picture in the OP shows a wide depth range due to the bird's-eye view, but the other 2 pictures are shot from street level, where the buildings restrict the sight. If the same near and far clipping distances are used for the latter 2 pictures as for the 1st one, their content is concentrated in "a few" shades of dark gray, so to say.

 

The few depth viewers I know let you select a depth range that is then expanded to the full gray-level range for viewing purposes.


Perspective Z/W is hyperbolic in view-space depth, not linear. It's not really great for visualizing depth on its own. You can convert it to a linear Z value using your projection matrix, or, as haegarr suggests, you can just pass w from your vertex shader, which is already your linear view-space Z value. To make it into a viewable [0, 1] value you'll need to divide by the far clip distance, or do Z = (Z - NearClip) / (FarClip - NearClip).
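A minimal sketch of the pass-it-from-the-VS approach (viewdepth, worldView, NearClip and FarClip are assumed names, not from the original code):

// vertex shader: stash camera-space z in a spare interpolator
output.viewdepth = mul(input.position, worldView).z; // equals output.position.w after projection

// pixel shader: remap to [0, 1] for display
float shade = (input.viewdepth - NearClip) / (FarClip - NearClip);
return float4(shade, shade, shade, 1.0f);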


You can view the depth buffer by double-clicking the texture object, which you can find in the object table or by inspecting the device context to see which texture is bound through the depth-stencil view. However, doing this is pretty much useless in the new VS debugger, since the whole thing will just be white due to the non-linearity of the depth buffer. In PIX you used to be able to remap the visual range of a depth buffer to [0.9, 1.0] in order to make it viewable, but as far as I know there's no way to do this in the VS graphics debugger. It also can't view D24-format depth buffers, only D16 or D32.
