Render Depth Buffer?



Hi guys,

I want to experiment with rendering techniques and I need the depth buffer rendered to a surface. From what I gather the best way is to do a pass with a specialized pixel shader, but I'm not sure how to write that shader.

I think it should maybe be position.z/position.w, does that sound right?

Thanks.

And maybe a × 0.5 + 0.5.
float fDepth = (pos.z / pos.w) * 0.5f + 0.5f;

L. Spiro


Hi, everything comes out solid white.

// vertex shader
VS_OUTPUT vs_main(VS_INPUT input)
{
    VS_OUTPUT output;

    output.position = mul(input.position, WorldViewProjection);
    output.wpos = output.position;    // copy of the clip-space position for the pixel shader

    return output;
}

// pixel shader
float4 ps_main(VS_OUTPUT input) : COLOR
{
    float depth = (input.wpos.z / input.wpos.w) * 0.5f + 0.5f;

    return float4(depth, depth, depth, 1);
}

Thanks.


I think that the *0.5f + 0.5f is unnecessary, since the z/w value is already in the 0..1 range.

Otherwise, as far as I know, the w-division is done after the vertex shader, which means that dividing by wpos.w in the pixel shader is the same as dividing by one.

So if you output just input.wpos.z you'll get non-linear depth in your render target. This is enough for shadow mapping, for example.

If you pass a view-space position to your pixel shader (i.e. multiply the vertex by the world and view matrices) you can store a linear depth value (you'll need to divide the view-space z by the far plane value to scale it back to 0..1).
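The linear variant described above can be sketched numerically (plain Python standing in for the shader math; the function name and values are illustrative, not from the thread):

```python
# Linear depth as described above: take the view-space z
# (vertex transformed by world * view only) and divide by the far plane.
def linear_depth(z_view, z_far):
    """View-space depth rescaled to 0..1; linear in distance."""
    return z_view / z_far

z_far = 200.0
# Halfway to the far plane really comes out as 0.5, unlike z/w:
print(linear_depth(100.0, z_far))  # 0.5
```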

Cheers!

Edited by kauna


Yes, z/w is [0, 1] in DirectX. There's no need to range compress it.
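A quick numeric sanity check of this (plain Python; the projection terms assume a D3D-style left-handed perspective matrix, where clip.z = z*f/(f-n) - n*f/(f-n) and clip.w = z):

```python
# z/w for a D3D-style perspective projection: 0 at the near plane,
# 1 at the far plane, so no range compression is needed.
def depth_zw(z_view, n, f):
    """Post-projection z/w for view-space depth z_view."""
    clip_z = z_view * f / (f - n) - n * f / (f - n)
    clip_w = z_view
    return clip_z / clip_w

n, f = 1.0, 200.0
print(depth_zw(n, n, f))  # 0.0 at the near plane
print(depth_zw(f, n, f))  # 1.0 at the far plane
```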

Edited by MJP


Otherwise, as far as I know, the w-division is done after the vertex shader, which means that dividing by wpos.w in the pixel shader is the same as dividing by one.

The w division will happen "behind the scenes" right after the VS but before the pixel shader, right?

But I thought it only affects the output POSITION semantic and not other "user" interpolants (like wpos in his case), which might be stored in a TEXCOORD?


I think that you are right about the w division, that is, if you store your position in something other than the POSITION semantic.

But is there an advantage to doing the division yourself in the pixel shader?

Cheers!

Edited by kauna


Hi guys, thanks for helping.

I can only get the linear depth working (which is great) by doing what kauna described. I'm still confused why simply outputting the z/w value doesn't work:

struct VS_INPUT
{
    float4 position : POSITION;
};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float depth : TEXCOORD;
};

VS_OUTPUT vs_main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.position = mul(input.position, WorldViewProjection);
    output.depth = output.position.z / output.position.w;
    return output;
}

float4 ps_main(VS_OUTPUT input) : COLOR
{
    return float4(input.depth, input.depth, input.depth, 1);
}

It's just like the Microsoft example.

Everything is practically white. Could it be my view and projection: a top-down view with a projection of znear=1, zfar=200?

Thanks again.


Yes, z/w is [0, 1] in DirectX. There's no need to range compress it.

Yes, I was remembering my own code, but I forgot that there I was outputting the vertex positions manually and not through a POSITION semantic.
My mistake.

L. Spiro


z/w is non-linear in distance, heavily skewed towards 1. Take a look at what a graph of z/w looks like for your near and far clip planes:

[attachment=19096:save.png]

http://fooplot.com/plot/ol3rhbz7ty

The depth buffer will reach a value of 0.9 at around 10 units away from the camera, which is why it will just look white when you visualize it. If you want to visualize the depth, you either need to linearize it or rescale it in some way that makes it easier to see the contents (for instance, by doing saturate((depth - 0.9f) * 10.0f)).
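The numbers above can be checked directly (plain Python, not from the thread; the projection terms assume a D3D-style perspective matrix with n = 1, f = 200):

```python
# z/w for view-space depth z, with the poster's near/far planes.
def depth_zw(z_view, n=1.0, f=200.0):
    return (z_view * f / (f - n) - n * f / (f - n)) / z_view

def saturate(x):
    """Clamp to 0..1, like HLSL's saturate()."""
    return max(0.0, min(1.0, x))

for z in (2.0, 10.0, 50.0, 200.0):
    d = depth_zw(z)
    # The suggested rescale spreads the last 10% of the depth range over 0..1,
    # making the far end of the scene visible again.
    print(z, round(d, 4), round(saturate((d - 0.9) * 10.0), 4))
```

At 10 units the raw depth is already about 0.9, which is why the unscaled visualization looks solid white.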
