Creating a Depth Texture

I'm attempting to create a depth texture. There seems to be more than one way to go about this. Here is the shader:


float4x4 g_mWorldViewProj; // vertex world * light view * light projection

struct VERTEX_INPUT
{
    float3 Position : POSITION0;
};

struct VERTEX_OUTPUT
{
    float4 Position : POSITION0;
    float  Depth    : TEXCOORD0;
};

VERTEX_OUTPUT vs_main( VERTEX_INPUT Input )
{
    VERTEX_OUTPUT Output;

    Output.Position = mul( float4(Input.Position, 1.0), g_mWorldViewProj );

    Output.Depth = Output.Position.z;

    return Output;
}

// our pixel shader
float4 ps_main( VERTEX_OUTPUT Input ) : COLOR0
{
    return float4(Input.Depth, Input.Depth, Input.Depth, 1.0);
}


Simple enough...
Unless I am really close to geometry I only see white. When I get really close I see the white-to-black gradient. I'm following this guide, btw:
http://www.gamedev.n...d-shadows-r2193

What am I doing wrong?

EDIT: Btw, the near and far values are set to allow me to see far. With a default shader I can see everything normally. Near = 0.1, Far = 1000.0.

Well, you definitely don't want to store just z. If you want to store exactly what's stored in a depth buffer, then you want to interpolate both z and w and then output z/w from your pixel shader. However, that's not necessarily the best way to store depth to a texture, depending on what you're doing. What exactly are you going to use this depth texture for?

I'm attempting to project a texture onto the first surface it touches, similar to shadow mapping.

This has the same results...

float4x4 g_mWorldViewProj; // vertex world * light view * light projection

struct VERTEX_INPUT
{
    float3 Position : POSITION0;
};

struct VERTEX_OUTPUT
{
    float4 Position : POSITION0;
    float2 Depth    : TEXCOORD0;
};

VERTEX_OUTPUT vs_main( VERTEX_INPUT Input )
{
    VERTEX_OUTPUT Output;

    Output.Position = mul( float4(Input.Position, 1.0), g_mWorldViewProj );

    Output.Depth = Output.Position.zw;

    return Output;
}

// our pixel shader
float4 ps_main( VERTEX_OUTPUT Input ) : COLOR0
{
    float fGrey = Input.Depth.x / Input.Depth.y;
    return float4(fGrey, fGrey, fGrey, 1.0);
}

Your depth looks pure white because your display does not have sufficient bit depth to display the contents of the depth buffer. The depth buffer is usually 16-24 bits, but it is black-to-white instead of RGB, which means you can only display the first 8 bits in the buffer and everything above that is displayed as pure white. The depth buffer should be fine for reading, though; if you really need to display it you could try dividing the depth value by 2 for 16 bits or by 3 for 24 bits before you display it; that should compress it to the visible bit range.


Your depth looks pure white because your display does not have sufficient bit depth to display the contents of the depth buffer. The depth buffer is usually 16-24 bits but that is black to white instead of rgb which means you can only display the first 8 bits in the buffer and everything above that is displayed as pure white.


That's not really at all how it works. A z/w depth buffer will appear white for most of the visible depth range because z/w is non-linear, and most of the depth range ends up getting mapped to values > 0.9 with a perspective projection. A common way to make it better for visualization is to just remap [0.9, 1.0] to [0.0, 1.0] when displaying it, which you can do with saturate((depth - 0.9) * 10).

If you want, you can also just store a depth value that's linear and it will display correctly. It will also have a more even distribution of precision throughout the visible depth range. A common value to use is ViewSpaceZ / FarClip, which you could do in that shader by setting fGrey = Input.Depth.y / FarClip.

Also, if you're using a floating-point depth buffer (DXGI_FORMAT_R32F/R16F and friends, assuming D3D10+) you can actually flop the near and far planes (near plane has what you set as far plane distance, and vice versa) and switch the test direction (depth test becomes GREATER_EQUAL instead of LESS_EQUAL, etc.) for some further precision improvements. This will *NOT* work for UNORM-based formats, as it exploits the mechanics of how floating-point numbers are represented-- some unneeded extra precision is shifted away from where the camera is and ends up 'spread out' over the remaining depth range.
