Linear depth buffer for rendering shadows

7 comments, last by Dawoodoz 12 years, 10 months ago
I have created a depth buffer with a shader resource view, to be used for depth-based shadows only.
I render the world depth-only as it is seen from each shadow-casting light source.
The problem is that the depth buffer seems to be non-linear by default.

Can I make the buffer linear without compatibility problems?

Is there a fast function in HLSL that can convert it back to the real depth?

Initialization of the depth buffer:

// Depth buffer texture (typeless so it can be used both as a DSV and an SRV)
D3D11_TEXTURE2D_DESC DepthTextureDescription = {
    NewWidth,                 // UINT Width;
    NewHeight,                // UINT Height;
    1,                        // UINT MipLevels;
    1,                        // UINT ArraySize;
    DXGI_FORMAT_R32_TYPELESS, // DXGI_FORMAT Format;
    1, 0,                     // DXGI_SAMPLE_DESC SampleDesc;
    D3D11_USAGE_DEFAULT,      // D3D11_USAGE Usage;
    D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE, // UINT BindFlags;
    0,                        // UINT CPUAccessFlags;
    0                         // UINT MiscFlags;
};
if (m_pd3dDevice->CreateTexture2D( &DepthTextureDescription, NULL, &Surface->DepthBuffer ) != S_OK) {
    ::MessageBoxW(NULL, L"Could not create the depth atlas.", L"Internal error!", NULL);
}

// Depth-stencil view (depth input/output while rendering the shadow pass)
D3D11_DEPTH_STENCIL_VIEW_DESC DepthIODescription = {
    DXGI_FORMAT_D32_FLOAT,
    D3D11_DSV_DIMENSION_TEXTURE2D,
    0
};
if (m_pd3dDevice->CreateDepthStencilView( Surface->DepthBuffer, &DepthIODescription, &Surface->DepthInputAndOutput ) != S_OK) {
    ::MessageBoxW(NULL, L"Could not create depth stencil view for the depth atlas.", L"Internal error!", NULL);
}

// Shader resource view (depth output for sampling in later passes)
D3D11_SHADER_RESOURCE_VIEW_DESC TextureOutputDescription = {
    DXGI_FORMAT_R32_FLOAT,
    D3D11_SRV_DIMENSION_TEXTURE2D,
    0, // Texture2D.MostDetailedMip
    0  // Texture2D.MipLevels (set below)
};
TextureOutputDescription.Texture2D.MipLevels = 1;
if (m_pd3dDevice->CreateShaderResourceView( Surface->DepthBuffer, &TextureOutputDescription, &Surface->DepthOutput ) != S_OK) {
    ::MessageBoxW(NULL, L"Could not create shader resource view for the depth atlas.", L"Internal error!", NULL);
}
If you want to store linear depth you have to create a render target and write linear depth to it using a shader.
Or at least, I've never seen a way to do it with DepthStencilViews.
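
A minimal sketch of what such a pixel shader could look like, assuming the light's view-space position is passed down from the vertex shader and FarClip holds the light's far clip distance (both names are placeholders, not from this thread):

// Writes linear depth (view-space Z normalized by the far plane) to a render target,
// instead of relying on the non-linear z/w that the depth buffer stores.
cbuffer PerLight
{
    float FarClip; // light's far clip distance (placeholder name)
};

float LinearDepthPS(float3 viewPos : TEXCOORD0) : SV_Target
{
    // viewPos is the light-view-space position interpolated from the vertex shader.
    return viewPos.z / FarClip; // 0 at the light, 1 at FarClip
}
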
[quote name='TiagoCosta' timestamp='1309122334' post='4827996']If you want to store linear depth you have to create a render target and write linear depth to it using a shader. Or at least, I never saw a way to do it with DepthStencilViews.[/quote]


I have seen shadows being done with only a depth buffer in the DirectX SDK but I can't figure out how it works.

[quote]I have seen shadows being done with only a depth buffer in the DirectX SDK but I can't figure out how it works.[/quote]

Well, in my "engine" I render shadows with non-linear depth...

Why can't you use non-linear depth?

[url="http://wiki.gamedev.net/index.php/D3DBook:Shadow_Maps"]Check this link for some info on shadow mapping.[/url]
Check here for HLSL code to write linear depth: http://www.mvps.org/directx/articles/linear_z/linearz.htm
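
If it helps, I believe the trick described there boils down to something like this vertex-shader rescaling (WorldViewProjection and FarClip are placeholder names; treat it as a sketch rather than the article's exact code):

cbuffer PerObject
{
    float4x4 WorldViewProjection; // combined world * view * projection (placeholder)
    float FarClip;                // far clip distance of that projection (placeholder)
};

// Rescale post-projection z so that the z/w the hardware writes to the depth buffer
// increases linearly with view-space depth: 0 at the near plane, 1 at FarClip.
float4 LinearZVS(float4 position : POSITION) : SV_Position
{
    float4 clipPos = mul(position, WorldViewProjection);
    clipPos.z = clipPos.z * clipPos.w / FarClip; // after the divide by w, depth = clipZ / FarClip
    return clipPos;
}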

Note however that depth buffer precision is going to fall off closer to the near plane, which may give highly undesirable results.

Most shadow mapping samples you'll see will use the depth buffer, but there's no golden rule saying that you have to, and you can in fact write regular non-linear depth (which could also double-up as a Z-prepass) to your depth buffer but write out the shadow info to a different render target.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


[quote name='TiagoCosta']Why can't you use non-linear depth?[/quote]


I don't know what non-linear scale is used by default, nor how I can convert to and from it.
I must compare this unknown scale with a depth value from a linear matrix multiplication.

[quote]I must compare this unknown scale with a depth value from a linear matrix multiplication.[/quote]


No you don't.

You render the shadow map depth buffer using the object's world matrix and the light's view-projection matrix, right?

So the values you want to compare are the value stored in the depth buffer and a new non-linear depth calculated with the same object world matrix and light view-projection matrix... :cool:

EDIT: Don't forget that you have to divide xyz by w to complete the projection before comparing!
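
As a concrete sketch of that shadow-map pass (World and LightViewProjection are placeholder constant names, not taken from the posts above):

cbuffer PerObject
{
    float4x4 World;               // object world matrix (placeholder name)
    float4x4 LightViewProjection; // light view * projection matrix (placeholder name)
};

// Depth-only pass: no pixel shader or render target needs to be bound;
// the depth-stencil view receives z/w for every rasterized pixel.
float4 ShadowDepthVS(float4 position : POSITION) : SV_Position
{
    float4 worldPos = mul(position, World);
    return mul(worldPos, LightViewProjection);
}
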
There is no good way to store a linear depth value in a depth buffer. Mucking with your projection matrix like in that link mhagain provided can screw up the early z-cull and z compression algorithms used in graphics hardware, and will also screw up for triangles that cross the near-clipping plane. Here are a few alternatives:

1. If you manually write out depth from your pixel shader you have more options, but doing this can have serious performance implications (such as disabling early z-cull).

2. If you write your shadow depth out to a render target texture you can use whatever depth metric you'd like, but you will pay the cost of writing out that value in addition to writing out depth. Most modern graphics hardware has optimizations for depth-only rendering that you won't be able to take advantage of.

3. If you use a floating point depth buffer format, the floating point format naturally has more precision closer to 0 than closer to 1.0. Normally if you store post-projection z/w in this numerical format it makes the precision problems even worse, but if you flip the near and far planes the floating point precision distribution mostly cancels out the projection's distribution and you get a mostly-linear distribution of depth values.

If you do want to convert from post-projection z/w to a linear depth value (the Z component of your view-space position), it's really easy to do in HLSL:


// DepthBuffer is your depth-stencil buffer, Projection is your float4x4 projection matrix.
// For a standard perspective projection, z/w = _33 + _43 / viewZ, so viewZ = _43 / (z/w - _33).
float zw = DepthBuffer.Load(pixelCoord);
float linearZ = Projection._43 / (zw - Projection._33);


Anyway, as TiagoCosta mentioned, there's no inherent need to store linear depth in your shadow map depth buffer for a simple shadow comparison. When you're rendering your meshes with the light applied (or rendering the light in a deferred lighting pass), you transform the world-space pixel position by the same view * projection matrix used for rendering shadows. Then you divide XYZ by W, scale and offset X and Y to get UV coordinates, and use SampleCmp to sample the shadow map depth buffer at that UV coordinate with the computed depth.


float4 shadowPosition = mul(worldSpacePosition, ShadowViewProj);
float2 shadowUV = shadowPosition.xy / shadowPosition.w;
float shadowDepth = shadowPosition.z / shadowPosition.w;
shadowUV = shadowUV * float2(0.5f, -0.5f) + float2(0.5f, 0.5f);
float shadow = ShadowMap.SampleCmp(ShadowSampler, shadowUV, shadowDepth);
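
For completeness, the resource declarations that snippet assumes would look something like this (names and registers are placeholders):

Texture2D<float> ShadowMap : register(t0);           // the R32_FLOAT SRV of the shadow depth buffer
SamplerComparisonState ShadowSampler : register(s0); // created with a comparison function, e.g. D3D11_COMPARISON_LESS_EQUAL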


[quote]
// DepthBuffer is your depth-stencil buffer, Projection is your float4x4 projection matrix
float zw = DepthBuffer.Load(pixelCoord);
float linearZ = Projection._43 / (zw - Projection._33);
[/quote]

The shadows are now working using that method. :)

This topic is closed to new replies.
