Sampling the depth buffer in a shader in DX11


Hi,

I want to sample the depth buffer in a screen space effect in a shader that draws a full screen quad, using the back buffer and depth buffer as input. Basically I want to adjust the alpha for every pixel based on its distance from the camera, so nearby objects get almost masked out while far away objects have alphas closer to 1.0. I create my depth-stencil buffer and view like this:


D3D11_TEXTURE2D_DESC depthStencilDesc;

depthStencilDesc.Width = mRenderWindow->getClientAreaWidth();
depthStencilDesc.Height = mRenderWindow->getClientAreaHeight();
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilDesc.SampleDesc.Count = 1;
depthStencilDesc.SampleDesc.Quality = 0;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags = 0; 
depthStencilDesc.MiscFlags = 0;

VERIFY(mD3DDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer));
VERIFY(mD3DDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView));

I have a few questions.

1. Since I don't need to write to the depth buffer as I read from it in a shader, do I need to make any changes to the creation code above, or is it enough for me to unbind the depth buffer before binding it as a shader resource view?

2. Can I bind the mDepthStencilView as a shader resource view directly? Up till this point I have not sampled the depth buffer directly, only let the API use it in the depth test, so I have never thought of using it as an explicit shader input until now.

3. Since the format is DXGI_FORMAT_D24_UNORM_S8_UINT, that means that there are 24 bits for the depth, right? How do I turn those into the floating point depth value between 0 and 1 in the pixel shader? Is there a special sampler I need to create, or do I sample the xyz values and somehow combine those into a single value using bitwise operations in the shader?

Thanks for the help!



1. Since I don't need to write to the depth buffer as I read from it in a shader, do I need to make any changes to the creation code above, or is it enough for me to unbind the depth buffer before binding it as a shader resource view?


First you have to change the Texture2D format to a typeless format (DXGI_FORMAT_R24G8_TYPELESS) and add the D3D11_BIND_SHADER_RESOURCE bind flag to the texture desc.
Then create a Depth Stencil View with the format DXGI_FORMAT_D24_UNORM_S8_UINT and a Shader Resource View with the format DXGI_FORMAT_R24_UNORM_X8_TYPELESS.


D3D11_TEXTURE2D_DESC depthStencilDesc;

depthStencilDesc.Width = mRenderWindow->getClientAreaWidth();
depthStencilDesc.Height = mRenderWindow->getClientAreaHeight();
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
depthStencilDesc.SampleDesc.Count = 1;
depthStencilDesc.SampleDesc.Quality = 0;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthStencilDesc.CPUAccessFlags = 0;
depthStencilDesc.MiscFlags = 0;

D3D11_DEPTH_STENCIL_VIEW_DESC dsv_desc;
dsv_desc.Flags = 0;
dsv_desc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsv_desc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsv_desc.Texture2D.MipSlice = 0;

D3D11_SHADER_RESOURCE_VIEW_DESC sr_desc;
sr_desc.Format                    = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
sr_desc.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
sr_desc.Texture2D.MostDetailedMip = 0;
sr_desc.Texture2D.MipLevels       = -1; // use all mip levels (just one here)

ID3D11ShaderResourceView* mShaderResourceView = nullptr;

VERIFY(mD3DDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer));
VERIFY(mD3DDevice->CreateDepthStencilView(mDepthStencilBuffer, &dsv_desc, &mDepthStencilView));
VERIFY(mD3DDevice->CreateShaderResourceView(mDepthStencilBuffer, &sr_desc, &mShaderResourceView));


2. Can I bind the mDepthStencilView as a shader resource view directly? Up till this point I have not sampled the depth buffer directly, only let the API use it in the depth test, so I have never thought of using it as an explicit shader input until now.


No. You have to use the shader resource view created using the code above.
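
Note that a resource cannot be bound as a depth-stencil view and a shader resource view at the same time; the runtime nulls the conflicting slot and the debug layer warns about it. A minimal sketch of the bind/unbind sequence, assuming mD3DContext and mBackBufferRTV are your immediate context and the render target you draw the quad into:

// Unbind the depth-stencil view before binding the depth buffer as input.
mD3DContext->OMSetRenderTargets(1, &mBackBufferRTV, nullptr);
mD3DContext->PSSetShaderResources(0, 1, &mShaderResourceView);

// ...draw the full screen quad...

// Unbind the SRV again before the depth buffer is reused next frame.
ID3D11ShaderResourceView* nullSRV = nullptr;
mD3DContext->PSSetShaderResources(0, 1, &nullSRV);

(D3D11 does allow both bindings at once if the DSV is created with the D3D11_DSV_READ_ONLY_DEPTH flag, but for a post-process it's simplest to just unbind.)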

3. Since the format is DXGI_FORMAT_D24_UNORM_S8_UINT, that means that there are 24 bits for the depth, right? How do I turn those into the floating point depth value between 0 and 1 in the pixel shader? Is there a special sampler I need to create, or do I sample the xyz values and somehow combine those into a single value using bitwise operations in the shader?


The shader resource view associated with the depth texture uses the format DXGI_FORMAT_R24_UNORM_X8_TYPELESS, so when you sample the Texture2D in the pixel shader the RED channel will contain "the floating point depth value between 0 and 1".
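
One caveat: averaging depth values with a bilinear sampler produces depths that belong to no actual surface, and filtering support for this format is not guaranteed on all hardware, so a point sampler is the usual choice. A sketch, reusing the VERIFY/mD3DDevice conventions from above:

D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT; // never blend depths across edges
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState* mPointSampler = nullptr;
VERIFY(mD3DDevice->CreateSamplerState(&sampDesc, &mPointSampler));

Alternatively, Texture2D::Load (or the [] operator) in HLSL fetches a single texel by integer coordinates without any sampler at all.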

Using the depth buffer as you've described is similar to shadow mapping techniques.

Quick answer: your approach needs just a couple of changes to be used as you described. The following should work.


// make it compatible with both DXGI_FORMAT_D32_FLOAT (depth-stencil view) and DXGI_FORMAT_R32_FLOAT (shader resource view)
depthStencilDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

// Create the depth stencil view with:
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
    dsvDesc.Flags = 0;
    dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    dsvDesc.Texture2D.MipSlice = 0;

// and the resource view for the shader
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
    srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MostDetailedMip = 0;
    srvDesc.Texture2D.MipLevels = 1; // same as orig texture

Ninja'd!

EDIT: As TiagoCosta mentions (and I did not), you'll also need a shader resource view.


Thank you guys, especially TiagoCosta for your excellent explanation. It works great, but I seem to get something other than the normalized depth range out of the R component when sampling it in my shader. When drawing the resulting texture to screen I get white almost everywhere except when I move really close to an object, which suggests that the value is actually a lot larger than one and only gets into the normalized range up close.

You said:


when you sample the Texture2D in the pixel shader the RED channel will contain "the floating point depth value between 0 and 1".

...but is it possible that I still have to scale the value into the normalized range myself?

I wrote a sample to read the depth buffer as I could not find one. It's on Codeplex: https://depthstencil.codeplex.com/. It requires feature level 10 or above to work.

Hope it's of help.

Gareth

The depth buffer looking almost completely white when rendered is normal because it's non-linear.

You usually want to convert the depth to view space before you can make use of it in a shader.

There's some example code to do that at: http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/
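
The gist of it: with a standard (non-reversed) perspective projection, the [0,1] value in the depth buffer can be mapped back to linear view-space depth using only the near and far plane distances. A minimal HLSL sketch; NearZ and FarZ are placeholder values you would replace with your projection's actual planes:

// Placeholder plane distances; substitute your projection's values.
static const float NearZ = 0.1f;
static const float FarZ  = 1000.0f;

// Inverts d = FarZ/(FarZ - NearZ) * (1 - NearZ/z), the depth a standard
// projection stores, giving back the linear view-space depth z.
float LinearizeDepth(float deviceDepth)
{
    return (NearZ * FarZ) / (FarZ - deviceDepth * (FarZ - NearZ));
}

As a sanity check, LinearizeDepth(0) returns NearZ and LinearizeDepth(1) returns FarZ.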

With a large far-to-near plane ratio in the projection used to render the depth, most depths will, indeed, be closer to white. Keep that ratio as small as possible, i.e. push the near plane out as far as you can.


is it possible that I still have to scale the value into the normalized range myself?

Nope. Are you calculating the tex coords for the sample correctly? I.e., converting the screen position (range -1 to 1) to tex coords in the range 0 to 1?

texCoord.x = screenPosition.x / screenPosition.w / 2.0 + 0.5; // similar for y, but negated: texture v runs top-down

Also, ensure that the screenPosition used to calculate the tex coords is the position transformed by the worldViewProjection matrix used to render the depth, not the quad's. That's normally calculated in the vertex shader using matrices in a cbuffer, different matrices than those used to pass the quad vertex position through. EDIT: sorry - not applicable to a full screen quad.

You can also try some tricks with the pixel shader:


// play with the divisors
float temp = depthvalue / 2.0; // cool down the farther objects
// or
float temp = depthvalue * 2.0; // heat up the nearer objects
float4 color = saturate( float4( temp, 0, 0, 1.0 ) );
return color;



Nope. Are you calculating the tex coords for the sample correctly? I.e., converting the screen position (range -1 to 1) to tex coords in the range 0 to 1?

I am not doing that conversion, but do I really need to? The output does look like it should, apart from the depth values not being what I expected, but that is probably because of the non-linearity, like Adam_42 said.

I am rendering the output to a texture by drawing a fullscreen quad and binding the original back buffer and the depth-stencil shader resource view as shader input textures. Then I sample both textures using the "normal" 0 to 1 UV range and mask off certain parts of the back buffer according to the depth stored in the depth buffer, like this:


// resource declarations (register bindings assumed)
Texture2D backBuffer : register(t0);
Texture2D depthBuffer : register(t1);
SamplerState bilinearSampler : register(s0);

float4 BackgroundPS(BackgroundVSOutput input) : SV_TARGET
{
	float4 original = backBuffer.Sample(bilinearSampler, input.Tex).rgba;
	float depth = depthBuffer.Sample(bilinearSampler, input.Tex).r;

	// keep only pixels at (or very near) the far plane
	clip(depth < 0.9999f ? -1 : 1);

	return original;
}

This gives me a texture where only far away objects (basically skyboxes and impostors, i.e. things drawn without depth) are rendered. I should not need to do any UV conversions for this, right? I am not sure where the -1 to 1 range comes into play...

No conversion needed (Buckeye corrected his post). Such conversions are only needed when going from normalized device coordinates to texture space (mapping -1..1 to 0..1), though that can be done in the vertex shader already (Edit: except for the 1/w).
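
For completeness, a full screen quad vertex shader can emit those texture coordinates directly. A sketch that assumes the quad vertices already arrive in clip space as float2 positions, and that BackgroundVSOutput looks like the struct below (the field names are my guess, adjust to your actual code):

// Assumed layout for BackgroundVSOutput
struct BackgroundVSOutput
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

BackgroundVSOutput BackgroundVS(float2 pos : POSITION)
{
    BackgroundVSOutput output;
    output.Pos = float4(pos, 0.0f, 1.0f);             // already clip space, w = 1, so no divide needed
    output.Tex = float2(pos.x, -pos.y) * 0.5f + 0.5f; // map [-1,1] to [0,1], flip y for texture space
    return output;
}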

By the way: what you're doing with that clip() should be possible with the depth buffer directly, no need to bind it as SRV. Draw your full screen quad with a fixed z at that threshold value and use a depth comparison of less or less-equal (the incoming quad depth is tested against the stored depth, so only the far pixels pass). Or set z to 1 and the comparison to equal (if your depth was cleared to 1 initially).
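
A sketch of the depth-stencil state that approach would use, masking by depth test instead of clip() and leaving depth writes off (VERIFY/mD3DDevice/mD3DContext as in the earlier snippets):

// Pass only where the quad's fixed z (the threshold, e.g. 0.9999f) is
// less than or equal to the stored depth, i.e. only the far pixels.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; // read-only test
dsDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;

ID3D11DepthStencilState* mMaskState = nullptr;
VERIFY(mD3DDevice->CreateDepthStencilState(&dsDesc, &mMaskState));

// Bind it before drawing the quad:
mD3DContext->OMSetDepthStencilState(mMaskState, 0);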


I am not doing that conversion, but do I really need to?

No, you don't.

... Ninja'd by unbird with a more complete answer.


