# DX11 Depth shadow mapping question

## Recommended Posts

Hi Guys,

I am trying to get shadow mapping working and while I understand the theory behind it, I am having trouble making it work in practice.

What I have so far:

• A directional light that is rendered onto the scene via a shader.
• The same scene rendered to a RenderTargetView assigned to Texture2D slot 0 on the shader. (DXGI_FORMAT_R8G8B8A8_UNORM)
• The scene rendered from the point of view of the light, whose DepthStencilView is assigned to slot 1 on the shader. (DXGI_FORMAT_D24_UNORM_S8_UINT)

These are verified to be working as far as I can see, and the DirectX debug layer reports that everything is happy too.

I am now trying to figure out how to get the D24 part of the DSV so I can compare it against the z position of the scene vertices.

Essentially, how do I interpret the z value from the depth buffer as rendered from the light?

Many thanks in advance, shadow mapping is brand new ground for me.

Edited by DividedByZero
Clarification

##### Share on other sites
> I am now trying to figure out how to get the D24 part of the DSV

In case your code does not contain it:

Texture2D renderedShadowMap : register (t0);
Texture2D depthShadowMap : register (t1);
sampler samplerLinear : register (s0);

The correct filter for the sampler is D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT and you can otherwise use the defaults in the docs for D3D11_SAMPLER_DESC.
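In case it's useful, here is a minimal C++ sketch of creating that sampler. Everything other than Filter follows the documented D3D11_SAMPLER_DESC defaults, and `device` is an assumed existing ID3D11Device*:

```cpp
// Sketch: sampler creation for reading the shadow map.
// All fields other than Filter mirror the documented defaults.
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

ID3D11SamplerState* samplerLinear = nullptr;
HRESULT hr = device->CreateSamplerState(&samplerDesc, &samplerLinear);
```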

To access the map:

shadowMapZ = depthShadowMap.Sample(samplerLinear, shadowLocality).r;

where shadowMapZ is a scalar float and shadowLocality is a float2. I will explain how to compute shadowLocality in a moment.

> so I can compare it against the z position of the scene vertices.

While it is theoretically possible to do shadow mapping with such a comparison, I do not recommend that approach. The main problem is the homogeneous divide the hardware performs before the depth-stencil comparison. (The homogeneous divide is what allows the non-linear perspective projection to be expressed with purely linear matrices.) As a result, Z values are distributed non-linearly: most of the depth range is spent close to the camera, and distant values are packed tightly together. There is no way to turn this divide off in D3D11, and since the viewer-Z and shadow-Z vectors point in different directions the math gets complicated.
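To see the non-linearity concretely, here is a small self-contained C++ sketch (the function and the near/far values are mine, purely illustrative) of the depth a standard D3D-style perspective projection stores:

```cpp
#include <cassert>
#include <cmath>

// NDC depth written by a D3D-style perspective projection:
// z_ndc = f/(f - n) * (1 - n/z_view), which maps z_view = n to 0 and
// z_view = f to 1, but spends most of the range close to the camera.
float perspectiveNdcDepth(float zView, float n = 0.1f, float f = 100.0f)
{
    return f / (f - n) * (1.0f - n / zView);
}
```

With n = 0.1 and f = 100, a point only 1 unit from the camera (less than 1% of the way to the far plane) already stores a depth of about 0.9, which is why comparing these values against a linear view-space Z is so awkward.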

Here is what I do to avoid this problem and it also makes computing shadowLocality easy:

Computing the shadow map

This is for a directional light. Spot lights are more complicated.

The shadow map needs to use an orthographic projection. This can be done with XMMatrixOrthographicLH() when rendering the map. The input values define a box along the light vector: width and height should be large enough to cover the shadowed area (overly large values will introduce blockiness), nearZ can be 0, and farZ should be large enough to cover the maximum distance from the light source to anything in the scene.
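By contrast with the perspective case, the orthographic projection stores a depth that is a plain linear remap of view-space Z. This little C++ helper (mine, mirroring the Z mapping documented for XMMatrixOrthographicLH) shows the idea:

```cpp
#include <cassert>
#include <cmath>

// Depth written by an orthographic projection such as the one built by
// XMMatrixOrthographicLH(width, height, nearZ, farZ): a linear remap
// of view-space Z from [nearZ, farZ] to [0, 1].
float orthoNdcDepth(float zView, float nearZ, float farZ)
{
    return (zView - nearZ) / (farZ - nearZ);
}
```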

The pixel shader needs to output a "pure" Z, and a normally rendered scene from the light's angle is not needed. The pixel shader's output can simply be the .z value from the SV_POSITION member of its input struct:

float4 shadowMapPixelShader(shadowPSInput_s input) : SV_TARGET
{
float z = input.position.z;
return float4(z, z, z, 1);
}

The DSV is still needed so the shadow map resolves correctly. If you render the map, objects in the distance should be white and those close up should be black.

Final pass

The constant buffer for the vertex shader will need to hold both the matrices for the shadow map and those for rendering the scene normally. In addition to multiplying by the "regular" matrices, multiply each vertex by the shadow matrices (as was done when the map was created). This recreates the shadow Zs that were used when creating the map, so the later comparison is simple, and it almost computes shadowLocality in the process. Pass the resulting float4 to the pixel shader:

struct finalPSInput_s
{
float4 position : SV_POSITION;
float4 shadowPosition : COLOR4;
//anything else needed
};

To compute shadowLocality: The Xs and Ys in shadowPosition range from -1 to 1. As you know, Xs and Ys in a texture range from 0 to 1. Also, Ys are inverted in the rasterization process. The mapping equations are therefore:

localityX = shadowX * 0.5 + 0.5;

localityY = 0.5 - shadowY * 0.5;

After sampling the shadow map, the pixel is in shadow if (shadowPosition.z + bias > shadowMapZ), otherwise it is not. The bias is necessary to prevent artifacts and will depend on the size of the shadow box (you'll need to experiment to find the best value).
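The mapping and the comparison can be sanity-checked off-GPU; here is a small C++ sketch (the function names are mine):

```cpp
#include <cassert>
#include <cmath>

// Map the XY of a clip-space shadow position (range [-1, 1], Y up) to
// shadow-map texture coordinates (range [0, 1], Y down).
void shadowLocality(float shadowX, float shadowY, float& u, float& v)
{
    u = shadowX * 0.5f + 0.5f;
    v = 0.5f - shadowY * 0.5f;
}

// The shadow test described above: the pixel is in shadow when its
// depth (plus the bias) is greater than the depth stored in the map.
bool inShadow(float shadowPosZ, float shadowMapZ, float bias)
{
    return shadowPosZ + bias > shadowMapZ;
}
```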

So here is the pixel shader skeleton (and we don't need the shadow DSV anymore):

//at this point, the pixel should be ready to output except with only ambient light
float2 shadowLocality;
shadowLocality.x = input.shadowPosition.x * 0.5 + 0.5;
shadowLocality.y = 0.5 - input.shadowPosition.y * 0.5;
//use the RTV, not the DSV
float shadowMapZ = depthShadowMap.Sample(samplerLinear, shadowLocality).r;
if (input.shadowPosition.z + bias <= shadowMapZ)
{
//pixel is not in shadow and light color should be added
}
//output the pixel



-- blicili

##### Share on other sites

Awesome information @blicili. I have made some slight modifications to my code and am up to your heading of 'Final pass'. Most of my code was very similar up to that point, so at least I had laid a reasonable foundation.

From here on is new though, so I'll keep you posted. Thanks again for your help, it is hugely appreciated.

##### Share on other sites

I have made some significant adjustments to my shaders, although the scene still renders as normal with no hint of any shadows yet.

I'll attach what I have, but be warned, there's quite a bit of refactoring to do once it works - LOL.

SamplerState samLinear : register(s0);
Texture2D squareMap : register(t0);
Texture2D squareMapDepth : register(t1);

cbuffer WVPCB : register(b0)
{
float4x4 matWorld;
float4x4 matView;
float4x4 matProjection;
}

cbuffer LIGHT : register(b1)
{
float4x4 matLight;	// Unused
float4 lightPosition;
float4 lightAmbient;
}

cbuffer SHADOW : register(b2)
{
}

struct VS_INPUT
{
float4 position : POSITION;
float3 normal : NORMAL;
float2 textureCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
float4 position : SV_POSITION;
float3 normal : NORMAL;
float2 textureCoord : TEXCOORD0;

float4 lightPosition : COLOR1;
float4 lightAmbient : COLOR2;
float3 fragmentPosition : COLOR3;

float4 shadowPosition : COLOR4;
};

VS_OUTPUT vs_main(VS_INPUT input)
{
VS_OUTPUT output;

output.position = mul(input.position, matWorld);
output.position = mul(output.position, matView);
output.position = mul(output.position, matProjection);

output.fragmentPosition = mul(input.position, matWorld).xyz;
output.normal = mul(input.normal, (float3x3)matWorld);

output.textureCoord = input.textureCoord;
output.lightPosition = lightPosition;
output.lightAmbient = lightAmbient;

return output;
}

SamplerState samLinear : register(s0);
Texture2D squareMap : register(t0);
Texture2D squareMapDepth : register(t1);

struct VS_OUTPUT
{
float4 position : SV_POSITION;
float3 normal : NORMAL;
float2 textureCoord : TEXCOORD0;
float4 lightPosition : COLOR1;
float4 lightAmbient : COLOR2;
float3 fragmentPosition : COLOR3;

float4 shadowPosition : COLOR4;
};

float4 ps_main(VS_OUTPUT input) : SV_TARGET
{
// Depth buffer display
if (input.lightAmbient.a == 2.0f)
{
float col = input.position.z;// squareMap.Sample(samLinear, input.textureCoord);
return float4(col, col, col, 1);
}

float bias = 0.0f;		// Not worried about artefacts at this stage

//at this point, the pixel should be ready to output except with only ambient light
shadowLocality.x = shadowPosition.x * 0.5 + 0.5;
//use the RTV, not the DSV
{
// Render shadow as complete black for testing purposes
return float4(0, 0, 0, 1);
}

// Render as per normal
float3 lightColor = input.lightAmbient.rgb;
float ambientStrength = input.lightAmbient.w;
float3 ambient = ambientStrength * lightColor;

float3 norm = normalize(input.normal);
float3 lightDir = normalize(input.lightPosition.xyz - input.fragmentPosition);

float diff = max(dot(norm, lightDir), 0.0f);

float3 diffuse = diff * lightColor;
float3 result = (ambient + diffuse);

return squareMap.Sample(samLinear, input.textureCoord) * float4(result, 1.0f);
}

Hopefully I am heading in the right direction.

Thank you once again for your help. It is hugely appreciated. 😎

##### Share on other sites

I found an error; the computation of the shadow position in the Vertex Shader should be

// Shadow position: transform through world and the light's matrices
// (matViewShadow / matProjectionShadow stand in for whatever your
// light's view and projection matrices are called)
output.shadowPosition = mul(input.position, matWorld);
output.shadowPosition = mul(output.shadowPosition, matViewShadow);
output.shadowPosition = mul(output.shadowPosition, matProjectionShadow);

This error may also exist in the shader which renders the shadow map itself.

Your code looks conceptually correct, however. 😃

I've located my shadowing code (it's been on the list of things to go into my game engine). Let me know if you would like me to post the relevant parts.

##### Share on other sites

Cool, thanks for the heads up. I have corrected that part.

Still renders without any shadowing though.

The problem might be that I am still comparing against the DSV where you have commented '//use the RTV, not the DSV'.

I'll have to make some adjustments to change this.

Where does the shadow map DSV come into play? I'm not really seeing anywhere it gets referenced.

Thanks again

##### Share on other sites

If it is of any assistance, these are the resources the shader is seeing when trying to do the shadows (as grabbed from the VS Graphical Debugger).

Slot 0 being the scene pass and Slot 1 being the light DSV.

##### Share on other sites

It's important to understand the following:

RTV - Render-Target View -- an interpretation (view) of a texture's memory, connected to the Output Merger (SV_Target0..7)

DSV - Depth-Stencil View -- an interpretation of a texture's memory, connected to the Output Merger (depth-stencil)

SRV - Shader-Resource View -- an interpretation of a texture's memory, connected as an INPUT to any shader stage (referenced in shader on slots t0 ... t127)

(and UAV - but that's out of scope of this topic)

From a single 2D texture object, you can create an RTV, DSV, SRV, RTV+SRV or DSV+SRV (no other combinations). Only SRVs can be read by shaders.

1) Are you using D3D11_CREATE_DEVICE_DEBUG? EDIT: I see you mentioned you see the debug.

2) Don't use DXGI_FORMAT_D24_UNORM_S8_UINT; you don't need stencil for your shadow maps. Instead, use DXGI_FORMAT_R32_TYPELESS for the texture object itself, DXGI_FORMAT_D32_FLOAT for its DSV and DXGI_FORMAT_R32_FLOAT for its SRV (remember the DSV and SRV are created from the same texture object, they are just 'views').
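In case it helps, here is a minimal C++ sketch of that set-up (the 1024x1024 size and the existing `device` pointer are assumptions from my own code). Note the texture itself is TYPELESS so the two differently typed views can share its memory:

```cpp
// Sketch: one shadow-map texture, one DSV for writing depth,
// one SRV for sampling it later.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 1024;
texDesc.Height = 1024;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* shadowTex = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &shadowTex);

D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView* shadowDSV = nullptr;
device->CreateDepthStencilView(shadowTex, &dsvDesc, &shadowDSV);

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* shadowSRV = nullptr;
device->CreateShaderResourceView(shadowTex, &srvDesc, &shadowSRV);
```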

Edited by pcmaster

##### Share on other sites

Hi @pcmaster thanks for your help.

I have changed the formats to the ones you have listed in point 2, but the fault still remains: no shadowing.

And yes I am using the debug device in conjunction with the Graphics Debugger. Looking at the graphical debugger right now, I can see that I am sending the depth map to the shader on texture slot 1. But later in the same cycle the scene gets rendered normally with no shadow effect.

Hopefully I'm not doing something silly. I have been working on this problem for the past 12 hours straight - LOL.

Many thanks once again.

##### Share on other sites

It's always something silly, don't worry.

I see in your code, you're sampling from the squareMap (t0), but you most probably wanted squareMapDepth (t1). I mean for shadowMapZ. Hm?

Also be sure to remove the confusing comments about RTV and DSV, because you can't sample from either in any shader for the reasons I stated above.

Edited by pcmaster
