Depth texture empty (shadowmapping)

5 comments, last by pseudomarvin 8 years, 11 months ago

I am working on shadow mapping in D3D11, and my problem is that nothing is ever written to the depth texture into which I render the scene. The texture is sampled correctly: I have also tried drawing the scene into a color texture and sampling that, and it worked fine. I thought the comparison function might somehow be wrong and tried changing it around, but that did not help (which I probably should have expected, since depth testing works fine when rendering the scene). I also make sure to unbind the texture after sampling it.

Maybe there is a problem in the initialization code (I am not sure whether the types I am using are 100% correct) or somewhere else. Any help would be greatly appreciated.


// SHADOWMAP RENDERTARGET INITIALIZATION

	ID3D11Device *device = renderer->device;

	D3D11_TEXTURE2D_DESC textureDesc = { 0 };
	textureDesc.Width = (UINT)renderer->screenSize.x;
	textureDesc.Height = (UINT)renderer->screenSize.y;
	textureDesc.MipLevels = 1;
	textureDesc.ArraySize = 1;
	textureDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
	textureDesc.SampleDesc.Count = 1;
	textureDesc.SampleDesc.Quality = 0;
	textureDesc.Usage = D3D11_USAGE_DEFAULT;
	textureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
	
	ID3D11Texture2D *tmpDepthStencil;
	HRESULT hr = renderer->device->CreateTexture2D(&textureDesc, NULL, &tmpDepthStencil);
	CHECK_WIN_ERROR(hr, "Error creating depth stencil\n");

	CD3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc(D3D11_DSV_DIMENSION_TEXTURE2D, DXGI_FORMAT_D24_UNORM_S8_UINT);

	hr = renderer->device->CreateDepthStencilView(tmpDepthStencil, &depthStencilViewDesc, &renderTarget->depthStencilView);
	CHECK_WIN_ERROR(hr, "Error creating depth stencil view\n");

	if (SUCCEEDED(hr))
	{
		D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc = {};

		shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;

		shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
		shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
		shaderResourceViewDesc.Texture2D.MipLevels = 1;

		hr = renderer->device->CreateShaderResourceView(tmpDepthStencil, &shaderResourceViewDesc, 
                     &renderTarget->depthTexture);
		CHECK_WIN_ERROR(hr, "Error creating render target texture\n");
	}
	else
	{
		OutputDebugStringA("Error creating render target\n");
	}

    RELEASE_DX_RESOURCE(tmpDepthStencil);

// CLEARING AND BINDING THE SHADOWMAP RENDERTARGET
renderer->context->ClearDepthStencilView(renderTarget->depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
ID3D11RenderTargetView *target[] = { NULL};
renderer->context->OMSetRenderTargets(1, target, renderTarget->depthStencilView);


// SHADOWMAPPING VERTEX SHADER (There is no pixel shader bound)
cbuffer PerSceneBuffer: register(b0)
{
	matrix projection;
	matrix view;
	float3 lightDir;
};

cbuffer PerModelBuffer: register(b1)
{
	matrix model;
};

struct VertexShaderInput
{
	float3 pos : POSITION;
	float3 col : COLOR;
	float3 normal : NORMAL;
};

struct VertexShaderOutput
{
	float4 pos : SV_POSITION;
};

VertexShaderOutput main(VertexShaderInput input)
{
	VertexShaderOutput output;

	float4 pos = float4(input.pos, 1.0f);
	pos = mul(model, pos);
	pos = mul(view, pos);
	pos = mul(projection, pos);
	output.pos = pos;

	return output;
}


// TEXTURE SAMPLER USED
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
HRESULT hr = renderer->device->CreateSamplerState(&samplerDesc, &renderer->texSampler);
CHECK_WIN_ERROR(hr, "Error creating Sampler \n");
Your code for creating the depth texture and corresponding DSV + SRV looks correct, and so does your vertex shader code. If I were you, I would take a frame capture using RenderDoc to see what's going on. First, I would check the depth texture after rendering shadow casters to see if it looks correct. Keep in mind that for a depth texture, if you used a perspective projection then it will appear mostly white by default. To get a better visualization, use the range slider to set the start range to about 0.9. If the depth texture looks okay, then I would check the draw call where you use the shadow map to make sure that your textures and samplers are bound correctly.

As for that sampler state that you've created, how exactly are you using it? Are you trying to use it with a SamplerComparisonState in your pixel shader? Or are you just using a regular SamplerState for sampling from your shadow map texture?

Either way, always make sure that you've created your device with the D3D11_CREATE_DEVICE_DEBUG flag when you're debugging problems like this. It will cause D3D to output warnings to your debugger output window whenever an error occurs due to API misuse.
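To make the suggestion concrete, enabling the debug layer only takes one extra flag at device creation. This is a Windows-only fragment (requires d3d11.h and linking d3d11.lib), shown purely for illustration; the variable names are placeholders, not from the code above:

```cpp
// Enable the D3D11 debug layer in debug builds so API-misuse warnings
// show up in the debugger output window.
ID3D11Device *device = NULL;
ID3D11DeviceContext *context = NULL;

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
                               NULL, 0, D3D11_SDK_VERSION,
                               &device, NULL, &context);
```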

Thanks for your help MJP, I have finally got it working. There were a few problems: I was sending an incorrect amount of data when updating the constant buffers, but the one thing that seems to make the biggest difference even now, after fixing those bugs, is that I am using an orthographic projection matrix when rendering the shadow map instead of the regular perspective projection I use when rendering the scene. For some reason I did not get any shadows when using the latter, although I think I should have seen at least some. I am using a directional light (I'm not sure whether that makes any difference).

Pseudomarvin, can you show the code you use to create the ortho projection matrix? One possible reason why you don't get any shadows is that no objects are actually inside the view volume, so no depth information is rendered. An easy way to check that would be to render the scene from your directional light source, i.e., use its view and projection matrices instead of the camera ones. If you don't see anything drawn to the screen, some of the values must be incorrect.

Also, I believe it is preferable to use point sampling for the shadow map (and to do the filtering manually in the shader, if you need it). You don't want to interpolate the depth values of adjacent texels; you want to interpolate the shadowed/lit results you get after comparing the fragment depth against the depth values from the texture.
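For reference, the compare-then-average approach might look like this in HLSL (a sketch; shadowMap, pointSampler, and texelSize are placeholder names, not from the code in this thread):

```hlsl
Texture2D shadowMap;
SamplerState pointSampler; // created with D3D11_FILTER_MIN_MAG_MIP_POINT

float ManualPcf2x2(float2 uv, float pixelDepth, float2 texelSize)
{
    float lit = 0.0f;
    [unroll]
    for (int y = 0; y < 2; ++y)
    {
        [unroll]
        for (int x = 0; x < 2; ++x)
        {
            float stored = shadowMap.Sample(pointSampler, uv + float2(x, y) * texelSize).r;
            // Compare each texel first, then average the binary results.
            lit += (pixelDepth <= stored) ? 1.0f : 0.0f;
        }
    }
    return lit / 4.0f; // 0 = fully shadowed, 1 = fully lit
}
```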

1.) Point sampling: So should I do something like:


samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;

or is there an even better way? I have seen some people use SamplerComparisonState, but I'm not sure whether there are any benefits to it.

2.) I am actually satisfied with using the ortho projection for shadow mapping, so we don't really need to work out why the perspective projection fails (although it would be nice to know). I am using a right-handed system; the matrices look like this (in column-major order):


Math::GetOrthographicsProjectionRH(-30, 30, -17.f, 17.f, 25.f, -70)
Math::GetPerspectiveProjectionDXRH(Math::Deg2Rad(45), screenSize.width / screenSize.height, 0.1f, 400)

 Matrix4x4 Math::GetOrthographicsProjectionRH(float left, float right,
	float bottom, float top,
	float near, float far)
{
	Matrix4x4 result = { 0 };
	result[0] = 2.0f / (right - left);
	result[5] = 2.0f / (top - bottom);
	result[10] = 1.0f / (far - near);
	result[12] = -(right + left) / (right - left);
	result[13] = -(top + bottom) / (top - bottom);
	result[14] = -(near) / (far - near);
	result[15] = 1;
	return result;
}

const Matrix4x4 Math::GetPerspectiveProjectionDXRH(const float fov, const float aspectRatio,
	const float near, const float far)
{
	float cotFov = 1 / (tanf(fov / 2.0f));
	
	Matrix4x4 result = { 0 };

	result[0] = cotFov / aspectRatio;
	result[5] = cotFov;
	result[10] = -far / (far - near);
	result[11] = -1.0f;
	result[14] = -near * far / (far - near);

	return result;
}

I have just rendered the scene from the light's POV, first using the custom ortho projection for the shadow map and then the perspective projection I use for the rest of the scene. I then displayed the contents of the depth texture on screen. In both cases the texture contained the depth writes for the cube I was displaying, and they looked correct. It may be that the values written when using the perspective projection fail the valueInDepthTexture < calculatedDepth test when calculating shadows in the pixel shader.


// VERTEX SHADER
cbuffer PerSceneBuffer: register(b0)
{
	matrix projection;
	matrix view;
};

cbuffer PerModelBuffer: register(b1)
{
	matrix model;
};

cbuffer ShadowBuffer: register(b2)
{
	matrix shadowVP;
};

cbuffer LightBuffer: register(b3)
{
	float3 lightDir;
};

struct VertexShaderInput
{
	float3 pos : POSITION;
	float3 col : COLOR;
	float3 normal : NORMAL;
};

struct VertexShaderOutput
{
	float4 pos : SV_POSITION;
	float4 shadowPos : POSITION1;
	float3 col : COLOR;
	float3 normal : NORMAL;
	float3 lightDir : LIGHT;
};

VertexShaderOutput main(VertexShaderInput input)
{
	VertexShaderOutput output;

	float4 pos = float4(input.pos, 1.0f);
	pos = mul(model, pos);
	pos = mul(view, pos);
	pos = mul(projection, pos);
	output.pos = pos;

	float4 shadowPos = float4(input.pos, 1.0f);
	shadowPos = mul(model, shadowPos);
	shadowPos = mul(shadowVP, shadowPos);
	output.shadowPos = shadowPos / shadowPos.w;

	output.col = input.col;

	float4 normal = float4(input.normal, 0.0f);
	normal = mul(model, normal);
	normal = normalize(normal);
	output.normal = normal.xyz;

	output.lightDir = lightDir;

	return output;
}

//PIXEL SHADER
Texture2D screenTexture;
SamplerState textureSampler;

struct PixelShaderInput
{
	float4 pos : SV_POSITION;
	float4 shadowPos : POSITION1;
	float3 col : COLOR;
	float3 normal : NORMAL;
	float3 lightDir : LIGHT;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
	float3 surfaceToLight = -normalize(input.lightDir);
	
	float brightness = dot(input.normal, surfaceToLight);
	brightness = max(0.1, saturate(brightness));

	float shadow = 0.0f;
	float calculatedDepth = input.shadowPos.z - 0.002f;
	float2 texCoord = float2(input.shadowPos.x * 0.5f + 0.5f, -input.shadowPos.y * 0.5f + 0.5f);
	float valueInDepthTexture = screenTexture.Sample(textureSampler, texCoord).r;
	if (valueInDepthTexture < calculatedDepth)
	{
		shadow = 0.5f;
	}

	float4 albedo = float4(input.col, 1.0f);
	float4 result = brightness * (1.0f - shadow) * albedo;

	return result;
} 

Thanks.

SamplerComparisonState lets you use the hardware's PCF, which is generally faster than doing it manually in the shader. Basically you get 2x2 PCF at the same cost as a normal bilinear texture fetch, which is pretty nice.

To use it, you want to create your sampler state with D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR and D3D11_COMPARISON_LESS_EQUAL. Then in your shader, declare your sampler as a SamplerComparisonState, and sample your shadow map texture using SampleCmp or SampleCmpLevelZero. For the comparison value that you pass to SampleCmp, you pass the pixel's projected depth in shadow space. The hardware will then compare the pixel depth against the depth from the shadow map texture, and return 1 when the pixel depth is less than or equal to the shadow map depth.
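Putting those pieces together, the shader side might look like this (a sketch; shadowMap and shadowSampler are placeholder names, and the sampler is assumed to be created with the filter and comparison function described above):

```hlsl
Texture2D shadowMap;
SamplerComparisonState shadowSampler;
// C++ side (assumed): Filter = D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR,
//                     ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL

float ShadowVisibility(float2 uv, float pixelDepth)
{
    // The hardware compares pixelDepth against the shadow map texels over
    // a 2x2 footprint and returns the filtered result:
    // 1 = fully lit, 0 = fully shadowed.
    return shadowMap.SampleCmpLevelZero(shadowSampler, uv, pixelDepth);
}
```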

Thanks for the tip, I've modified the code to do it the faster way.

This topic is closed to new replies.
