Deferred Renderer and Shadow Mapping


Hello forum, I hope this is in the correct place.

I have implemented a deferred renderer with DirectX 11 using HLSL shaders. The process is as follows:

  • The first pass renders all the surface information (albedo, normals, world position, etc.) to separate textures.
  • The second pass does all the lighting calculations.

I know that I could be reconstructing the position from depth, but that doesn't matter at the moment since the deferred lighting itself works.

The problem now is that I want to add shadows to the process. Most online tutorials assume forward rendering, but I believe the approach is the same. At the moment I have simply added the light's view and projection matrices to the light buffer, which is bound in both passes, as I'm trying to write the light-space depth in the first pass and then use the shadow map in the second. Note that the shaders do allow for multiple lights, but I'm focusing on getting shadows working with one light; the rest can easily be added afterwards.
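For context, a simplified sketch of how the light buffer is laid out; only _viewMatrix and _projMatrix actually appear in the shader snippets below, and the other members, register slots and MAX_LIGHTS are just placeholders:

#define MAX_LIGHTS 8    // placeholder value

struct Light
{
    float4x4 _viewMatrix;   // light's view matrix
    float4x4 _projMatrix;   // light's projection matrix
    float4   _colour;       // placeholder for the rest of the per-light data
    float4   _position;
};

cbuffer LightBuffer : register(b1)  // placeholder register
{
    Light lights[MAX_LIGHTS];
};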

This is what I have at the moment:

First pass


struct PS_OUTPUT
{
    //Other targets here
    /*ShadowMaps*/
    float4 light1ViewPos : SV_Target4;
    float4 light1ShadowMap : SV_Target5;
    /*ShadowMaps*/
};

VS_OUTPUT VS_Main(float3 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL)
{
    VS_OUTPUT output;
    //Camera world view projection transformations here..

    /*Light Map*/
    output.lightPos = mul(float4(inPos, 1.0f), worldMatrix);
    output.lightPos = mul(output.lightPos, lights[0]._viewMatrix);
    output.lightPos = mul(output.lightPos, lights[0]._projMatrix);
    
    /*Light Map*/

    return output;
}

PS_OUTPUT PS_Main(VS_OUTPUT input) // output semantics come from the struct members, so no SV_TARGET on the function
{
    PS_OUTPUT output;
    //other renderTarget values here..
    
    float depth = input.lightPos.z / input.lightPos.w;
    output.light1ViewPos = input.lightPos;
    output.light1ShadowMap = float4(depth, depth, depth, 1.0f);
    return output;
}

Second pass in the pixel shader:

pixel.lightViewPosition = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.texCoord);
pixel.lightViewPosition.xyz /= pixel.lightViewPosition.w;

if (pixel.lightViewPosition.x < -1.0f || pixel.lightViewPosition.x > 1.0f ||
    pixel.lightViewPosition.y < -1.0f || pixel.lightViewPosition.y > 1.0f ||
    pixel.lightViewPosition.z <  0.0f || pixel.lightViewPosition.z > 1.0f)
    return output;

//transform clip space coords to texture space coords (-1:1 to 0:1)
pixel.lightViewPosition.x = pixel.lightViewPosition.x / 2 + 0.5;
pixel.lightViewPosition.y = pixel.lightViewPosition.y / -2 + 0.5;

float shadowMapBias = 0.2f;
pixel.lightViewPosition.z -= shadowMapBias;

float shadowMapDepth = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.lightViewPosition.xy).r;
worldPosition = WorldPositionTexture.Sample(ObjSamplerState, pixel.texCoord);

if (shadowMapDepth < pixel.lightViewPosition.z)
    return output;

output.color = float4(1.0f, 1.0f, 1.0f, 1.0f);
return output;

//Actual lighting calculations after here

The result I'm getting is quite odd: it does render shadows, but they move when the camera moves, so they are always in incorrect positions. I've been looking for the cause for a couple of days with no luck, so if any of you could point me in the right direction I would be very grateful. On a side note, the frustum for the light is working, because if I remove the if (shadowMapDepth < pixel.lightViewPosition.z) return output; check I get a nice square of white light, so it's definitely something to do with the shadow map.

One question: when I render the shadow map, should it appear white, or should it go from light to dark with depth? Different tutorials show different things, for example:

http://www.rastertek.com/dx11tut40.html

http://takinginitiative.wordpress.com/2011/05/15/directx10-tutorial-10-shadow-mapping/

Thanks for any help.


I think you have a small misconception about shadow mapping: when you render the shadow map from the light's point of view, the only thing you need to store is the depth value. Nothing else, unless you are doing some fancier, more complex form of shadow mapping. In practice, the shaders used for shadow map rendering can be the same as the shaders used for regular scene rendering, since the only thing you want is the depth value - although this only works when you use a hardware depth-stencil target as the shadow map format.
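As a rough sketch (the names and register slots here are only placeholders), a depth-only shadow pass needs nothing more than this: bind a depth-stencil target such as DXGI_FORMAT_D32_FLOAT, no colour target, and a null pixel shader.

cbuffer ShadowPassConstants : register(b0)   // placeholder layout
{
    float4x4 worldMatrix;
    float4x4 lightViewMatrix;
    float4x4 lightProjMatrix;
};

// Transform each vertex into the light's clip space; the depth-stencil
// target captures the depth, so no colour output is required.
float4 VS_Shadow(float3 inPos : POSITION) : SV_POSITION
{
    float4 pos = mul(float4(inPos, 1.0f), worldMatrix);
    pos = mul(pos, lightViewMatrix);
    pos = mul(pos, lightProjMatrix);
    return pos;
}

// The pixel shader can be left null (PSSetShader(nullptr, ...)) since only depth is written.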

Anyway, first you'll need to fix your shadow map rendering functions.

If I understand correctly, there could be a problem with the deferred lighting code which applies the shadow: in that code you'll need the screen pixel's position in world or view space. This position must be transformed into the shadow map's space in order to perform the calculations which you are doing.

Cheers!

Thanks for the reply, kauna.

If I understand correctly, there could be a problem with the deferred lighting code which applies the shadow: in that code you'll need the screen pixel's position in world or view space. This position must be transformed into the shadow map's space in order to perform the calculations which you are doing.

This is the part that is confusing me. How would I go about doing this? I'm fairly sure the shadow map itself is correct, as I can now see it being displayed correctly, but the shadows show up in the incorrect places and move when the camera moves. If you or someone else could show me the correct way to do this I would be very grateful.

Also, I'm aware that I should only be outputting the depth for the shadow map; that was just while I was testing. I was working from two different tutorials that do it two different ways, so keeping both simply saved me time, although it may be confusing matters at the moment.

pixel.lightViewPosition = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.texCoord);

pixel.lightViewPosition.xyz /= pixel.lightViewPosition.w;

I'm not sure what this code does - is it the view space position stored in a texture? Why are you sampling the same texture twice:

float shadowMapDepth = Light1ViewPosMapTexture.Sample(ObjSamplerState, pixel.lightViewPosition.xy).r;

worldPosition = WorldPositionTexture.Sample(ObjSamplerState, pixel.texCoord);

If it is the view space position of the screen pixel then it is in the wrong space - you have to transform it by the inverse view matrix (view space to world space) and then by the light's view-proj matrix (world space to the light's projection space).
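In shader terms that's just two matrix multiplies, roughly like this (a sketch only - inverseViewMatrix and lightViewProjMatrix would be extra constants you upload, and viewSpacePos is the value you sampled):

// viewSpacePos: the pixel's view space position read from the G-buffer.
float4 worldPos = mul(float4(viewSpacePos.xyz, 1.0f), inverseViewMatrix); // view space -> world space
float4 lightPos = mul(worldPos, lightViewProjMatrix);                     // world space -> light's projection space
lightPos.xyz /= lightPos.w;                                               // perspective divide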

You also seem to sample a texture with "world position" but the data isn't used.

Cheers!

Yeah, it's a bit of a mess at the moment; I just wanted to get something working and clean it all up afterwards... The world position is being used elsewhere in the shader, it just got copied over along with the shadow mapping code.

I'm sorry, but I don't see why I would need to use the inverse view matrix (math is not really a strong area of mine), and needing to think about multiple coordinate systems isn't fun... At the moment, in the second pass the vertex shader just passes the normalised device coordinates through, so I don't even have the camera's view or projection matrix there; all of that is done in the first pass. The first pass renders all the geometry, and the second basically uses a bunch of textures to perform the lighting.

How would I go from there to determine if each pixel is in shadow or not?


I'm not sure what this code does - is it the view space position stored in a texture? Why are you sampling the same texture twice:

Yes it is; I'm storing that from the first pass, although as you said I should only need to send the depth information. The double sample wasn't intentional - it's a by-product of all this testing and changing things to see what happens, basically hoping for the best at the moment...

You use the inverse view matrix to undo the transform applied by the view matrix (or the inverse view-projection matrix if the projection has been applied as well). Without doing that, you'll still have the camera's view transform applied to the position you're reading, which would explain the artifacts you report.

So, when rendering we apply three transforms, World -> View -> Projection, to convert a 3D world position into a 2D position relative to the camera. The inverse view matrix undoes the view transform: it takes a view space position and converts it back to a world space position.

Does that make sense?


As explained earlier, you'll need to convert the screen pixel's position into the space of the shadow map.

In a deferred renderer you'll typically construct a view space position from a depth value (either from the z-buffer or a separate depth target) and the normalized device coordinates. You may also store the full view space position in a render target, but that is naturally slower (due to the bandwidth usage). Search for "position reconstruction from depth" on Google for the details.
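For example, one common way to reconstruct the view space position looks roughly like this (a sketch only, assuming a non-linear depth value sampled from the depth target and the camera's inverse projection matrix available in a constant buffer; the names are placeholders):

// ndc.xy: the pixel's normalized device coordinates (-1..1),
// depth:  the value sampled from the depth target,
// inverseProjMatrix: the camera's inverse projection matrix.
float3 ReconstructViewSpacePosition(float2 ndc, float depth, float4x4 inverseProjMatrix)
{
    float4 clipPos = float4(ndc, depth, 1.0f);
    float4 viewPos = mul(clipPos, inverseProjMatrix);
    return viewPos.xyz / viewPos.w;   // undo the perspective divide
}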

Once you have the view space position calculated, it can be transformed back to world space (i.e. with the inverse view matrix). This is nothing difficult; just invert the view matrix on the CPU.

Since you are looking for a texel on the shadow map, you then transform that world space position by the light's view-proj matrix. Again, nothing difficult: just multiply the inverse view matrix from the previous step by the light's view-proj matrix.

In the end you'll have a matrix which transforms a view space position directly to a shadow map position - well, you'll still need to scale and offset the resulting coordinate (but even that could be baked into the matrix).

On the CPU:

- create the inverse view matrix
- create the shadow map matrix = inverse view matrix * light's view-projection matrix

On the GPU:

- get the pixel's view space position using any method you find suitable
- multiply that position by the shadow map matrix
- scale and offset by 0.5
- sample and compare the shadow map

OR:

- get the pixel's view space position as before
- multiply the position by the inverse view matrix
- multiply the result by the light's view-proj matrix
- scale and offset by 0.5
- sample and compare the shadow map (see the sketch below)
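Something like this in HLSL for the second variant (a rough sketch; the texture, sampler and cbuffer names are placeholders, and the bias is just an example value):

Texture2D    ShadowMapTexture   : register(t5);   // placeholder bindings
SamplerState ShadowSamplerState : register(s0);

cbuffer ShadowConstants : register(b2)             // placeholder layout
{
    float4x4 inverseViewMatrix;     // camera view space -> world space
    float4x4 lightViewProjMatrix;   // world space -> light's projection space
};

// Returns 1.0 when the pixel is lit, 0.0 when it is in shadow.
float ComputeShadowFactor(float3 viewSpacePos)
{
    // view space -> world space -> light clip space
    float4 worldPos = mul(float4(viewSpacePos, 1.0f), inverseViewMatrix);
    float4 lightPos = mul(worldPos, lightViewProjMatrix);
    lightPos.xyz /= lightPos.w;

    // Outside the light's frustum: treat as lit here (pick whatever suits the light type).
    if (lightPos.x < -1.0f || lightPos.x > 1.0f ||
        lightPos.y < -1.0f || lightPos.y > 1.0f ||
        lightPos.z <  0.0f || lightPos.z > 1.0f)
        return 1.0f;

    // Scale and offset: clip space [-1,1] -> texture space [0,1], with y flipped.
    float2 shadowUV = float2(lightPos.x * 0.5f + 0.5f, lightPos.y * -0.5f + 0.5f);

    // Compare against the stored depth; a small bias avoids shadow acne.
    float bias = 0.001f;
    float storedDepth = ShadowMapTexture.Sample(ShadowSamplerState, shadowUV).r;
    return (lightPos.z - bias > storedDepth) ? 0.0f : 1.0f;
}

You would then multiply the light's contribution by this factor in the lighting pass.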

Cheers!

