Deferred renderer + shadow mapping


A quick question.

Do I understand correctly that I have to render my entire geometry twice for directional light shadows?

1. Shadow map pass

2. Render the geometry again for the projection?

That sounds horrible :/

No, the advantage of deferred rendering is that you don't need to render the whole geometry again for the projection part; a fullscreen quad or some bounding volume is enough. In this case you reconstruct the world position from the G-buffer and use it to project into light space to access the shadow map.
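In code, that idea looks roughly like the sketch below. This is only a minimal illustration: all resource, matrix, and bias names are placeholders, and the world position is reconstructed here with a simple inverse view-projection unproject rather than any particular method from this thread.

// Minimal sketch of a deferred directional-light fullscreen pass:
// reconstruct world position from the depth buffer, project it into
// light space, and compare against the shadow map.
Texture2D    DepthBuffer;           // scene depth written during the G-buffer pass
Texture2D    ShadowMap;             // depth as seen from the light
SamplerState PointSampler;

float4x4 InverseViewProjection;     // inverse of the camera's view * projection
float4x4 LightViewProjection;       // light view * orthographic projection
float    ShadowBias;

float4 PS_DirectionalLight(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET0
{
    // 1. Reconstruct the pixel's world position by unprojecting UV + depth.
    float  depth    = DepthBuffer.Sample(PointSampler, uv).r;
    float4 clipPos  = float4(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f, depth, 1.0f);
    float4 worldPos = mul(clipPos, InverseViewProjection);
    worldPos /= worldPos.w;

    // 2. Project the world position into the light's clip space and remap to shadow map UVs.
    float4 lightClip = mul(worldPos, LightViewProjection);
    float2 shadowUV  = lightClip.xy / lightClip.w * float2(0.5f, -0.5f) + 0.5f;

    // 3. Compare the pixel's light-space depth against the stored shadow map depth.
    float storedDepth = ShadowMap.Sample(PointSampler, shadowUV).r;
    float lit = (lightClip.z / lightClip.w < storedDepth + ShadowBias) ? 1.0f : 0.0f;

    // The actual lighting (N.L etc.) would be multiplied by 'lit' here.
    return float4(lit.xxx, 1.0f);
}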
Well, he still needs to render the geometry for the shadow map with a different projection matrix. I think that's what he asked? But that's not really connected to deferred rendering (according to his two bullets at least; maybe he meant what you answered, actually).

@noizex No, I know that a geometry pass is needed for the light's depth (the shadow map); what I don't know is how to do the projection in a fullscreen-quad pass.

@Ashaman73 Can you tell me how this works, or do you have some sample code? I've been trying to implement this for quite some time without any luck so far...
How do I do this projection in the fullscreen-quad pass instead of rendering the geometry again?
I'm talking about a directional light that is implemented as a FSQ post effect, using an orthographic projection for the shadow map.
When applying the directional light as a FSQ you will reconstruct the world space position of the pixel being shaded. This world space position can then be transformed into light space and compared against the shadow map. Is it this transform from world space to light space that you are having trouble with?

I am using this function for the projection and depth comparison:


//Position - world-space position of the shaded pixel, reconstructed from the G-buffer
float shadowCast(float3 Position)
{
	// Transform into the light's clip space
	// (despite the name, LightInvertViewProjection holds the light's view * projection matrix)
	float4 texcoord = mul(float4(Position, 1.0), LightInvertViewProjection);
	// Perspective divide and remap from [-1, 1] clip space to [0, 1] texture space (y flipped for D3D)
	texcoord.x = ((texcoord.x / texcoord.w) * 0.5 ) + 0.5;
	texcoord.y = ((texcoord.y / texcoord.w) * -0.5 ) + 0.5;
	// Depth stored in the shadow map at this texel
	float depth = tex2D(shadows, texcoord.xy).r;
	// Lit (1) if the pixel is closer to the light than the stored occluder depth; bias/offset fight acne
	return (texcoord.z < depth * bias + offset);
}

It works well for me; don't forget to tweak bias and offset to reduce acne and peter panning.
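The returned factor would then typically just scale the light's contribution in the fullscreen shader. A minimal, hypothetical helper to show the idea (the diffuse term and names are placeholders):

// Hypothetical usage: attenuate the directional light's contribution by the shadow factor.
float3 ApplyDirectionalShadow(float3 unshadowedDiffuse, float3 positionWS)
{
    float shadowFactor = shadowCast(positionWS); // 0 = in shadow, 1 = lit
    return unshadowedDiffuse * shadowFactor;
}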


For your original question: yes, you do need to render your geometry once per light (depth only) and once for the final image from the camera.
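For that depth-only pass, a sketch of what the shadow map shaders could look like for a directional light; the matrix names and output layout here are placeholders, not code from this thread:

// Sketch of the shadow map (depth-only) pass for a directional light.
// LightViewProjection = light view matrix * orthographic projection (placeholder names).
float4x4 LightViewProjection;
float4x4 World;

struct ShadowVSOut
{
    float4 Position : SV_POSITION;
    float  Depth    : TEXCOORD0;
};

ShadowVSOut ShadowVS(float4 position : POSITION)
{
    ShadowVSOut output;
    float4 worldPos = mul(position, World);
    output.Position = mul(worldPos, LightViewProjection);
    // With an orthographic projection w is 1, so z is already the linear [0, 1] depth.
    output.Depth = output.Position.z;
    return output;
}

// Only needed if writing depth to a color target instead of using the hardware depth buffer.
float4 ShadowPS(ShadowVSOut input) : SV_TARGET0
{
    return float4(input.Depth.xxx, 1.0f);
}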


Wait, why is it the inverse of the view projection?
What would the result of a world space position multiplied by an inverse view projection be... again world space, but this time in the light's space?

UPDATE:
oh my god it works ;)

The problem I had all the time was that the view space reconstruction I had used did not work for the shadow projection.

Not sure why it works with my lighting though...

For anyone interested, this is the code that works (without any optimization, of course):


VSO VS(VSI input)
{
    VSO output = (VSO)0;

    // Fullscreen quad vertices are already in clip space
    output.Position = input.Position;
    output.UV = input.UV;

    // View ray for position reconstruction: only the xy/z ratio is needed,
    // so the missing divide by w cancels out
    float3 positionVS = mul(input.Position, InverseProjection).xyz;
    output.ViewRay = float3(positionVS.xy / positionVS.z, 1.0f);

    return output;
}

float4 PS(VSO input) : SV_TARGET0
{
    float3 ViewRay = input.ViewRay.xyz;
    float depth = DepthTarget.Sample(PointSampler, input.UV).r;

    // Camera near/far planes (hardcoded here; must match the scene projection)
    float nearClip = 0.01f;
    float farClip = 100.0f;

    // Convert the non-linear depth buffer value back to linear view-space depth
    float ProjectionA = farClip / (farClip - nearClip);
    float ProjectionB = (-farClip * nearClip) / (farClip - nearClip);
    float linearDepth = ProjectionB / (depth - ProjectionA);

    // Reconstruct the view-space position, then bring it back to world space
    float3 PositionVS = ViewRay * linearDepth;
    float4 PositionWS = mul(float4(PositionVS, 1.0f), InverseView);

    // Project into the light's clip space
    // (despite the name, LightInverseViewProjection holds the light's view * projection matrix)
    float4 texcoord = mul(PositionWS, LightInverseViewProjection);
    texcoord.x = ((texcoord.x / texcoord.w) * 0.5f) + 0.5f;
    texcoord.y = ((texcoord.y / texcoord.w) * -0.5f) + 0.5f;
    float shadowdepth = ShadowMap.Sample(PointSampler, texcoord.xy).r;

    float offset = 0.002f;

    // For the light's orthographic projection w is 1, so texcoord.z can be compared directly
    float shadowFactor = (texcoord.z < shadowdepth + offset);

    return shadowFactor;
}
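As an aside (not from this thread): the manual comparison at the end can also be delegated to a hardware comparison sampler via SampleCmpLevelZero, which compares each of the four nearest texels and blends the results (2x2 PCF). A minimal sketch, assuming ShadowMap is the same texture as above and a SamplerComparisonState is created on the application side with D3D11_COMPARISON_LESS_EQUAL and D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR:

// Assumed: ShadowSampler is a SamplerComparisonState set up as described above.
SamplerComparisonState ShadowSampler;

float SampleShadowPCF(float4 texcoord, float offset)
{
    // Returns 1 where (texcoord.z - offset) <= stored depth (lit), 0 where occluded,
    // with the four nearest taps compared individually and bilinearly blended.
    return ShadowMap.SampleCmpLevelZero(ShadowSampler, texcoord.xy, texcoord.z - offset);
}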

