Members - Reputation: 105
Posted 22 May 2012 - 09:33 AM
Before I show you the problem, I'd like to explain how the preparation is done. I'm using a plain deferred shading setup, writing to four render targets, one of them being the depth buffer. The depth is encoded using the view space position length method, meaning the code to save the depth is simply:
Depth = length(ViewSpacePosition.xyz);
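For context, here's a minimal sketch of how that write might look in the G-buffer pixel shader. The struct and semantic names (`VSOutput`, `ViewSpacePosition`, the target index) are assumptions for illustration, not the poster's actual code:

```hlsl
// G-buffer pixel shader (sketch) - assumes the vertex shader
// passes the view space position down as an interpolant.
struct VSOutput
{
    float4 Position          : SV_Position;
    float3 ViewSpacePosition : TEXCOORD0;
};

float4 PS_GBufferDepth(VSOutput input) : SV_Target3
{
    // Store the camera-to-surface distance, i.e. the length
    // of the view space position vector (not just its Z).
    float depth = length(input.ViewSpacePosition);
    return float4(depth, 0.0f, 0.0f, 1.0f);
}
```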
The shadows however are rendered to their own buffer using the plain depth method:
Shadow = Position.z / Position.w;
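In shader form, that shadow pass write might look roughly like this (the pass-through of the light-space clip position via `TEXCOORD0` is an assumed convention):

```hlsl
// Shadow map pixel shader (sketch) - LightClipPos is the
// light-space clip position forwarded by the vertex shader.
float4 PS_ShadowDepth(float4 ScreenPos     : SV_Position,
                      float4 LightClipPos  : TEXCOORD0) : SV_Target
{
    // Post-projection depth in [0, 1] after the perspective divide.
    float shadow = LightClipPos.z / LightClipPos.w;
    return float4(shadow, 0.0f, 0.0f, 1.0f);
}
```

Note that this stores non-linear post-projection depth, while the G-buffer stores a linear camera-to-surface distance, so the two values live in different spaces.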
In my global light shader, which is done in screen space, I obviously have to get the world space position for every pixel, so that I can compare the depth value from the shadow buffer to the world space depth for the pixel. I do this by using the method posted here: http://mynameismjp.w...n-from-depth-3/.
Short version: you create a set of view space rays in the vertex shader, let the rasterizer interpolate them, and then normalize the interpolated ray in the pixel shader. Using this ray, you can recreate the world space position by taking the camera position and adding the normalized ray multiplied by the sampled depth value, which holds the view space position length:
float3 WorldSpacePosition = CameraPosition + ViewRay * DepthBuffer.Sample(DefaultSampler, QuadUV);
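Put together, the reconstruction described above might be sketched like this, following MJP's article. Everything here is an assumption for illustration: `FrustumCornersWS` (world space far-plane corners), `CornerIndex`, and the buffer/sampler names are placeholders, and the ray is normalized per pixel since the stored depth is a distance rather than a plane depth:

```hlsl
// Assumed constants for the full-screen pass (sketch).
float3    CameraPosition;
float3    FrustumCornersWS[4]; // world space far-plane corners
Texture2D DepthBuffer;
SamplerState DefaultSampler;

struct QuadVSOutput
{
    float4 Position : SV_Position;
    float2 UV       : TEXCOORD0;
    float3 ViewRay  : TEXCOORD1;
};

QuadVSOutput VS_FullScreenQuad(float2 Position : POSITION,
                               float2 UV       : TEXCOORD0,
                               uint   CornerIndex : TEXCOORD1)
{
    QuadVSOutput output;
    output.Position = float4(Position, 0.0f, 1.0f);
    output.UV = UV;
    // Direction from the camera through this corner of the far
    // plane; the rasterizer interpolates it across the quad.
    output.ViewRay = FrustumCornersWS[CornerIndex] - CameraPosition;
    return output;
}

float3 ReconstructWorldPos(QuadVSOutput input)
{
    // Stored value is the camera-to-surface distance, so the
    // interpolated ray must be re-normalized per pixel.
    float depth = DepthBuffer.Sample(DefaultSampler, input.UV).r;
    return CameraPosition + normalize(input.ViewRay) * depth;
}
```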
I use the Z value of the world space position to compare it to the shadow value. This is the result:
Looks good, doesn't it? Well, it is! The problem is when I zoom out, this happens:
I have been playing around with the shadow bias, and it helps if I set it really high, but such a large bias also causes the shadows themselves to render incorrectly.
I appreciate any help I can get! Thanks in advance!