Ah, so the shadows are apparently supposed to be composited into the lighting buffer output. Since I wasn't the one who originally set this up, I hadn't done much research into the shadow compositing stage, so I just assumed it was normal to apply them during the final gbuffer pass. Thanks for the help, guys.
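In case it helps anyone else later: the way this ends up working is that the lighting pass accumulates each light additively into the lighting buffer, and the shadow term just scales that light's contribution inside the pixel shader before the blend happens. A rough sketch of the blend state in SharpDX (device and the variable names here are mine for illustration, not our actual code):

// Additive blend for light accumulation: each light's (already
// shadow-attenuated) contribution is summed into the lighting buffer.
var desc = new BlendStateDescription();
desc.RenderTarget[0].IsBlendEnabled = true;
desc.RenderTarget[0].SourceBlend = BlendOption.One;
desc.RenderTarget[0].DestinationBlend = BlendOption.One;
desc.RenderTarget[0].BlendOperation = BlendOperation.Add;
desc.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
desc.RenderTarget[0].DestinationAlphaBlend = BlendOption.One;
desc.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
desc.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
var lightAccumulationBlend = new BlendState(device, desc);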
Is there a way to properly render layered transparent objects when you can't do depth sorting? Since the terrain is all built into 2 vertex buffers per region (1 for solid blocks, 1 for transparent blocks), we can't do the typical per-triangle depth sorting. Are there any alternatives?
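For context, the draw order is roughly this (simplified, and the method/state names are made up for illustration): solids first with depth writes on, then the transparent buffers with blending on and depth writes off, regions sorted back to front. It's only the triangles inside a single region's transparent buffer that can't be sorted:

// Pass 1: opaque terrain with normal depth testing and writing.
context.OutputMerger.SetDepthStencilState(depthWriteOn);
foreach (var region in visibleRegions)
    region.DrawSolidBuffer(context);

// Pass 2: transparent terrain. Depth test stays on (so it's still
// hidden behind solid blocks) but depth writes are off, and the
// regions themselves are drawn back to front.
context.OutputMerger.SetDepthStencilState(depthWriteOff);
context.OutputMerger.SetBlendState(alphaBlend);
foreach (var region in regionsBackToFront)
    region.DrawTransparentBuffer(context);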
Increasing the shadow bias fixed the remaining issue:
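In case it's useful to anyone: besides a constant in the shader, D3D11 can also apply the bias in the rasterizer state during the shadow-map pass. A sketch of that in SharpDX (the bias values are placeholders you'd have to tune, not our real numbers):

var rsDesc = new RasterizerStateDescription
{
    CullMode = CullMode.Back,
    FillMode = FillMode.Solid,
    // Pushes shadow casters slightly away in depth to avoid acne;
    // these numbers are illustrative only.
    DepthBias = 8192,
    SlopeScaledDepthBias = 2.0f,
    DepthBiasClamp = 0.0f,
    IsDepthClipEnabled = true,
};
var shadowMapRasterizer = new RasterizerState(device, rsDesc);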
So it's pretty much working now, but I'm running into just about every other problem possible:
1. Peter panning.
2. Jittering when the camera moves.
3. Shadows disappearing when the occluder is behind the camera.
4. A weird issue where a triangle of shadow pops in on the right-hand side (you can see it in the screenshot).
The last two are particularly problematic.
Edit: Switching back to our old ESM shadow filtering (instead of the PCF from the sample) mostly fixes issues 1, 2, and possibly 4. Issue 3 is still something I'd really like to fix.
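For anyone curious, the ESM visibility test boils down to visibility = saturate(exp(c * (occluderDepth - receiverDepth))), where c is a tunable sharpening constant. Since exp(c * occluderDepth) is what actually gets stored in the shadow map, it can be blurred and filtered linearly, which is presumably why it hides the artifacts that the raw PCF was showing.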
I can't get the reconstructed position to even come close to position_ws_VS. I did find something else out, however, while reading MJP's blog post about reconstructing position from linear depth: SharpDX's GetCorners function returns the frustum corners in a different order than XNA's. So I switched the indices around, and now the first shader output seems a little closer to what it should be (at least I think so...). It no longer rotates with the camera, but it's still moving with it:
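In case anyone else hits this: the fix is just remapping the array that GetCorners returns. XNA documents its order as near top-left, top-right, bottom-right, bottom-left, then the same four corners on the far plane; I won't swear to the SharpDX order from memory, so treat the index map here as a placeholder you verify against your own version:

// Remap the SharpDX corner order to the XNA order the shader expects.
// The mapping below is a placeholder - print out what your SharpDX
// version actually returns and fill in the real indices.
int[] toXnaOrder = { 0, 1, 2, 3, 4, 5, 6, 7 };
Vector3[] raw = frustum.GetCorners();
Vector3[] corners = new Vector3[8];
for (int i = 0; i < 8; i++)
    corners[i] = raw[toXnaOrder[i]];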
// Build the camera's world matrix from its orientation and position...
world = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0);
world.TranslationVector = pos;
// ...then invert it to get the view matrix (so world == InverseView).
Matrix.Invert(ref world, out view);
The world matrix is sent to the shader as the InverseView matrix. As for the coordinate system: we originally used XNA and then switched over to SharpDX, so to avoid having to change a ton of code, we stuck with the XNA conventions, where Y is up and the matrices are right-handed.
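That part at least shouldn't be a problem, since SharpDX's math library has right-handed variants of everything; something along these lines (the fov/aspect/plane values are just placeholders):

// SharpDX keeps RH variants, so the XNA-style conventions carry over.
Matrix view = Matrix.LookAtRH(cameraPos, cameraPos + forward, Vector3.UnitY);
Matrix proj = Matrix.PerspectiveFovRH(
    (float)Math.PI / 4f,  // vertical fov
    16f / 9f,             // aspect ratio
    0.1f, 1000f);         // near / far planes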
Those shots are from rotating the camera only. So I guess that means the InverseView is broken...? I'm not really sure how that can be the case; the InverseView is used in other places and there aren't any problems there.
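One quick way to rule that out is to multiply the two back together and check how far off identity it is; if view really is the inverse of the world matrix above, the error should be down at float-epsilon noise:

// Sanity check: if view is really the inverse of world,
// world * view should come back as (nearly) the identity matrix.
Matrix check = world * view;
float[] c = check.ToArray(), id = Matrix.Identity.ToArray();
float maxError = 0f;
for (int i = 0; i < 16; i++)
    maxError = Math.Max(maxError, Math.Abs(c[i] - id[i]));
// Anything well beyond ~1e-5 here means the InverseView reaching
// the shader isn't what you think it is.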
That part of the code is actually changed from the sample. There's a comment on MJP's blog where a user uploaded a modified ComputeFrustum function that's supposed to reduce the jitter that occurs while the camera is moving. The code is here: http://pastebin.com/Yn5SVPUP. Is that code actually wrong? Should I just go back to MJP's original version instead?
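For comparison, my understanding of the unmodified approach (this is my reading of the blog post, not MJP's exact code) is that it just builds the frustum from view * projection, takes the four far-plane corners, and moves them into view space so the shader can interpolate a ray per pixel and scale it by linear depth:

// Standard far-plane-corner setup for position reconstruction.
var frustum = new BoundingFrustum(view * proj);
Vector3[] corners = frustum.GetCorners();

// Indices 4-7 are the far plane in the XNA ordering (after the
// remap mentioned earlier); the corners come back in world space,
// so transform them into view space for the shader.
var farCornersVS = new Vector3[4];
for (int i = 0; i < 4; i++)
    farCornersVS[i] = Vector3.TransformCoordinate(corners[i + 4], view);

If the pastebin version jitters differently from that, it would at least narrow down where the two diverge.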