Cascaded Shadow Map Issue
Ok cool, one potential issue down
Another thing I noticed: in your original images the shadow maps look upside down. This could be a SlimDX thing, and maybe you're compensating for it somewhere. So try this: in your .fx file, invert the .y element of your texture coordinates when you check whether a pixel is in shadow or not. Somewhere in there you will have something like:
tex2D(ShadowMapSampler, vShadowTexCoord)
Try:
vShadowTexCoord.y = 1.0 - vShadowTexCoord.y;
tex2D(ShadowMapSampler, vShadowTexCoord)
One last thing. Use just 1 cascade instead of 4 and see if the problem is still there. Other than that I'm out of ideas from what you've posted, sorry.
Shadow mapping and skeletal animation systems are the best way to get a headache. :)
Can't tell you what is wrong, but I would start with a debugging setup that could help pin down the bug.
1. Start with one cascade as Nyssa said.
2. Place some recognizable shadow casters: a sphere, a box and a pyramid. The latter is really helpful for checking whether the texture is upside down.
3. Draw the full shadow-camera frustums as objects.
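For step 3, one way to get the frustum corners to draw is to unproject the eight corners of the NDC cube through the inverse of the shadow camera's projection. A minimal Python sketch, assuming a directional light with an orthographic projection (so the inverse is just a scale); all numbers and names here are made up for illustration:

```python
# Unproject the D3D NDC cube corners back into light space.
# inv_proj is the inverse of a hypothetical orthographic projection
# mapping x,y in [-10,10] and z in [0,100] to NDC (D3D z in [0,1]).
from itertools import product

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-vector (column convention)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

inv_proj = [[10, 0,   0, 0],
            [0, 10,   0, 0],
            [0,  0, 100, 0],
            [0,  0,   0, 1]]

# D3D NDC cube: x,y in [-1,1], z in [0,1]
corners_ndc = [(x, y, z, 1) for x, y, z in product((-1, 1), (-1, 1), (0, 1))]
corners_light = [mat_vec(inv_proj, c) for c in corners_ndc]
print(corners_light[0])  # [-10, -10, 0, 1]
```

Transform these eight light-space corners by the shadow camera's world matrix and render them as lines, and you can see exactly where each cascade sits in the scene.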
Here's what it looks like with 1 cascade with a recognizable object in the scene:
[attachment=13682:RuinValor 2013-02-17 17-12-36-25.png]
And again, with the camera moved to the side a bit (the light position hasn't changed):
[attachment=13685:RuinValor 2013-02-17 17-12-48-57.png]
Edit: I should probably note, those lighter colored shadows directly beneath the blocks are static shadows, completely unrelated to this. Ignore them.
When you move the camera, the shadow seems to shift. Therefore I guess that your back transform works up to the projection part (viewport).
Well, check your back transformation pipeline (view->world->light space->light frustum), something like this
C = camera transform (in world space)
L = shadow camera transform (in world space)
Assumption: you got a pixel position pos_v in view space reconstructed from the framebuffer
// 1. View to world space
pos_w = C * pos_v // NOT the inverse camera transform
// 2. World to light space
pos_l = inverse(L) * pos_w
// 3. Project onto the shadow frustum
pos_final = L_proj * pos_l
// (then divide by w and remap from [-1,1] to [0,1] to get texture coords)
// 4. Get shadow texel
shadow_texel = shadow_lookup(shadow_map, pos_final)
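The first two steps of that chain can be sanity-checked with plain numbers. A tiny Python sketch using the same column-vector convention as the pseudocode, with simple translation matrices so the inverses are obvious (all values made up):

```python
# Numeric check of steps 1-2: view -> world -> light space.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-vector (column convention)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

C = translation(0, 0, 10)      # camera placed at (0, 0, 10), camera -> world
L_inv = translation(-5, 0, 0)  # inverse of a shadow camera at (5, 0, 0)

pos_v = [1, 2, -3, 1]          # reconstructed view-space position
pos_w = mat_vec(C, pos_v)      # 1. view -> world (NOT the inverse)
pos_l = mat_vec(L_inv, pos_w)  # 2. world -> light space
print(pos_w)  # [1, 2, 7, 1]
print(pos_l)  # [-4, 2, 7, 1]
```

If a point you know in world space doesn't round-trip like this through your actual matrices, you've found the broken stage.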
// Reconstruct view-space position from the depth buffer
float pixelDepth = DepthMap.Sample(DepthMapSampler, input.TexCoord).r;
float4 position = float4(pixelDepth * input.FrustumCornerVS, 1.0f);
The comment from the sample says that position will be in view space, which fits the assumption your equation makes. I've never seen this kind of position reconstruction before and don't really understand it, so I can only assume it's doing what it says. The next part is:
float4x4 inverseLVP = mul(InverseView, lightViewProjection);
float4 positionLight = mul(position, inverseLVP);
InverseView here is the inverse of the player's camera's view matrix. lightViewProjection is the View * Projection matrix for the cascade camera this pixel is in. Broken down, mul(position, InverseView) matches your step 1, inverse(L) should be the view matrix for the cascade camera, and L_proj its projection matrix; in mine they're combined into one matrix. So I think it's going through the right transforms, right?
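Folding the two transforms into one matrix is valid because matrix multiplication is associative. A quick Python check using HLSL's row-vector convention (mul(p, M) = p * M), with made-up matrices standing in for the real camera data:

```python
# Show that p * (InverseView * LVP) == (p * InverseView) * LVP.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def vec_mat(v, m):
    """Row vector times 4x4 matrix, like HLSL's mul(position, M)."""
    return [sum(v[k] * m[k][c] for k in range(4)) for c in range(4)]

# Illustrative stand-ins: a translation (row-vector form) and a scale.
InverseView = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [3, 4, 5, 1]]
lightViewProjection = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]

position = [1, 2, 3, 1]
combined = mat_mul(InverseView, lightViewProjection)
one_step = vec_mat(position, combined)
two_steps = vec_mat(vec_mat(position, InverseView), lightViewProjection)
print(one_step == two_steps)  # True: the combined matrix is equivalent
```

So the combined inverseLVP can't itself introduce the bug, as long as the two input matrices and the multiplication order are right for your row/column convention.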
This process seems correct, but I think one or more matrices are wrongly calculated. Looking at your screenshots, the sign is pointing to the left in your shadow map (middle). The lighting on the sign in the screenshot supports this (light coming from the right side of the screenshot); therefore the shadow should fall to the left, but it falls to the right.
Check and debug your InverseView first; it should put your pixels into world space. You could try to color-encode the world position relative to a reference point and a scale factor to check it. It should be stable even if you rotate your camera. Test shader:
world_position = InverseView * view_position;
SCALE = 1.0 / 100.0; // units?
color_encoded_position = (world_position - camera_position) * SCALE;
color_encoded_position = clamp(color_encoded_position * 0.5 + 0.5, 0.0, 1.0);
output_color = color_encoded_position;
This should color the world in a moving, axis-aligned 3D cube centered at your camera. Rotation should not affect the coloring, and movement should shift the cube.
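To make the encoding concrete, here is the same mapping as plain Python, per axis (illustrative numbers only): positions within 1/SCALE units of the camera land between 0 and 1, everything further clamps to pure 0 or 1.

```python
# Tiny numeric version of the test shader: encode a world position
# relative to the camera into a [0,1] color per axis.

def encode(world_position, camera_position, scale=1.0 / 100.0):
    rel = [(w - c) * scale for w, c in zip(world_position, camera_position)]
    # Map [-1,1] to [0,1] and clamp, like the shader's * 0.5 + 0.5
    return [min(1.0, max(0.0, r * 0.5 + 0.5)) for r in rel]

print(encode([50, 0, -25], [0, 0, 0]))  # [0.75, 0.5, 0.375]
print(encode([500, 0, 0], [0, 0, 0]))   # x clamps: [1.0, 0.5, 0.5]
```

A point 50 units to the camera's +x comes out reddish (0.75), a point 25 units along -z comes out slightly dark blue (0.375), which is why a broken InverseView shows up immediately as colors that swim when you rotate.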
[attachment=13689:RuinValor 2013-02-18 02-41-34-17.png]
[attachment=13690:RuinValor 2013-02-18 02-41-45-96.png]
[attachment=13691:RuinValor 2013-02-18 02-41-50-60.png]
Yes, the camera position. What happens if you stand still and rotate the camera only? In this case the "terrain texture" should not change; if it changes, your InverseView matrix is most likely broken.
Another test: if you only move the camera along its look-at or right axis (no rotation), the color pattern should stay the same (like a projected texture pointing along the up-vector, centered at the camera).