Cascaded Shadow Map Issue
The inverse view matrix should convert your view-space positions into world space, and in world space the positions of your pixels are fixed, therefore rotating your camera (with the camera position fixed) should not change them. So, what is your view matrix and how do you calculate its inverse?
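To illustrate what I mean, here is a minimal sketch with made-up camera values (assuming SharpDX-style row-vector math): a view-space point pushed through the inverse view matrix has to land on the same world position no matter how the camera is rotated.

using System;
using SharpDX;

static class InverseViewCheck
{
    static void Main()
    {
        // A fixed point in world space and a fixed camera position.
        Vector3 worldPoint = new Vector3(3, 1, -5);
        Vector3 cameraPos = new Vector3(0, 2, 10);

        foreach (float yaw in new[] { 0.0f, 0.5f, 1.2f })
        {
            // Camera world matrix: orientation plus position.
            Matrix cameraWorld = Matrix.RotationYawPitchRoll(yaw, 0.1f, 0);
            cameraWorld.TranslationVector = cameraPos;

            // View matrix is the inverse of the camera world matrix.
            Matrix view = Matrix.Invert(cameraWorld);
            Matrix inverseView = Matrix.Invert(view); // equals cameraWorld

            // World -> view -> back to world. The result must equal worldPoint
            // for every yaw value; camera rotation must not change it.
            Vector3 viewPoint = Vector3.TransformCoordinate(worldPoint, view);
            Vector3 back = Vector3.TransformCoordinate(viewPoint, inverseView);
            Console.WriteLine(back); // always (3, 1, -5) up to float precision
        }
    }
}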
There have been many suggestions here already, so I'm just going out on a limb to say this: have you accounted for XNA's coordinate system orientation? You said that you are adapting an XNA code sample for use in DirectX/SharpDX. XNA uses a right-handed coordinate system while DirectX's default is left-handed, so the Z values are flipped the other way around. This could cause odd behavior in rotation matrices when you apply XNA code as-is.
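For example (arbitrary values, just a sketch): SharpDX ships both conventions, so whichever one you keep, use the matching view and projection helpers together, since pairing a right-handed view with a left-handed projection effectively flips Z and produces this kind of odd-looking behavior.

using System;
using SharpDX;

static class HandednessCheck
{
    static void Main()
    {
        Vector3 eye = new Vector3(0, 2, 10);
        Vector3 target = Vector3.Zero;
        Vector3 up = Vector3.UnitY;
        float fov = (float)(Math.PI / 4);

        // Right-handed pair (matches the XNA convention).
        Matrix viewRH = Matrix.LookAtRH(eye, target, up);
        Matrix projRH = Matrix.PerspectiveFovRH(fov, 16f / 9f, 0.1f, 100f);

        // Left-handed pair (the usual Direct3D sample convention).
        Matrix viewLH = Matrix.LookAtLH(eye, target, up);
        Matrix projLH = Matrix.PerspectiveFovLH(fov, 16f / 9f, 0.1f, 100f);

        // The two view matrices differ in the sign of the forward (Z) axis,
        // which is why reusing XNA math with LH matrices looks wrong.
        Console.WriteLine(viewRH);
        Console.WriteLine(viewLH);
    }
}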
// Camera world matrix: orientation plus position.
world = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0);
world.TranslationVector = pos;
// Invert it to get the view matrix.
Matrix.Invert(ref world, out view);
The world matrix is sent to the shader as the InverseView matrix. As for the coordinate system, we originally used XNA and then switched over to SharpDX, so to avoid having to change a ton of stuff, we stuck with the XNA system where Y=up and we use right-handed matrices.
You invert the camera orientation, but you don't invert the view matrix. The view matrix is already the inverted camera orientation, therefore you would need to invert it a second time, which just gives you back the camera world matrix. So, try just this:
view = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0);
view.TranslationVector = pos;
(invert(invert(M)) = M)
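A quick sanity check of that claim (arbitrary values, SharpDX API assumed):

using System;
using SharpDX;

static class DoubleInvertCheck
{
    static void Main()
    {
        // Camera world matrix, same construction as above.
        Matrix world = Matrix.RotationYawPitchRoll(0.7f, 0.2f, 0);
        world.TranslationVector = new Vector3(1, 2, 3);

        // Inverting twice gives the original matrix back, so the camera
        // world matrix itself is already the "InverseView" the shader wants.
        Matrix view = Matrix.Invert(world);
        Matrix inverseView = Matrix.Invert(view);

        Console.WriteLine(world);
        Console.WriteLine(inverseView); // same as world, up to float precision
    }
}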
Yes. The view matrix is the inverted camera world matrix. To get from view space to world space you need to invert the view matrix, which gives you back the camera world matrix.
Another interpretation:
A pixel (voxel) in view space is like a vertex of a model in object space. You need the world matrix of the object (in this case the camera) to transform it into world space, much like a vertex of a model.
Ok, that got a little bit messy. I missed this part:
"The world matrix is sent to the shader as the InverseView matrix."
Please validate the following. For shadow mapping you need to transform a point from view space to light space:
vs = view space
ws = world space
ls = light space
ts = texture space
point_vs == invert(view_matrix) ==> point_ws == light_view_matrix ==> point_ls == light_projection ==> point_ts
Note:
invert(view_matrix) = invert(invert(camera_world_matrix)) = camera_world_matrix
Then check the view-space position reconstruction; try to ensure that the position reconstruction plus the view-to-world transformation works first.
// Depth sampled from the depth map.
float pixelDepth = DepthMap.Sample(DepthMapSampler, input.TexCoord).r;
// Reconstruct the view-space position from the interpolated frustum corner ray.
float4 point_vs = float4(pixelDepth * input.FrustumCornerVS, 1.0f);
// View space -> world space (InverseView is the camera world matrix).
float4 point_ws = mul(point_vs, InverseView);
// World space -> light view space.
float4 point_ls = mul(point_ws, lightView);
// Light view space -> light projection space.
float4 point_ts = mul(point_ls, lightProjection);
float4 positionLight = point_ts;
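If it helps, here is a CPU-side sketch of the same chain with placeholder matrices (the light setup and the test point are made up; only the order of the transforms matters). The world position recovered through InverseView has to match the original point:

using System;
using SharpDX;

static class ChainCheck
{
    static void Main()
    {
        // Placeholder camera, same construction as in the posts above.
        Matrix cameraWorld = Matrix.RotationYawPitchRoll(0.4f, 0.1f, 0);
        cameraWorld.TranslationVector = new Vector3(0, 2, 10);
        Matrix view = Matrix.Invert(cameraWorld);
        Matrix inverseView = cameraWorld; // invert(view)

        // Placeholder directional light looking at the origin.
        Matrix lightView = Matrix.LookAtRH(new Vector3(10, 10, 10), Vector3.Zero, Vector3.UnitY);
        Matrix lightProjection = Matrix.OrthoRH(20, 20, 0.1f, 50f);

        // Pick a known world-space point and walk it through the chain.
        Vector4 point_ws = new Vector4(3, 1, -5, 1);
        Vector4 point_vs = Vector4.Transform(point_ws, view);            // world -> view
        Vector4 back_ws  = Vector4.Transform(point_vs, inverseView);     // view -> world
        Vector4 point_ls = Vector4.Transform(back_ws, lightView);        // world -> light view
        Vector4 point_ts = Vector4.Transform(point_ls, lightProjection); // light view -> light projection

        Console.WriteLine(point_ws);
        Console.WriteLine(back_ws); // must match point_ws up to float precision
    }
}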
The result is still the same. Going back to the player's camera matrices: I construct the camera world matrix using Matrix.RotationYawPitchRoll (I think you might have been confused earlier by that function; it returns a 4x4 rotation matrix) and then invert it to get the view matrix. It's easier for me that way, but I've also tested using Matrix.LookAtRH to construct the view matrix and confirmed it is (nearly) the same as the inverted world matrix. The values differ by less than 0.1, which I'm pretty sure is just floating-point error.
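Roughly, the comparison looked like this (just a sketch; the forward and up vectors are derived from the same yaw/pitch rotation):

using System;
using SharpDX;

static class ViewComparison
{
    static void Main()
    {
        float yaw = 0.6f, pitch = -0.2f;
        Vector3 pos = new Vector3(4, 2, 8);

        // View matrix built by inverting the camera world matrix.
        Matrix world = Matrix.RotationYawPitchRoll(yaw, pitch, 0);
        world.TranslationVector = pos;
        Matrix viewFromInvert = Matrix.Invert(world);

        // View matrix built with LookAtRH using the same orientation.
        // In a right-handed, Y-up setup the camera looks down -Z.
        Matrix rotation = Matrix.RotationYawPitchRoll(yaw, pitch, 0);
        Vector3 forward = Vector3.TransformNormal(-Vector3.UnitZ, rotation);
        Vector3 up = Vector3.TransformNormal(Vector3.UnitY, rotation);
        Matrix viewFromLookAt = Matrix.LookAtRH(pos, pos + forward, up);

        // The two should match element for element up to float precision.
        Console.WriteLine(viewFromInvert);
        Console.WriteLine(viewFromLookAt);
    }
}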
That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue here somewhere. I'll see if I can get a video of it and post it so you can see what it's doing more easily.