Cascaded Shadow Map Issue

46 comments, last by riuthamus 11 years, 2 months ago

That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue here somewhere.

I hope that is not the reason, but I need to check it myself later on. The test shader is just a color encoding of the world position of the corresponding pixel, transformed from view to world space. I hope there's no bug in it or a misinterpretation; a video might help here.

For testing, I would try to display the intermediate steps (point_vs, point_ws, ...) as colors to get a feeling for where a bug might be.

Do you already have any other shadow map code that works, or is this your first try?


So, here are three shots from my engine. The "crosshair" is centered on the boulder; even when moving the camera, the crosshair should not move.

Code:


// transform the reconstructed view-space position to world space,
// then express it relative to the camera's world position
vec4 t = (camera_world * vec4(position_vs, 1.0)) - camera_world_position;
vec3 tmp_color = step(vec3(0.0), t.xyz) * 0.5 + 0.5;

// use tmp_color to tint the pixel

[attachment: gnoblins201302191921146.jpg]

[attachment: gnoblins201302191921181.jpg]
[attachment: gnoblins201302191921219.jpg]

Here's the video of what the test shader is doing:

[embedded video]

That the color pattern moves is a hint that something is wrong with one of the following:

- the color encoding of the world coordinates

- the position reconstruction in view space

- the InverseView calculation

I would even take a step back and debug the view position reconstruction, e.g. encode the view position like this:


vec3 position_vs = ... // reconstructed view-space position
color.rg = step(vec2(0.0), position_vs.xy);
color.b = position_vs.z / far_clip_plane;


This should look like a colored depth map (the blue channel is depth) with four equally sized quadrants in different colors. If you move or rotate the camera, the quadrants must stay fixed; only the blue channel should change.

Well, that one seems to work just fine:

[attachment=13742:RuinValor 2013-02-20 02-50-22-47.png]

Doesn't rotate with the camera. I can't really see much happening or changing with the blue channel, though.

Well, this is at least a start. Check whether you are writing linear depth to the g-buffer.

Ok, back to the inverse view matrix. I don't know XNA/SlimDX, but have you ensured that you upload the matrix in the right order (column-major)? Check this by switching

float4 point_ws = mul(point_vs, InverseView);

to

float4 point_ws = mul(InverseView, point_vs);

or use the transpose of InverseView when doing the first color encoding test.
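The mul-order question comes down to row-vector versus column-vector convention: for any matrix M and vector v, v·M equals Mᵀ·v, so swapping the operand order is equivalent to transposing the matrix. A minimal pure-Python sketch (the 4×4 matrix and vector here are arbitrary, just for illustration):

```python
def mat_vec(m, v):
    # column-vector convention: result[i] = sum_j m[i][j] * v[j]
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def vec_mat(v, m):
    # row-vector convention: result[j] = sum_i v[i] * m[i][j]
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def transpose(m):
    return [[m[j][i] for j in range(4)] for i in range(4)]

M = [[1, 2, 0, 3],
     [0, 1, 4, 0],
     [5, 0, 1, 2],
     [0, 0, 0, 1]]
v = [1.0, 2.0, 3.0, 1.0]

# mul(v, M) in row-vector style equals mul(transpose(M), v) in column-vector style
assert vec_mat(v, M) == mat_vec(transpose(M), v)
```

So if the matrix arrives on the GPU transposed, flipping the mul order (or transposing on upload) compensates exactly.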

Yeah, it's always been mul(pos, matrix) for all the shaders. Swapping them doesn't fix it.

The player depth is written like this:

VSDepthOutput VS_Depth(VertexShaderInput input)
{
    VSDepthOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 cameraPosition = mul(worldPosition, View);
    float4 position = mul(cameraPosition, Projection);

    output.Position = position;
    output.Depth = cameraPosition.z; // view-space z, negative in front of the camera

    return output;
}

float PS_Depth(VSDepthOutput input) : SV_TARGET0
{
    // negate view-space z and normalize by the far plane -> linear depth in [0..1]
    float depth = -input.Depth / FarPlane;

    return depth;
}
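The numbers this produces are easy to sanity-check: with a right-handed camera looking down -z (consistent with the negation in PS_Depth above), a point halfway to the far plane should store 0.5. A tiny sketch, with FarPlane assumed to be 100:

```python
# Linear depth as written by PS_Depth: view-space z is negative in front of
# the camera, so negate before normalizing by the far plane distance.
far_plane = 100.0

def linear_depth(view_z):
    return -view_z / far_plane

assert linear_depth(-25.0) == 0.25    # quarter of the way to the far plane
assert linear_depth(-100.0) == 1.0    # exactly at the far plane
```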

Well, I'm slowly running out of ideas. Debugging shadow maps or a skeletal animation system is always hard.

You already have the world position in the VS shader, which is good for debugging purposes. Transfer the world position to the pixel shader and output the difference between the world position and the back-transformed pixel:


position_ws_VS = ... // transferred from the vertex shader
position_vs = ...    // reconstructed
position_ws = mul(position_vs, InverseView);
delta = abs(position_ws_VS - position_ws);
color = clamp(delta * SCALE, 0, 1);

The error should be really small.
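The idea behind this test can be checked numerically outside the shader: transform a world-space point into view space, transform it back with the inverse matrix, and the delta must be near zero. A minimal sketch assuming a rigid view transform (rotation plus translation), with an arbitrary yaw and eye position just for illustration:

```python
import math

def rigid_view_matrix(yaw, eye):
    # simple view transform: rotate about Y by `yaw`, then move `eye` to the origin
    c, s = math.cos(yaw), math.sin(yaw)
    R = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    t = [-sum(R[i][j] * eye[j] for j in range(3)) for i in range(3)]
    return R, t

def apply(R, t, p):
    # p' = R * p + t
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def rigid_inverse(R, t):
    # inverse of rotation+translation: transpose the rotation, counter-rotate the translation
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, ti

R, t = rigid_view_matrix(0.7, [3.0, 1.5, -2.0])
p_ws = [4.0, 2.0, 9.0]
p_vs = apply(R, t, p_ws)            # world -> view
Ri, ti = rigid_inverse(R, t)
p_back = apply(Ri, ti, p_vs)        # view -> world via the inverse view transform
delta = [abs(a - b) for a, b in zip(p_ws, p_back)]
assert max(delta) < 1e-9            # round-trip error should be tiny
```

If the shader's delta is large while this kind of round trip is exact, the reconstructed position_vs (not InverseView) is the likely culprit.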

What do you mean, I already have the world position in the VS shader? The shadow shader doesn't have it; it's a fullscreen quad shader. And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

But that should not be an issue. Just change the color of the outgoing pixel to the error of the reconstruction.

Btw, you don't need to reconstruct the point in view space using the interpolated FrustumCornerVS; you can do it directly from the screen coordinate (gl_FragCoord in GLSL; I'm sure there's an equivalent in HLSL) and the size of the near plane in world coordinates:


screencoord = ...   // given in the pixel shader, [0..1]; if not normalized, see below
screensize_ws = ... // screen size in world space; can include 1/screenwidth and
                    // 1/screenheight in pixels if screencoord is not normalized
near_ws = ...       // near plane distance from the center of projection
far_ws = ...        // far plane distance from the center of projection
depth = ...         // depth in [0..1] between the center of projection and the far plane

// remap screencoord from [0..1] to [-1..1]
screencoord.xy = screencoord.xy * 2.0 - 1.0;
position_vs.xy = (screencoord.xy * screensize_ws.xy * 0.5) * (far_ws * depth / near_ws);
position_vs.z = far_ws * depth;

Note: you need to integrate this carefully into your engine. I don't know how your engine works, so try to understand what I'm telling you before integrating it. You will most likely have to modify the code; a 1:1 copy will not always work.

E.g., is your depth the normalized difference between the near and far clip planes, or the distance between the center of projection and the far plane?

E.g., you negate the depth in your VS shader; do you account for that when doing the reconstruction?
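The reconstruction above can be sanity-checked numerically by running the projection forward and then applying the formula in reverse. A minimal sketch, assuming a symmetric frustum, view-space depth measured as a positive distance along the view direction, and made-up values for the near/far planes and near-plane size:

```python
# Assumed camera parameters (illustrative values)
near_ws, far_ws = 1.0, 100.0
screensize_ws = (1.6, 0.9)   # near-plane width/height in world units

p_vs = (3.0, -1.2, 40.0)     # view-space point to recover (z positive, forward)

# Forward pass: normalized depth, then perspective-project onto the near
# plane and normalize to screen coordinates in [-1..1]
depth = p_vs[2] / far_ws
sc = tuple(p_vs[i] * near_ws / p_vs[2] / (screensize_ws[i] * 0.5)
           for i in range(2))

# Reconstruction, exactly as in the snippet above
rx = sc[0] * screensize_ws[0] * 0.5 * (far_ws * depth / near_ws)
ry = sc[1] * screensize_ws[1] * 0.5 * (far_ws * depth / near_ws)
rz = far_ws * depth

assert abs(rx - p_vs[0]) < 1e-9
assert abs(ry - p_vs[1]) < 1e-9
assert abs(rz - p_vs[2]) < 1e-9
```

The scale factor far_ws * depth / near_ws simply undoes the perspective divide: it pushes the point on the near plane back out to its true view-space distance.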

This topic is closed to new replies.
