
## Recommended Posts

I'm a bit confused. Isn't that the calculation for a camera's world matrix? Edited by Telanor

##### Share on other sites

Yes. The view matrix is the inverse of the camera's world matrix. To get from view space to world space you need to invert the view matrix, which gives you back the camera's world matrix.

Another interpretation:

A pixel (voxel) in view space is like a vertex of a model in object space. You need the world matrix of the object (in this case the camera) to transform it into world space, much like a vertex of a model.
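To make the inverse relationship concrete, here is a small pure-Python sketch (my own helper names, not from any engine): build a rigid camera world matrix from a yaw angle and a position, invert it to get the view matrix, and check that the camera world matrix takes a view-space point back to world space.

```python
import math

def transform(m, p):
    # apply a row-major 4x4 matrix to the point (x, y, z, 1)
    return tuple(m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3]
                 for i in range(3))

def camera_world(yaw, pos):
    # rigid camera world matrix: yaw rotation about Y plus a translation
    c, s = math.cos(yaw), math.sin(yaw)
    return [[  c, 0.0,   s, pos[0]],
            [0.0, 1.0, 0.0, pos[1]],
            [ -s, 0.0,   c, pos[2]],
            [0.0, 0.0, 0.0, 1.0]]

def invert_rigid(m):
    # inverse of a rotation+translation matrix:
    # transpose the rotation, rotate and negate the translation
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [m[i][3] for i in range(3)]
    return [r[0] + [-(r[0][0] * t[0] + r[0][1] * t[1] + r[0][2] * t[2])],
            r[1] + [-(r[1][0] * t[0] + r[1][1] * t[1] + r[1][2] * t[2])],
            r[2] + [-(r[2][0] * t[0] + r[2][1] * t[1] + r[2][2] * t[2])],
            [0.0, 0.0, 0.0, 1.0]]

world = camera_world(0.7, (3.0, 1.0, -2.0))
view = invert_rigid(world)               # view matrix = inverted camera world matrix

point_ws = (5.0, 2.0, 4.0)
point_vs = transform(view, point_ws)     # world -> view
round_trip = transform(world, point_vs)  # view -> world via camera world matrix
```

The round trip should reproduce the original world-space point up to floating-point error.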

Edited by Ashaman73

##### Share on other sites
But isn't that what I'm already doing? Your suggestion seems like it would swap the world and view matrices.

##### Share on other sites

OK, that got a little messy.

I missed this:

The world matrix is sent to the shader as the InverseView matrix

Please validate the following; for shadow mapping you need to transform a point from view space to light space:

vs = view space
ws = world space
ls = light space
ts = texture space

point_vs --invert(view_matrix)--> point_ws --light_view_matrix--> point_ls --light_projection--> point_ts

Note:
invert(view_matrix) = invert(invert(camera_world_matrix)) = camera_world_matrix
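The chain above can be sketched in pure Python (the matrix contents are made up for illustration; pure translations keep the arithmetic easy to check by hand):

```python
def translation(x, y, z):
    # row-major 4x4 translation matrix
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def invert_translation(m):
    # inverse of a pure translation just negates the offset
    return translation(-m[0][3], -m[1][3], -m[2][3])

def transform(m, p):
    # apply a row-major 4x4 matrix to the point (x, y, z, 1)
    return tuple(m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3]
                 for i in range(3))

view_matrix = translation(0.0, 0.0, -10.0)            # hypothetical camera view
camera_world_matrix = invert_translation(view_matrix)  # invert(view_matrix)
light_view_matrix = translation(-4.0, 0.0, 0.0)        # hypothetical light view

point_vs = (1.0, 2.0, 3.0)
point_ws = transform(camera_world_matrix, point_vs)    # view space -> world space
point_ls = transform(light_view_matrix, point_ws)      # world space -> light space
```

Double inversion also cancels out, matching the note: inverting the inverted camera world matrix gives the camera world matrix back.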


Then check the view-space position reconstruction; make sure that position reconstruction plus the view-to-world transformation works first.

Edited by Ashaman73

##### Share on other sites
If I'm understanding correctly, it does seem like I'm doing exactly what you're describing. Just to be sure, I changed the code to break down the steps a bit more:
float pixelDepth = DepthMap.Sample(DepthMapSampler, input.TexCoord).r;
float4 point_vs = float4(pixelDepth * input.FrustumCornerVS, 1.0f);
float4 point_ws = mul(point_vs, InverseView);
float4 point_ls = mul(point_ws, lightView);
float4 point_ts = mul(point_ls, lightProjection);
float4 positionLight = point_ts;

The result is still the same.

Going back to the player's camera's matrices: I construct the camera world matrix using Matrix.RotationYawPitchRoll (I think you might have been confused by that function before; it returns a 4x4 rotation matrix) and then invert it to get the view matrix. It's easier for me that way, but I've also tested using Matrix.LookAtRH to construct the view matrix and confirmed it is (nearly) the same as the inverted world matrix. There's less than a 0.1 difference between the values, which I'm pretty sure is just floating-point error.

That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue here somewhere. I'll see if I can get a video of it and post it up so you can see what it's doing easier.

##### Share on other sites

That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue here somewhere.

I hope that is not the reason, but I need to check it myself later on. The test shader is just a color encoding of the world position of the corresponding pixel, transformed from view to world space. I hope there's no bug in it or a misinterpretation; a video might help here.

For testing I would try to display the single steps (point_vs, point_ws, ...) as colors and get a feeling if there might be a bug.

Do you already have any other shadow-map code that works, or is this your first try?

##### Share on other sites

So, here are three shots from my engine. The "crosshair" is centered at the boulder; even when moving the camera, the crosshair should not move.

Code:

vec4 t = (camera_world * vec4(position_vs,1)) - camera_world_position;
vec3 tmp_color = step(vec3(0),t.xyz)*0.5+0.5;

// use tmp_color to tint pixel
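The step() encoding above can be mimicked on the CPU (pure Python, hypothetical helper names) to see what colors the test shader produces: each channel comes out as 1.0 on the positive side of the reference position and 0.5 on the negative side, so the color boundaries mark the camera-relative axes.

```python
def step(edge, x):
    # GLSL step(): 0.0 where x < edge, 1.0 otherwise
    return 1.0 if x >= edge else 0.0

def debug_tint(t):
    # t is the world position minus the camera world position;
    # each channel: 1.0 on the positive side of an axis, 0.5 on the negative side
    return tuple(step(0.0, c) * 0.5 + 0.5 for c in t)
```

For example, a point that is in front of the camera position on x, behind on y, and in front on z gets the tint (1.0, 0.5, 1.0).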



##### Share on other sites

Here's the video of what the test shader is doing:

">

##### Share on other sites

That the color pattern is moving is a hint that something is wrong with one of the following:

- color encoding of the world coords

- position reconstruction in view space

- InverseView calculation

I would even take a step back and debug the view-space position reconstruction, e.g. encode the view position like this:

vec3 position_vs = ...
color.rg = step(0, position_vs.xy);
color.b = position_vs.z / far_clip_plane;



This should look like a colored depth map (blue channel is depth) with four equally sized quadrants in different colors. If you move or rotate the camera, the quadrants must stay fixed; only the blue channel should change.
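A CPU-side sketch of that encoding (pure Python, my own function name) may help when comparing against what the shader outputs:

```python
def encode_view_position(position_vs, far_clip_plane):
    # red/green flag which quadrant of view space the pixel falls in
    # (step(0, x) semantics: 0.0 for negative, 1.0 for non-negative);
    # blue is the normalized linear depth
    r = 1.0 if position_vs[0] >= 0.0 else 0.0
    g = 1.0 if position_vs[1] >= 0.0 else 0.0
    b = position_vs[2] / far_clip_plane
    return (r, g, b)
```

A point in the upper-right quadrant halfway to the far plane, for instance, encodes as (1.0, 1.0, 0.5); moving the camera changes only the blue channel if the reconstruction is correct.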

##### Share on other sites

Well that one seems to work just fine:

[attachment=13742:RuinValor 2013-02-20 02-50-22-47.png]

Doesn't rotate with the camera.  I can't really see much happening/changing with the blue channel though.

##### Share on other sites

Well, this is at least a start. Check if you write the linear depth to the g-buffer.

Ok, back to the inverse view matrix. I don't know XNA/SlimDX, but have you ensured that you upload the matrix in the right order (column-major)? Check this by switching

float4 point_ws = mul(point_vs, InverseView);

to

float4 point_ws = mul(InverseView, point_vs);
or use the transpose of InverseView

when doing the first color-encoding test.

Edited by Ashaman73

##### Share on other sites

Yea, it's always been mul(pos, matrix) for all the shaders. Swapping them doesn't fix it.

The player depth is written like this:

VSDepthOutput VS_Depth(VertexShaderInput input)
{
    VSDepthOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 cameraPosition = mul(worldPosition, View);
    float4 position = mul(cameraPosition, Projection);

    output.Position = position;
    output.Depth = cameraPosition.z;

    return output;
}

float PS_Depth(VSDepthOutput input) : SV_TARGET0
{
    float depth = -input.Depth / FarPlane;

    return depth;
}
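For reference, here is the depth encoding this shader performs, together with the inverse mapping the reconstruction side has to apply (pure Python sketch, hypothetical names). The negation matters: right-handed view space looks down -z, so the stored depth is positive.

```python
def linear_depth(view_z, far_plane):
    # what PS_Depth writes: right-handed view space looks down -z,
    # so negate to get a positive [0..1] depth
    return -view_z / far_plane

def view_z_from_depth(depth, far_plane):
    # the inverse mapping the reconstruction side must use
    return -depth * far_plane
```

If the reconstruction forgets to undo the negation, every reconstructed view-space point ends up mirrored behind the camera.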

Edited by Telanor

##### Share on other sites

Well, I'm slowly running out of ideas. Debugging shadow mapping or a skeletal animation system is always hard.

You already have the world position in the VS shader, which is good for debugging purposes. Transfer the world position to the pixel shader and output the difference between it and the back-transformed pixel position:

position_ws_VS = ... // transferred from the vertex shader
position_vs = ...    // reconstructed
position_ws = mul(position_vs, InverseView);
delta = abs(position_ws_VS - position_ws);
color = clamp(delta * SCALE, 0, 1); // HLSL clamp takes (x, min, max)


The error should be really small.

##### Share on other sites

What do you mean, I already have the world position in the VS shader? The shadow shader doesn't have it; it's a fullscreen quad shader. And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

##### Share on other sites

And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

But that should not be an issue. Just change the color of the outgoing pixel to the reconstruction error.

Btw. you don't need to reconstruct the point in view space by using the interpolated FrustumCornerVS; you can do it directly from the screen coordinate (gl_FragCoord in GLSL; I'm sure there's an equivalent in HLSL) and the size of the near plane in world coordinates:

screencoord = ... //given in pixel shader [0..1], if not normalized, see below
screensize_ws = .. // screen width/height in world space; can fold in 1/screenwidth and 1/screenheight in pixels if screencoord is not normalized
near_ws = .. // near plane distance from center of projection
far_ws = .. // far plane distance from center of projection
depth = .. // depth [0..1] between center of projection and far plane

// update screencoord
screencoord.xy = screencoord.xy * 2.0 - 1.0 ; // ->[-1..0..1]
position_vs.xy = (screencoord.xy * screensize_ws.xy * 0.5) * (far_ws*depth/near_ws);
position_vs.z = far_ws*depth;
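The pseudo-code above can be sketched in pure Python (hypothetical parameter names, following the snippet): scale the pixel's footprint on the near plane by similar triangles out to the reconstructed depth.

```python
def reconstruct_view_position(screencoord, screensize_ws, near_ws, far_ws, depth):
    # screencoord: normalized [0..1] screen position
    # screensize_ws: (width, height) of the near plane in world units
    # near_ws/far_ws: near/far plane distances from the center of projection
    # depth: [0..1], fraction of the way from the center of projection to far
    sx = screencoord[0] * 2.0 - 1.0        # -> [-1..1]
    sy = screencoord[1] * 2.0 - 1.0
    z = far_ws * depth                     # reconstructed view-space depth
    scale = z / near_ws                    # similar triangles: near plane -> z
    x = sx * screensize_ws[0] * 0.5 * scale
    y = sy * screensize_ws[1] * 0.5 * scale
    return (x, y, z)
```

At the right edge of the screen the reconstructed x should equal half the near-plane width scaled out to the depth, which is easy to verify by hand.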


Note: you need to integrate this carefully into your engine. I don't know how your engine works, therefore try to understand what I want to tell you before integrating it. You will most likely have to modify the code; copying it 1:1 will not always work.

E.g. is your depth the normalized distance between the near and far clip planes, or the distance between the center of projection and the far plane?

E.g. you negate the depth in your VS shader; do you account for this when doing the reconstruction?

Edited by Ashaman73

##### Share on other sites

position_vs.xy = (screencoord.xy * screensize_ws.wx) * (far_ws*depth/near_ws);

screensize_ws.wx?  Did you mean xy?

What range is screencoord supposed to be in? DX gives it as the actual on-screen pixel coordinate; in my case screencoord.xy = (536.5, 308.5). position_vs comes out to a huge number after the multiplication.

Edited by Telanor

##### Share on other sites

I've corrected my previous post. screensize_ws is the screen size in world space divided by the screen size in pixels. Best to look at my previous post again.

##### Share on other sites
I can't get the reconstructed position to even come close to position_ws_VS. I found something else out, however, while reading MJP's blog about reconstructing linear depth: the GetCorners function in SharpDX returns the corners in a different order than XNA. So I switched the indices around, and now the first shader output seems a little closer to what it should be (at least I think so). It no longer rotates with the camera, but it still moves with it:

">
Edited by Telanor

##### Share on other sites

It no longer rotates with the camera, but it's still moving with it:

That's the desired behavior! So the world-space transformation should work (better) now. Have you tested the complete shadow-map pipeline with this fix?

Edited by Ashaman73

##### Share on other sites
Yea, I did test it. Assuming I didn't leave in any weird test stuff (I didn't see any), the shadows don't make any sense at all now:

[attachment=13771:RuinValor 2013-02-21 01-58-54-96.png]

With 4 cascades and a recognizable object:

[attachment=13772:RuinValor 2013-02-21 02-04-45-41.png]

Edit - This is what it looks like with the precision increased from 8 to 32 bits:

[attachment=13773:RuinValor 2013-02-21 02-14-06-47.png]

[attachment=13774:RuinValor 2013-02-21 02-14-15-86.png]

##### Share on other sites

the shadows don't make any sense at all now

It's hard to say, but you might have entered the next stage: fighting shadow-mapping artifacts.

Currently your scene could be suffering from self-shadowing artifacts (or another bug), but basically that could mean the transformation works. Best to use one recognizable shadow caster (e.g. a sphere) and leave the rest as shadow receivers only (not casters).

precision increased from 8

An 8-bit depth buffer?? You should use at least 24 bits (e.g. two half floats or a 32f render target).

Edited by Ashaman73

##### Share on other sites
Increasing the shadow bias fixed the remaining issue:

[attachment=13775:RuinValor 2013-02-21 03-13-26-14.png]

So it's pretty much working now. I'm getting just about every other problem possible though: Peter Panning, jittering when moving, shadows disappearing when the object is behind the camera, and a weird issue where a triangle of shadow pops in on the right-hand side (you can see it in the screenshot). The last two issues are particularly problematic.

Edit: Switching to our old ESM shadow filtering (instead of the PCF from the sample) mostly fixes issues 1, 2, and possibly 4. The 3rd is still something I'd really like to fix. Edited by Telanor

##### Share on other sites

Just checking to see if anybody else has thought of anything? Thanks again for all the help; I'd like to wrap up this shadow crap! :P