
Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

47 replies to this topic

### #21 Telanor (Members)

Posted 18 February 2013 - 03:47 AM

Those are shots from just rotating the camera. So I guess that means the InverseView is broken...? I'm not really sure how that can be the case; InverseView is used in other places and they don't have any problems.

### #22 Ashaman73 (Members)

Posted 18 February 2013 - 05:41 AM

The inverse view matrix converts view space into world space, and in world space the positions of your pixels are fixed; therefore rotating the camera (with its position fixed) should not change them. So, what is your view matrix, and how do you calculate its inverse?
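This invariance is easy to check outside the engine. A minimal plain-Python sketch (not engine code; the rigid camera transform is written as a 3x3 rotation plus a position, and all names and numbers are illustrative):

```python
import math

# Sketch of the claim above: view = inverse(camera world), so applying the
# inverse view to a view-space point recovers the same world position no
# matter how the camera is rotated.

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def world_to_view(R, cam_pos, p):
    # view matrix = inverse of the camera world matrix: R^T * (p - cam_pos)
    return mat_vec(transpose(R), [p[i] - cam_pos[i] for i in range(3)])

def view_to_world(R, cam_pos, p_vs):
    # inverse view = the camera world matrix itself: R * p_vs + cam_pos
    q = mat_vec(R, p_vs)
    return [q[i] + cam_pos[i] for i in range(3)]

p_world = [3.0, 1.0, -5.0]      # a fixed point in the scene
cam_pos = [0.0, 2.0, 0.0]       # camera position stays fixed

for yaw in (0.0, 0.7, 2.1):     # only the camera rotation changes
    R = rot_y(yaw)
    p_vs = world_to_view(R, cam_pos, p_world)   # changes with yaw
    p_ws = view_to_world(R, cam_pos, p_vs)      # must not change
    assert all(abs(p_ws[i] - p_world[i]) < 1e-9 for i in range(3))
```

If the reconstructed world position moves when only the camera rotation changes, the equivalent of `view_to_world` in the shader (the InverseView path) is the part to suspect.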

Ashaman

### #23 CC Ricers (Members)

Posted 18 February 2013 - 11:51 AM

There have been many suggestions here already, so I'm just going out on a limb to say this: have you accounted for XNA's coordinate system orientation? You said that you are adapting an XNA code sample for use in DirectX/SharpDX. XNA uses a right-handed coordinate system while DirectX's is left-handed, so the Z values are flipped the other way around. This could cause odd behavior in rotation matrices when you apply XNA code as-is.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor

### #24 Telanor (Members)

Posted 18 February 2013 - 04:44 PM

Here's how I do the calculation:
```csharp
world = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0);
world.TranslationVector = pos;

Matrix.Invert(ref world, out view);
```
The world matrix is sent to the shader as the InverseView matrix. As for the coordinate system, we originally used XNA and then switched over to SharpDX, so to avoid having to change a ton of stuff, we stuck with the XNA system where Y=up and we use right-handed matrices.

### #25 Ashaman73 (Members)

Posted 19 February 2013 - 01:15 AM

You invert the camera orientation, but not the view matrix. The view matrix is already the inverted camera orientation, so you end up inverting it a second time. So, try just this:

```csharp
view = Matrix.RotationYawPitchRoll(Yaw, Pitch, 0);
view.TranslationVector = pos;
```

(invert*invert=identity)

Edited by Ashaman73, 19 February 2013 - 01:20 AM.

Ashaman

### #26 Telanor (Members)

Posted 19 February 2013 - 01:18 AM

I'm a bit confused. Isn't that the calculation for a camera's world matrix?

Edited by Telanor, 19 February 2013 - 01:29 AM.

### #27 Ashaman73 (Members)

Posted 19 February 2013 - 01:57 AM

Yes. The view matrix is the inverted camera world matrix. To get from view space to world space you need to invert the view matrix, which gives you the camera world matrix.

Another interpretation:

A pixel (voxel) in view space is like a vertex of a model (object space). You need the world matrix of the object (in this case, the camera) to transform it into world space, much like a vertex of a model.

Edited by Ashaman73, 19 February 2013 - 02:00 AM.

Ashaman

### #28 Telanor (Members)

Posted 19 February 2013 - 01:59 AM

But isn't that what I'm already doing? Your suggestion seems like it would swap the world and view matrices.

### #29 Ashaman73 (Members)

Posted 19 February 2013 - 02:11 AM

Ok, this got a little bit messy. I missed this:

> The world matrix is sent to the shader as the InverseView matrix

Please validate the following; for shadow mapping you need to transform a point from view space to light space:

```
vs = view space
ws = world space
ls = light space
ts = texture space

point_vs ==invert(view_matrix)==> point_ws ==light_view_matrix==> point_ls ==light_projection==> point_ts
```

Note: invert(view_matrix) = invert(invert(camera_world_matrix)) = camera_world_matrix

Then check the view-space position reconstruction; try to ensure that the position reconstruction plus the view-to-world transformation works first.
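The chain and the double-inversion note can be sanity-checked numerically. A plain-Python sketch with rigid transforms only (the light projection step is omitted, and all rotations, positions, and points are made up):

```python
import math

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def apply_world(R, pos, p):
    # object/camera world matrix: p_ws = R * p + pos
    q = mat_vec(R, p)
    return [q[i] + pos[i] for i in range(3)]

def apply_view(R, pos, p):
    # view matrix = inverse of the world matrix: R^T * (p - pos)
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return mat_vec(Rt, [p[i] - pos[i] for i in range(3)])

Rc, cam_pos = rot_y(0.6), [1.0, 2.0, 3.0]        # camera
Rl, light_pos = rot_y(-1.2), [0.0, 10.0, 0.0]    # light

p_ws = [4.0, 0.5, -2.0]                          # known world-space point
p_vs = apply_view(Rc, cam_pos, p_ws)             # into camera view space

# invert(view_matrix) == camera_world_matrix: back to the same world point
back_ws = apply_world(Rc, cam_pos, p_vs)
assert max(abs(a - b) for a, b in zip(back_ws, p_ws)) < 1e-9

# point_vs ==invert(view)==> point_ws ==light_view==> point_ls
p_ls = apply_view(Rl, light_pos, back_ws)
direct = apply_view(Rl, light_pos, p_ws)         # light's view of the known point
assert max(abs(a - b) for a, b in zip(p_ls, direct)) < 1e-9
```

If the engine's version of this chain disagrees with transforming a known world point directly into light space, one of the uploaded matrices is the wrong one (or transposed).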

Edited by Ashaman73, 19 February 2013 - 02:23 AM.

Ashaman

### #30 Telanor (Members)

Posted 19 February 2013 - 04:52 AM

If I'm understanding correctly, it does seem like I'm doing exactly what you're describing. Just to be sure, I changed the code to break down the steps a bit more:
```hlsl
float pixelDepth = DepthMap.Sample(DepthMapSampler, input.TexCoord).r;
float4 point_vs = float4(pixelDepth * input.FrustumCornerVS, 1.0f);
float4 point_ws = mul(point_vs, InverseView);
float4 point_ls = mul(point_ws, lightView);
float4 point_ts = mul(point_ls, lightProjection);
float4 positionLight = point_ts;
```
The result is still the same.

Going back to the player camera's matrices: I construct the camera world matrix using Matrix.RotationYawPitchRoll (I think you might have been confused by that function before; it returns a 4x4 rotation matrix) and then invert it to get the view matrix. It's easier for me that way, but I've also tested using Matrix.LookAtRH to construct the view matrix and have confirmed it is (nearly) the same as the inverted world matrix. The values differ by less than 0.1, which I'm pretty sure is just floating-point error.
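That equivalence can be reproduced outside the engine. A plain-Python sketch (not SharpDX code): a look-at basis built by hand from eye/target/up versus the inverted yaw/pitch world matrix, under a right-handed, -Z-forward convention with made-up numbers:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def norm(v):
    l = math.sqrt(sum(x * x for x in v))
    return [x / l for x in v]

yaw, pitch = 0.8, -0.3
eye = [2.0, 5.0, -1.0]
R = mat_mul(rot_y(yaw), rot_x(pitch))   # camera orientation, -Z forward (RH)

def view_from_inverted_world(p):
    # invert the rigid camera world matrix: R^T * (p - eye)
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return mat_vec(Rt, sub(p, eye))

# look-at style basis, as a LookAtRH-type helper would build it from eye/target/up
forward = mat_vec(R, [0.0, 0.0, -1.0])
target = [eye[i] + forward[i] for i in range(3)]
zaxis = norm(sub(eye, target))
xaxis = norm(cross([0.0, 1.0, 0.0], zaxis))
yaxis = cross(zaxis, xaxis)

def view_from_look_at(p):
    d = sub(p, eye)
    return [sum(ax[i] * d[i] for i in range(3)) for ax in (xaxis, yaxis, zaxis)]

p = [7.0, -2.0, 4.0]
a, b = view_from_inverted_world(p), view_from_look_at(p)
assert max(abs(a[i] - b[i]) for i in range(3)) < 1e-9
```

For a yaw/pitch camera (no roll) the two constructions agree to floating-point precision, which matches the "less than 0.1 difference" observation; any larger gap would point at a real convention mismatch.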

That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue somewhere. I'll see if I can get a video of it and post it so you can see what it's doing more easily.

### #31 Ashaman73 (Members)

Posted 19 February 2013 - 05:59 AM

> That said, the test shader you gave me doesn't seem to behave the way you described, so there does seem to be an issue here somewhere.

I hope that isn't the reason, but I need to check it myself later on. The test shader is just a color encoding of the world position of the corresponding pixel, transformed from view to world space. I hope there's no bug in it or a misinterpretation; a video might help here.

For testing, I would try displaying the single steps (point_vs, point_ws, ...) as colors to get a feeling for where the bug might be.

Do you already have any other shadow map code that works, or is this your first try?

Ashaman

### #32 Ashaman73 (Members)

Posted 19 February 2013 - 12:31 PM

So, here are three shots from my engine. The "crosshair" is centered on the boulder; even when the camera moves, the crosshair should not move.

Code:

```glsl
vec4 t = (camera_world * vec4(position_vs, 1)) - camera_world_position;
vec3 tmp_color = step(vec3(0), t.xyz) * 0.5 + 0.5;

// use tmp_color to tint the pixel
```

Ashaman

### #33 Telanor (Members)

Posted 19 February 2013 - 05:35 PM

Here's the video of what the test shader is doing:

### #34 Ashaman73 (Members)

Posted 20 February 2013 - 12:31 AM

That the color pattern moves is a hint that something is wrong with one of:

- the color encoding of the world coordinates
- the position reconstruction in view space
- the InverseView calculation

I would even take a step back and debug the view position reconstruction, e.g. encode the view position like this:

```glsl
vec3 position_vs = ...;
color.rg = step(0.0, position_vs.xy);
color.b = position_vs.z / far_clip_plane;
```

This should look like a colored depth map (the blue channel is depth) with four equally sized quadrants of different colors. If you move or rotate the camera, the quadrants must stay fixed; only the blue channel should change.

Ashaman

### #35 Telanor (Members)

Posted 20 February 2013 - 01:52 AM

Well that one seems to work just fine:

It doesn't rotate with the camera. I can't really see much happening or changing in the blue channel, though.

### #36 Ashaman73 (Members)

Posted 20 February 2013 - 02:13 AM

Well, this is at least a start. Check whether you are writing linear depth to the G-buffer.

Ok, back to the inverse view matrix. I don't know XNA/SlimDX, but have you ensured that you upload the matrix in the right order (column-major)? Check this by switching

```hlsl
float4 point_ws = mul(point_vs, InverseView);
```

to

```hlsl
float4 point_ws = mul(InverseView, point_vs);
```

or by using the transpose of InverseView when doing the first color encoding test.
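The layout issue can be demonstrated without a GPU. A plain-Python sketch (illustrative matrices, not the engine's): with row vectors, mul(v, M) equals mul(transpose(M), v) with column vectors, so a matrix uploaded in the wrong major order behaves exactly like its transpose:

```python
M = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
v = [1.0, 0.0, 2.0]

def row_vec_mul(v, M):
    # v treated as a row vector: result[j] = sum_i v[i] * M[i][j]
    return [sum(v[i] * M[i][j] for i in range(3)) for j in range(3)]

def col_vec_mul(M, v):
    # v treated as a column vector: result[i] = sum_j M[i][j] * v[j]
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

Mt = [[M[j][i] for j in range(3)] for i in range(3)]

# mul(v, M) with row vectors == mul(M^T, v) with column vectors
assert row_vec_mul(v, M) == col_vec_mul(Mt, v)
# so a matrix uploaded in the wrong layout acts like its transpose
assert row_vec_mul(v, Mt) == col_vec_mul(M, v)
```

This is why swapping the `mul` argument order or transposing the uploaded matrix are equivalent tests: both flip the effective convention.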

Edited by Ashaman73, 20 February 2013 - 02:14 AM.

Ashaman

### #37 Telanor (Members)

Posted 20 February 2013 - 02:20 AM

Yeah, it's always been mul(pos, matrix) for all the shaders. Swapping them doesn't fix it.

The player depth is written like this:

```hlsl
{
	VSDepthOutput output;

	float4 worldPosition = mul(input.Position, World);
	float4 cameraPosition = mul(worldPosition, View);
	float4 position = mul(cameraPosition, Projection);

	output.Position = position;
	output.Depth = cameraPosition.z;

	return output;
}

float PS_Depth(VSDepthOutput input) : SV_TARGET0
{
	float depth = -input.Depth / FarPlane;

	return depth;
}
```
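As an aside, it's worth pinning down the encode/decode round trip this shader implies; a tiny Python sketch under the stated RH convention (view-space z is negative in front of the camera; the numbers are made up):

```python
far_plane = 100.0
camera_z = -12.5                  # view-space z, negative in front of an RH camera

stored = -camera_z / far_plane    # what PS_Depth writes: a positive [0..1] value
decoded_z = -stored * far_plane   # what any reconstruction must undo

assert stored == 0.125
assert decoded_z == camera_z
```

Any reconstruction code that assumes positive view-space depth has to account for that sign flip, or every reconstructed position ends up behind the camera.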

Edited by Telanor, 20 February 2013 - 02:22 AM.

### #38 Ashaman73 (Members)

Posted 20 February 2013 - 03:41 AM

Well, I'm slowly running out of ideas. Debugging shadow mapping or a skeletal animation system is always hard.

You already have the world position in the VS shader, which is good for debugging purposes. Transfer the world position to the pixel shader and output the difference between it and the back-transformed pixel:

```hlsl
position_ws_VS = ...; // transferred from the vertex shader
position_vs = ...;    // reconstructed
position_ws = mul(position_vs, InverseView);
delta = abs(position_ws_VS - position_ws);
color = clamp(delta * SCALE, 0, 1);
```

The error should be really small.

Ashaman

### #39 Telanor (Members)

Posted 20 February 2013 - 03:48 AM

What do you mean, I already have the world position in the VS shader? The shadow shader doesn't have it; it's a full-screen quad shader. And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

### #40 Ashaman73 (Members)

Posted 20 February 2013 - 04:43 AM

> And the depth shader doesn't have the camera frustum info needed to do the reconstruction.

But that should not be an issue. Just change the color of the outgoing pixel to the error of the reconstruction.

Btw, you don't need to reconstruct the view-space point using the interpolated FrustumCornerVS; you can do it directly from the screen coordinate (gl_FragCoord in GLSL; I'm sure there's an equivalent in HLSL) and the size of the near plane in world coordinates:

```hlsl
screencoord = ...;   // given in the pixel shader, [0..1]; if not normalized, see below
screensize_ws = ...; // screen size in world space; can fold in 1/screenwidth and
                     // 1/screenheight in pixels if screencoord is not normalized
near_ws = ...;       // near plane distance from the center of projection
far_ws = ...;        // far plane distance from the center of projection
depth = ...;         // depth [0..1] between the center of projection and the far plane

// remap screencoord
screencoord.xy = screencoord.xy * 2.0 - 1.0; // -> [-1..0..1]
position_vs.xy = (screencoord.xy * screensize_ws.xy * 0.5) * (far_ws * depth / near_ws);
position_vs.z = far_ws * depth;
```

Note: you need to integrate this into your engine carefully. I don't know how your engine works, so try to understand what I'm telling you before integrating it. You will most likely have to modify the code; it will not always work 1:1.

E.g. is your depth the normalized difference between the near and far clip planes, or the distance from the center of projection to the far plane?

E.g. you negate the depth in your VS shader; do you account for this when doing the reconstruction?
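The reconstruction above can be verified round-trip in plain Python, under the assumptions the post spells out: depth is the view-space distance along the view axis divided by far (z made positive, matching the negation in Telanor's depth shader), and screencoord is normalized to [0..1]. All numbers here are hypothetical:

```python
import math

near, far = 0.1, 100.0
fov_y, aspect = math.radians(60.0), 16.0 / 9.0
half_h = near * math.tan(fov_y * 0.5)           # half near-plane height, world units
half_w = half_h * aspect
screensize_ws = (2.0 * half_w, 2.0 * half_h)    # full near-plane size

# a view-space point (z as a positive distance in front of the camera)
p_vs = (3.0, -1.5, 20.0)

# forward: perspective-project onto the near plane -> normalized screencoord + depth
ndc = (p_vs[0] * near / (p_vs[2] * half_w),
       p_vs[1] * near / (p_vs[2] * half_h))
screencoord = ((ndc[0] + 1.0) * 0.5, (ndc[1] + 1.0) * 0.5)
depth = p_vs[2] / far

# backward: the reconstruction from the post
sx = screencoord[0] * 2.0 - 1.0
sy = screencoord[1] * 2.0 - 1.0
rx = (sx * screensize_ws[0] * 0.5) * (far * depth / near)
ry = (sy * screensize_ws[1] * 0.5) * (far * depth / near)
rz = far * depth

assert max(abs(rx - p_vs[0]), abs(ry - p_vs[1]), abs(rz - p_vs[2])) < 1e-9
```

Running the same round trip against the engine's actual projection parameters is a quick way to confirm which depth convention (and which sign) the stored depth really uses.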

Edited by Ashaman73, 20 February 2013 - 04:55 AM.

Ashaman
