Recovering world position from depth 2


Hi guys! After more experimentation with recovering world-space position from view-space depth, I realised that my calculations are still off by a scaling factor. I believe it has to do with the frustum.

Regarding the CryEngine2 slides, I unfortunately couldn't understand how they interpolate the distances from the camera position to the 4 corners of the far clipping plane. Do they compute them in the vertex shader and let the values interpolate across the quad? And what if all the distances at the 4 corners are the same? Has anyone here done the same thing, or can at least give me a clue? Thanks!

Edwin

Not exactly sure if this is what you are looking for, but here is my code snippet to transform a point from window coordinates to world coordinates. To find the corners of the frustum, just start from the normalized-device-coordinate step (the corners form the cube from (-1,-1,-1) to (1,1,1) in NDC).

// Ceye     = ModelView  * Cobj
// Cclip    = Projection * Ceye
// Cnormdev = Cclip / Wclip   (perspective division)
// Cwindow  = viewport transform of Cnormdev

// Window coordinates (Zw = 1 picks a point on the far plane)
float Xw = x;
float Yw = y;
float Zw = 1.0f;

// Normalized device coordinates
float Xd = 2.0f * Xw / _width - 1.0f;
float Yd = 2.0f * Yw / _height - 1.0f;
float Zd = 2.0f * Zw - 1.0f;

// Clip coordinates: we have to apply the reverse projection.
// The final W has to be 1:
//   r[3] * Xc + r[7] * Yc + r[11] * Zc + r[15] * Wc = 1
//   r[3] * Xd * Wc + r[7] * Yd * Wc + r[11] * Zd * Wc + r[15] * Wc = 1
//   Wc * (r[3] * Xd + r[7] * Yd + r[11] * Zd + r[15]) = 1
//   Wc = 1 / (r[3] * Xd + r[7] * Yd + r[11] * Zd + r[15])
FLG::Matrix4 proj = _view->getProjection ();
FLG::Matrix4 rproj;
proj.invert (rproj);
float Wc = 1.0f / (rproj[3] * Xd + rproj[7] * Yd + rproj[11] * Zd + rproj[15]);
float Xc = Xd * Wc;
float Yc = Yd * Wc;
float Zc = Zd * Wc;

// Eye coordinates
FLG::Vector4 Ce = rproj.transform4Vector (FLG::Vector4 (Xc, Yc, Zc, Wc));

// World coordinates
FLG::Matrix4 v2w = _view->getViewToWorld ();
FLG::Vector4 Cw = v2w.transform4Vector (Ce);
// Cw is a homogeneous point with w == 1 by construction,
// so its xyz is the world-space position

Regards,
Lauris

That's not what he means in that paper. (If anyone else cares, this is what we are referring to: http://www.ati.com/developer/siggraph06/Wenzel-Real-time_Atmospheric_Effects_in_Games.pdf)

I believe he means this:
For all 4 corner points of the frustum, determine the vector from the camera position to the far frustum corner.
Now, he assumes you have a linearized z value in [0,1] from a texture map you generated earlier. You scale this linearized z value by the interpolated vector, and you have, for each pixel, the vector from the camera to the point in the scene. When you add the camera position, you get the world position for each pixel.

He says in the paper that he stores the _distance_ from the camera in each frustum corner, though I can't see how that makes sense.

What I explained above should work, approximately, though.
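
Roughly, the vertex-shader half of that would look like the sketch below. This is just my guess at it; the constant names (g_FrustumCornersWS, g_WorldViewProj) and the per-vertex corner lookup are assumptions, not the paper's actual code:

// Hypothetical fullscreen-quad vertex shader, run once per quad corner.
// g_FrustumCornersWS[4] holds the world-space vectors from the camera to the
// four far-plane corners, precomputed on the CPU (NOT normalized; the length matters).
float4x4 g_WorldViewProj;
float3 g_FrustumCornersWS[4];

struct VS_OUTPUT
{
    float4 position         : POSITION;
    float2 texCoord         : TEXCOORD0;
    float3 frustumCornerDir : TEXCOORD1; // gets interpolated across the quad
};

VS_OUTPUT main (float4 position    : POSITION,
                float2 texCoord    : TEXCOORD0,
                float  cornerIndex : TEXCOORD1) // 0..3, one per quad vertex
{
    VS_OUTPUT OUT;
    OUT.position         = mul (position, g_WorldViewProj);
    OUT.texCoord         = texCoord;
    OUT.frustumCornerDir = g_FrustumCornersWS[(int) cornerIndex];
    return OUT;
}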

Hi guys!
I tried doing it, but I am still not getting a correct world position. However, I did manage to verify that the world z coordinate is at least correct, and closely matches the one given by the inverse view matrix.

So far I've got:

half3 worldPosition = g_EyeWorldPosition + (eyeViewPosition.z / 99.99) * IN.frustumCornerDir.xyz;

where eyeViewPosition.z is the view-space depth, 99.99 is the frustum far-plane distance, and frustumCornerDir is the directional vector interpolated across the quad.

Mhh, I don't quite get what your eyeViewPosition.z is supposed to be. Where does it come from?

It's actually the depth in view space (or camera space), calculated in the vertex shader and then interpolated across the quad into the pixel shader.

half4 viewspace_position = mul(half4(IN.vertex_position.xyz,1.0),worldViewMatrix);
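// viewspace_position.z is then written to an output interpolator, and arrives
// in the pixel shader as eyeViewPosition.z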

Why are you trying to do it with vector math instead of using the inverse view matrix?

Given the projection frustum's near and far planes, the X/Y field of view, and the viewspace depth, you should be able to turn a screen-space XY into a view-space XYZ. Then you just put that XYZ (along with a w = 1) through the inverse view matrix.
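
Something like this, off the top of my head (a sketch only, untested; the names and the D3D-style sign conventions are assumed):

// Hypothetical reconstruction from screen position plus view-space depth.
// screenXY is in NDC, i.e. [-1,1]; viewZ is the linear view-space depth;
// g_TanHalfFovY = tan(verticalFov / 2); g_Aspect = width / height.
float3 viewPos;
viewPos.x = screenXY.x * viewZ * g_TanHalfFovY * g_Aspect;
viewPos.y = screenXY.y * viewZ * g_TanHalfFovY;
viewPos.z = viewZ;
float3 worldPos = mul (float4 (viewPos, 1.0), g_InverseViewMatrix).xyz;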

Guest Anonymous Poster
Just interpolate the direction from the camera origin to the far-plane frustum corners over the screen quad (it MUST be the correct length, NOT normalized). In the pixel shader, scale this interpolated "eye" direction by the normalized depth sampled from the texture and add the camera origin (this should compile to one MAD instruction), and you get the world position.
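
Something like this (a sketch; the sampler and constant names are assumed):

// Hypothetical pixel-shader side: depthTex holds linear view-space depth
// already divided by the far-plane distance, i.e. a value in [0,1].
sampler2D depthTex;
float3 g_CameraPosWS;

float3 reconstructWorldPos (float2 texCoord, float3 frustumCornerDir)
{
    float normDepth = tex2D (depthTex, texCoord).r;
    // normDepth * frustumCornerDir + g_CameraPosWS is the single MAD
    return g_CameraPosWS + normDepth * frustumCornerDir;
}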

Hope this helps
Petr

Ah, I found my bug: I happened to use the full FOV instead of FOV/2.
Then again, I realise that this way of doing it may not always be better than using the inverse matrix. Imagine you need to convert to clip space to compute texture coordinates for another pass that uses the same view-space depth.

Thanks to all who replied!
Edwin

Quote:
Original post by superpig
Why are you trying to do it with vector math instead of using the inverse view matrix?

Well, the slides mention speed as the reason. I have never tested how big the advantage of vector vs. matrix math is in this case, but it may well be measurable, because this code is executed for every fragment of a fullscreen quad.
Mathematically, just using the inverse matrix would of course have been easier ;)

Well, I can think of another reason: if we do not need the full XYZ components of the view-space coordinates, we can store only the Z component (which is the view-space depth) and use an R32F surface format instead of a full 16-bit-per-channel floating-point surface.
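
For example, the depth-writing pass could be as simple as this sketch (g_FarPlane and the D3DFMT_R32F target are my assumptions about the setup):

// Hypothetical depth pass rendering into a D3DFMT_R32F target:
// store only the normalized linear view-space depth in the red channel.
float g_FarPlane;

float4 main (float viewDepth : TEXCOORD0) : COLOR
{
    return float4 (viewDepth / g_FarPlane, 0.0, 0.0, 0.0);
}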

Thanks again!
Edwin

Quote:
Original post by Dtag
Quote:
Original post by superpig
Why are you trying to do it with vector math instead of using the inverse view matrix?

Well, the slides mention speed as the reason. I have never tested how big the advantage of vector vs. matrix math is in this case, but it may well be measurable, because this code is executed for every fragment of a fullscreen quad.
Mathematically, just using the inverse matrix would of course have been easier ;)
As a straight comparison, an inverse matrix transform:

xWorldspacePosition = mul(xViewspacePosition, xInverseViewMatrix)

expands to four dp4 instructions, though usually at least one of them will get optimized away because you don't use that component of the result (e.g. w).
Petr's approach only uses a single mad instruction but requires an extra interpolator. Depends on what you're tight on, I guess.
