Reconstructing Position from Depth Data

Started by wolf
14 comments, last by csisy 12 years, 4 months ago
Hi, I am on a quest to figure out the fastest way to reconstruct a position value from depth data. Here is what I know:

1. If you stay in view space and can afford a dedicated buffer for a separate depth value, you can do the following (see the ShaderX5 article "Overcoming Deferred Shading Drawbacks"). Store the view-space distance of the pixel in a buffer like this:

G_Buffer.z = length(Input.PosInViewSpace);

Then you can retrieve the position in view space from this buffer like this:

// vertex shader
outEyeToScreen = float3(Input.ScreenPos.x * ViewAspect, Input.ScreenPos.y, inTanHalfFOV);

// pixel shader
float3 PixelPos = normalize(Input.vEyeToScreen) * G_Buffer.z;

This is nice because the cost per light is really low. If you do not have space for a dedicated depth buffer just for this, you might have to read the existing depth buffer (this is now also possible on PC cards). Additionally, this gives you view space only ... if you prefer world space, another transform is necessary.

2. Read the depth buffer and reconstruct the world-space position:

float3 screenPos = float3(PositionXY, gCurrDepth);
float4 worldPos4 = mul(float4(screenPos, 1.0f), WorldViewProjInverse);
worldPos4.xyz /= worldPos4.w;

This is cool as long as you can live with the transform in there, and you read only G-buffer data. I believe I have presented this a few times on this forum.

So now the question: is there something faster for reconstructing world-space position values from the depth buffer?
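To be concrete about approach 2, here is a minimal sketch as a self-contained helper; DepthTex, InvViewProj and the [0,1] texture-coordinate input are assumptions of this sketch, not names from above, and the only subtlety is mapping the texture coordinate back to [-1,1] clip space:

sampler2D DepthTex;     // depth buffer holding post-projection z/w (assumption)
float4x4  InvViewProj;  // inverse of the combined view-projection matrix (assumption)

float3 ReconstructWorldPos(float2 uv)
{
    float depth = tex2D(DepthTex, uv).r;
    // Map [0,1] texture coords to [-1,1] clip space (y flipped for D3D).
    float2 clipXY = float2(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f);
    float4 worldPos = mul(float4(clipXY, depth, 1.0f), InvViewProj);
    return worldPos.xyz / worldPos.w;
}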
Well, in the general case of an arbitrary view matrix, getting back to world space is going to involve something of the complexity of a matrix multiply. If you're willing to live with view space, though, you can simply use your second example but factor out the matrix multiply to get rid of all of the zeros in the inverse projection matrix (which is of similar complexity to the projection matrix... i.e. only about 5 non-zero elements). So you should be able to do it with something like 4 multiplies and a MADD for a typical perspective projection matrix, plus the divide by w of course.

[Edit] This page has the formulation.
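As a rough sketch of what that factoring might look like, assuming a standard D3D-style perspective projection built for row vectors (so _11, _22, _33 and _43 are the only elements that matter); the names are hypothetical:

// Sketch: view-space position from a hardware depth value, with the inverse
// projection reduced to its non-zero terms.
float3 ViewPosFromDepth(float2 ndcXY, float depth, float4x4 Proj)
{
    // depth = Proj._33 + Proj._43 / zView, so undo that first.
    float zView = Proj._43 / (depth - Proj._33);
    // Undo the x/y scaling; 1/Proj._11 and 1/Proj._22 would normally be
    // precomputed on the CPU so this stays a couple of multiplies.
    return float3(ndcXY.x * zView / Proj._11,
                  ndcXY.y * zView / Proj._22,
                  zView);
}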
Thanks AndyTX for getting back to me regarding this.
Quote:Original post by wolf
vertex shader: outEyeToScreen = float3(InputScreenPos.x * ViewAspect, Input.ScreenPos.y, inTanHalfFOV);

pixel shader: float3 PixelPos = normalize(Input.vEyeToScreen) * G_Buffer.z;


I just use the corresponding far-corner as the outEyeToScreen, then the length of it is guaranteed to be the far clip distance - in such a case, you can get rid of the normalize() and just use eyeToScreen * depth * oneOverFarClipDistance;
Hey agi_shi,
this sounds cool. Can you provide source or pseudo code?
I did not understand what you mean.

- Wolf
wolf: You may find this thread useful: Reconstructing pixel 3D position from depth, and more specifically this post by MJP.

You can also take a look at this presentation : Real-time Atmospheric Effects in Games Revisited (slide 12).

HellRaiZer
Wolf, as you mentioned, in the article I first compute the eye vector in view space and then, in the pixel shader, just multiply it by the depth value to get the original view-space position.

Luckily, retrieving the world-space position is very similar: you can either do as you suggest and multiply the view-space position by the inverse view matrix, which places a burden on the pixel shader, or you can move that math into the vertex shader.

So in the VS:
outEyeToScreen = float3(p.x * TanHalfFOV * ViewAspect, p.y * TanHalfFOV, 1);
outWorldEye    = mul(outEyeToScreen, (float3x3)matViewInv);


and in the PS:
float3 WorldPos = vWorldEye * depth + EyePos;


That way in the pixel shader you just need to perform a single mad in order to compute the world space position.

(Be aware that I changed the EyeRay formula a bit in order to avoid the normalize() in the PS. Also, the depth value is not computed from length(ViewPos) but from ViewPos.z, which is also faster to compute.)
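A minimal sketch of that depth write, assuming a floating-point render target so the raw view-space z fits (the names here are hypothetical):

// Write view-space z instead of length(PosInViewSpace). Combined with an
// eye ray whose z component is 1, this removes both the sqrt here and the
// normalize() in the lighting pass.
float4 DepthWritePS(float3 PosInViewSpace : TEXCOORD0) : COLOR0
{
    return float4(PosInViewSpace.z, 0, 0, 0);
}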

If that doesn't work for you, I can post the HLSL source I'm using to perform point lighting in deferred shading. It also gets the depth value from the Z-buffer instead of from the G-buffer, and computes the ScreenPos from the light volume positions, so you don't need to set up a vertex format with Position + Texcoord1.

Hope it helps.

God is Real unless is declared Integer
Quote:Original post by wolf
Hey agi_shi,
this sounds cool. Can you provide source or pseudo code?
I did not understand what you mean.

- Wolf


I got the idea for the exact method from MJP, but basically it goes like this:

- store view-space far-plane corners in normals attribute (or whatever) of full-screen quad
// top-left
position(vec3(-1, 1, 0));
normal(topLeftFarCorner);
// bottom-left
position(vec3(-1, -1, 0));
normal(bottomLeftFarCorner);
// bottom-right
position(vec3(1, -1, 0));
normal(bottomRightFarCorner);
// top-right
position(vec3(1, 1, 0));
normal(topRightFarCorner);


- use this as the 'eyeToScreen' or 'screenToEye' ray
ray = gl_Normal;


- store unclamped/unnormalized depth
gl_FragColor = vec4(length(viewSpacePosition.xyz), 0, 0, 0);


- retrieve view-space position
vec3 viewSpacePosition = normalize(ray) * depth;


- optimize the normalize() since we know that length(ray) == camera far clip distance
vec3 viewSpacePosition = ray * depth / farClipDistance;

or
vec3 viewSpacePosition = ray * depth * oneOverFarClipDistance;


- OR, instead of normalizing in the screen-space shader, you can normalize the depth before storing it
gl_FragColor = vec4(length(viewSpacePosition.xyz) / farClipDistance, 0, 0, 0);

and since the depth is already normalized
vec3 viewSpacePosition = ray * depth;
Thanks to all for your help. I raised your rating ...

fpuig: I think a lot of people here would be interested in seeing your source code :-)
Yeah... interpolating the position of the frustum corners and multiplying by normalized view-space Z is still the fastest way I know of getting a view-space position. If you use the frustum corners in world space and add the camera position after multiplying depth * frustumCorner, you get world space (as in fpuig's code). Unfortunately this requires view-space Z divided by the camera's farZ, so if you're not manually laying out a depth buffer you'd have to do some conversion to get this value from a regular Z-buffer.
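That conversion might look roughly like this, assuming a standard D3D perspective projection with near/far planes zn and zf (hypothetical names):

// Turn a hardware depth-buffer value back into view-space z divided by the
// far plane, ready for the frustum-corner multiply.
// Assumes hwDepth = zf/(zf - zn) - (zn*zf)/((zf - zn) * zView).
float LinearizeDepth(float hwDepth, float zn, float zf)
{
    float zView = (zn * zf) / (zf - hwDepth * (zf - zn));
    return zView / zf;
}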

BTW I should note that I originally got the technique from this presentation by Carsten Wenzel.

