guoxx

Reconstruct world space position from depth buffer


I know there are many ways to reconstruct world-space position from the depth buffer, but how Unreal Engine does it really confuses me.
 
Here is the source code from the .usf file:

    float SceneDepth = CalcSceneDepth(UV);
    float3 PositionTranslatedWorld = mul( float4( ScreenPos * SceneDepth, SceneDepth, 1 ), View.ScreenToTranslatedWorld ).xyz;

 
I think SceneDepth here is the linear (view-space) depth, ScreenPos is the screen-space position in the range [-1, 1], and translated world is a coordinate space similar to world space, so we can treat it the same as world space.
 
This code really confuses me. As far as I know, a simple way to reconstruct position from depth is something like this:

    float postProjectedDepth = depthBuffer.SampleLevel(depthBufferSampler, UV, 0);
    float3 ndcPosition = float3(ScreenPos, postProjectedDepth);
    float4 tempPosition = mul(float4(ndcPosition, 1), InvViewProjectionMatrix);
    float3 WSPosition = tempPosition.xyz / tempPosition.w;

 
Can anyone explain to me why the code from Unreal Engine works?

Edited by guoxx

Can anyone explain to me why the code from Unreal Engine works?

The same way your code does, roughly. They've pre-multiplied the ScreenPos by the depth to avoid the divide by w, but otherwise it's exactly the same.
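To see concretely why the divide by w can be skipped, here is a pure-Python numerical sketch. This is my own illustration, not UE source: it assumes a D3D-style perspective projection, row-vector convention (v * M), and an identity view matrix so view space stands in for world space; the matrix M below is my construction, and ScreenToTranslatedWorld presumably folds such a matrix together with the inverse view-projection.

```python
# Pure-Python sketch (my own illustration, not UE source). Assumes a D3D-style
# perspective projection, row-vector convention (v * M), and an identity view
# matrix so view space stands in for world space.
import math

def mat_vec(v, M):
    # Row vector times 4x4 matrix.
    return [sum(v[k] * M[k][j] for k in range(4)) for j in range(4)]

n, f = 0.1, 100.0
fov, aspect = math.radians(60.0), 16.0 / 9.0
t = 1.0 / math.tan(fov / 2.0)
A, B = f / (f - n), -n * f / (f - n)

# Perspective projection: note clip.w comes out equal to view-space depth z.
P = [[t / aspect, 0, 0, 0],
     [0,          t, 0, 0],
     [0,          0, A, 1],
     [0,          0, B, 0]]

view_pos = [2.0, -1.0, 10.0, 1.0]     # a point in view space
clip = mat_vec(view_pos, P)           # homogeneous clip position, clip[3] == z
z = clip[3]
screen = [clip[0] / z, clip[1] / z]   # ScreenPos in [-1, 1]

# UE-style input: float4(ScreenPos * z, z, 1). Its third component is the
# linear view-space depth, not the NDC depth.
ue_vec = [screen[0] * z, screen[1] * z, z, 1.0]

# A fixed matrix mapping (sx*z, sy*z, z, 1) back to the clip-space position.
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, A, 1],
     [0, 0, B, 0]]

reclip = mat_vec(ue_vec, M)
print(all(abs(a - b) < 1e-9 for a, b in zip(reclip, clip)))  # True
```

Because reclip equals the original clip-space position with w already equal to z, following M with the ordinary inverse view-projection lands exactly on the view/world position with w == 1, so no divide by w is needed. A matrix like ScreenToTranslatedWorld can fold M and the inverse view-projection into a single transform.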

Can anyone explain to me why the code from Unreal Engine works?

The same way your code does, roughly. They've pre-multiplied the ScreenPos by the depth to avoid the divide by w, but otherwise it's exactly the same.

Thanks for your help, but I can't derive this code mathematically. Can you help explain how it works?

Why does pre-multiplying ScreenPos by the depth avoid the divide by w?

It's actually fairly simple when you look at it in a drawing. Figure 1 on this page shows it well:

 

http://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/opengl-perspective-projection-matrix

 

You've got the eye point at A, the center of the near plane at B, and the screen position at C. From this you can see that finding the position in view space for a given depth is just a matter of applying a scale factor = (E - A) / (B - A). And from the view-space position you can transform back to world space.

 

So tl;dr - here's my code.

 

On the CPU

 

    // Inverse re-projection setup to reconstruct view-space position from linear depth
    M44 mProjectionI = MInverse(mProjection);
    V4 vSS = {1, -1, 1, 1}; // y inverted due to screen space
    V4 vVS = VTransform44(mProjectionI, vSS);
    vVS = VNormalize4E(vVS); // Need exact normalization to keep precision
    vVS = VDiv(vVS, VZZZZ(vVS));
    pPerViewData->vDepthScaleXY = VSet(vVS.x(), vVS.y(), 1.0f / vVS.x(), 1.0f / vVS.y());

On the GPU

    vPositionViewSpace.z = depth; // depth is negative
    vPositionViewSpace.xy = vScreenCoords * vPositionViewSpace.z * cbSharedPerView.vDepthScaleXY.xy;

Where vScreenCoords is in [-1, 1].
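As a small numerical check of the scale-factor idea, here is a pure-Python sketch. This is my own reformulation, not Henning's engine code: the projection terms are my assumptions, and I use a positive view-space depth for simplicity, whereas the snippet above uses a negative depth, which just flips signs consistently.

```python
# Pure-Python sketch of reconstructing view-space xy from screen coords and
# linear depth via per-axis scale factors (my illustration, positive-z).
import math

fov, aspect = math.radians(60.0), 16.0 / 9.0
t = 1.0 / math.tan(fov / 2.0)  # cot of half the vertical FOV

# CPU side: per-axis depth scale factors (tangents of the half-angles).
depth_scale_x = aspect / t
depth_scale_y = 1.0 / t

# "GPU" side: screen coords in [-1, 1] plus a known view-space depth.
view_z = 10.0
screen_x, screen_y = 0.25, -0.5
view_x = screen_x * view_z * depth_scale_x
view_y = screen_y * view_z * depth_scale_y

# Round trip: projecting the reconstructed point recovers the screen coords.
print(abs(view_x * (t / aspect) / view_z - screen_x) < 1e-9)  # True
print(abs(view_y * t / view_z - screen_y) < 1e-9)             # True
```

The division by vVS.z on the CPU above serves the same purpose: it normalizes the unprojected corner vector so that its xy components become exactly these per-unit-depth scale factors.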
 
Hope this helps a bit.
Henning

