# Reconstructing Position from Depth Data

## 15 posts in this topic

Hi, I am on my quest to figure out the fastest way to reconstruct a position value from depth data. Here is what I know:

1. If you stay in view space and can afford a dedicated buffer for a separate depth value, you can do the following (see the article "Overcoming Deferred Shading Drawbacks" in ShaderX5). Store the view-space distance of the pixel in a buffer:

[code]
G_Buffer.z = length(Input.PosInViewSpace);
[/code]

Then you can retrieve the view-space position from this buffer like this:

[code]
// vertex shader
outEyeToScreen = float3(Input.ScreenPos.x * ViewAspect, Input.ScreenPos.y, invTanHalfFOV);

// pixel shader
float3 PixelPos = normalize(Input.vEyeToScreen) * G_Buffer.z;
[/code]

This is nice because the cost per light is really low. If you do not have space to store a dedicated depth buffer just for this, you might have to read the available depth buffer (this is now also possible on PC cards). Additionally, this only gives you view space; if you prefer world space, another transform is necessary.

2. Read the depth buffer and reconstruct the world-space position:

[code]
float3 screenPos = float3(PositionXY, gCurrDepth);
float4 worldPos4 = mul(float4(screenPos, 1.0f), WorldViewProjInverse);
worldPos4.xyz /= worldPos4.w;
[/code]

This is cool as long as you can live with the transform in there and you only read G-Buffer data. I believe I presented this a few times in this forum.

So now the question: is there something faster to reconstruct world-space position values from the depth buffer?
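A quick CPU-side sanity check of option 2 (the matrices, FOV, and test point below are made up for illustration; NumPy uses column vectors here, whereas the HLSL above uses row vectors):

```python
import numpy as np

def perspective(fov_y, aspect, zn, zf):
    # D3D-style projection (z mapped to [0, 1] after the w divide)
    t = 1.0 / np.tan(fov_y / 2)
    return np.array([[t / aspect, 0, 0, 0],
                     [0, t, 0, 0],
                     [0, 0, zf / (zf - zn), -zn * zf / (zf - zn)],
                     [0, 0, 1, 0]])

# made-up camera: a small yaw plus a translation
c, s = np.cos(0.3), np.sin(0.3)
view = np.array([[c, 0, -s, 0],
                 [0, 1, 0, 0],
                 [s, 0, c, -10],
                 [0, 0, 0, 1]])
proj = perspective(np.pi / 3, 16 / 9, 0.1, 100.0)

world_pos = np.array([2.0, -1.0, 25.0, 1.0])
clip = proj @ view @ world_pos
screen = clip[:3] / clip[3]                 # the xy + depth the G-buffer sees

# reconstruction: screen-space position through the inverse ViewProjection
w4 = np.linalg.inv(proj @ view) @ np.append(screen, 1.0)
reconstructed = w4[:3] / w4[3]
assert np.allclose(reconstructed, world_pos[:3], atol=1e-6)
```

The cost that later posts try to remove is exactly the full 4x4 multiply and the divide by w.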

##### Share on other sites
Well, in the general case of an arbitrary view matrix, getting back to world space is going to involve something of the complexity of a matrix multiply. If you're willing to live with view space, though, you can simply use your second example but factor out the matrix multiply to exploit all of the zeros in the inverse projection matrix (which has similar sparsity to the projection matrix, i.e. only about five non-zero elements). So you should be able to do it with something like four multiplies and a MADD for a typical perspective projection matrix, plus the divide by w of course.
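To illustrate the factoring, here is a small NumPy sketch that unprojects using only the handful of non-zero projection terms instead of a full inverse matrix (the projection parameters and test point are made up; D3D-style z in [0, 1] is assumed):

```python
import numpy as np

# made-up projection parameters; only these non-zero terms are needed
fov_y, aspect, zn, zf = np.pi / 3, 16 / 9, 0.1, 100.0
t = 1.0 / np.tan(fov_y / 2)
p00, p11 = t / aspect, t                     # x/y scale terms
a, b = zf / (zf - zn), -zn * zf / (zf - zn)  # z terms (D3D z in [0, 1])

view_pos = np.array([-5.47, -1.0, 14.47])    # made-up view-space point

# forward projection to NDC (w = view-space z)
ndc = np.array([p00 * view_pos[0], p11 * view_pos[1],
                a * view_pos[2] + b]) / view_pos[2]

# factored unproject: a few multiplies plus the divide, no matrix inverse
z = b / (ndc[2] - a)
recon = np.array([ndc[0] * z / p00, ndc[1] * z / p11, z])
assert np.allclose(recon, view_pos)
```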


##### Share on other sites
Thanks AndyTX for getting back to me regarding this.

##### Share on other sites
Quote:
 Original post by wolf:

[code]
// vertex shader
outEyeToScreen = float3(Input.ScreenPos.x * ViewAspect, Input.ScreenPos.y, invTanHalfFOV);

// pixel shader
float3 PixelPos = normalize(Input.vEyeToScreen) * G_Buffer.z;
[/code]

I just use the corresponding far-corner as the outEyeToScreen, then the length of it is guaranteed to be the far clip distance - in such a case, you can get rid of the normalize() and just use eyeToScreen * depth * oneOverFarClipDistance;

##### Share on other sites
Hey agi_shi,
this sounds cool. Can you provide source or pseudo code?
I did not understand what you mean.

- Wolf

##### Share on other sites
Wolf, as you mentioned, in the article I first computed the eye vector in view space and then, in the pixel shader, just multiplied by the depth value to get the original view-space position.

Luckily, retrieving the world-space position is very similar: you can either do as you point out and multiply the view position by the inverse view matrix, which places a burden on the pixel shader, or you can move that math into the vertex shader.

So in the VS:

[code]
outEyeToScreen = float3(p.x * TanHalfFOV * ViewAspect, p.y * TanHalfFOV, 1);
outWorldEye = mul(outEyeToScreen, (float3x3)matViewInv);
[/code]

and in the PS:

[code]
float3 WorldPos = vWorldEye * depth + EyePos;
[/code]

That way in the pixel shader you just need to perform a single mad in order to compute the world space position.

(Be aware that I changed the EyeRay formula a bit to avoid the normalize in the PS. Also, the depth value is not computed from length(ViewPos) but from ViewPos.z, which is also faster to compute.)
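A CPU-side sketch of this reconstruction (rotation, eye position, and test point are made up; row/column conventions differ from the HLSL above):

```python
import numpy as np

tan_half, aspect = np.tan(np.pi / 6), 16 / 9
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0, -s],                   # made-up world-to-view rotation
              [0, 1, 0],
              [s, 0, c]])
eye = np.array([1.0, 2.0, -3.0])            # made-up camera position

world_pos = np.array([4.0, -2.0, 20.0])
view_pos = R @ (world_pos - eye)            # stored depth is view_pos[2], not length()

# "vertex shader": eye ray from the NDC position, rotated into world space
ndc_x = view_pos[0] / (view_pos[2] * tan_half * aspect)
ndc_y = view_pos[1] / (view_pos[2] * tan_half)
ray_view = np.array([ndc_x * tan_half * aspect, ndc_y * tan_half, 1.0])
ray_world = R.T @ ray_view                  # inverse of a pure rotation is its transpose

# "pixel shader": a single mad
recon = ray_world * view_pos[2] + eye
assert np.allclose(recon, world_pos)
```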

If that doesn't work for you, I can post the HLSL source I'm using to perform point lighting in deferred shading, which also gets the depth value from the Z-buffer instead of from the G-buffer and computes the ScreenPos from the light-volume positions, so you don't need a vertex format with Position + Texcoord1.

Hope it helps.

[Edited by - fpuig on September 1, 2008 12:29:28 PM]

##### Share on other sites
Quote:
 Original post by wolf:
 Hey agi_shi, this sounds cool. Can you provide source or pseudo code? I did not understand what you mean. - Wolf

I got the idea for the exact method from MJP, but basically it goes like this:

- store view-space far-plane corners in normals attribute (or whatever) of full-screen quad
[code]
// top-left
position(vec3(-1, 1, 0)); normal(topLeftFarCorner);
// bottom-left
position(vec3(-1, -1, 0)); normal(bottomLeftFarCorner);
// bottom-right
position(vec3(1, -1, 0)); normal(bottomRightFarCorner);
// top-right
position(vec3(1, 1, 0)); normal(topRightFarCorner);
[/code]

- use this as the 'eyeToScreen' or 'screenToEye' ray
ray = gl_Normal;

- store unclamped/unnormalized depth
gl_FragColor = vec4(length(viewSpacePosition.xyz), 0, 0, 0);

- retrieve view-space position
vec3 viewSpacePosition = normalize(ray) * depth;

- optimize the normalize() since we know that length(ray) == camera far clip distance
vec3 viewSpacePosition = ray * depth / farClipDistance;

or
vec3 viewSpacePosition = ray * depth * oneOverFarClipDistance;

- OR, instead of normalizing in the screen-space shader, you can normalize the depth before storing it
gl_FragColor = vec4(length(viewSpacePosition.xyz) / farClipDistance, 0, 0, 0);

and since the depth is already normalized
vec3 viewSpacePosition = ray * depth;
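For what it's worth, a numeric check of the normalize(ray) * depth form, which is exact for a point along the stored ray (numbers made up; note the /farClipDistance shortcut additionally assumes length(ray) equals the far clip distance):

```python
import numpy as np

far = 100.0
tan_half, aspect = np.tan(np.pi / 6), 16 / 9
corner = np.array([tan_half * aspect, tan_half, 1.0]) * far  # top-right far corner

view_pos = corner * 0.37                  # a made-up point along this pixel's ray
depth = np.linalg.norm(view_pos)          # stored: length(viewSpacePosition)
recon = corner / np.linalg.norm(corner) * depth   # normalize(ray) * depth
assert np.allclose(recon, view_pos)
```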

##### Share on other sites

fpuig: I think a lot of people here would be interested in seeing your source code :-)

##### Share on other sites
Yeah...interpolating the position of the frustum corners and multiplying with normalized view-space Z is still the fastest way I know of getting a view-space position. If you use the frustum corners in world-space and add the camera position after multiplying depth * frustumCorner, you get world-space (as in fpuig's code). Unfortunately this requires view-space Z divided by the camera's farZ, so if you're not manually laying out a depth buffer you'd have to do some conversion to get this value from a regular Z-buffer.

BTW I should note that I originally got the technique from this presentation by Carsten Wenzel.

##### Share on other sites
Just a quick note, rendering the reference plane at Z=1 instead of the far plane will allow you to avoid divisions and normalizations all together.
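A tiny numeric illustration of the z = 1 trick (test point made up):

```python
import numpy as np

view_pos = np.array([-3.0, 2.5, 14.0])    # made-up view-space point
ray = view_pos / view_pos[2]              # ray on the z = 1 reference plane
depth = view_pos[2]                       # store plain view-space z
recon = ray * depth                       # one multiply: no divide, no normalize
assert np.allclose(recon, view_pos)
```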

##### Share on other sites
Quote:
 Yeah...interpolating the position of the frustum corners and multiplying with normalized view-space Z is still the fastest way I know of getting a view-space position.

Yes, performing the frustum interpolation is the fastest way, and it is the way games like Crysis go. However, that technique is very restricted. As far as I can see it can only be applied to full-screen quads, because you need to set the corner rays manually. Also, different view aspects (which happen often when changing the viewport for different render targets), or changing the field-of-view angle for a sniper zoom, will require recreating the full-screen quad's texcoords to match the new camera settings.
To perform volumetric effects more efficiently you need to render meshes enclosing the simulated volume (e.g. omni sphere, spot cone, etc.), so computing the frustum corners for each vertex in the volume is not a good idea.

A more general approach is to compute the EyeRay in the vertex shader from the camera settings (FOV, ViewAspect). The operations in the pixel shader are the same as in the frustum-corners technique. But since the vertex shader is rarely the bottleneck, a bit of extra math in order to be more general will not hurt. (Besides, it is just a couple of operations more.)

The code below shows a shader that computes the world-space position of the pixel from a depth map. It was originally used for my omni lighting, but I removed the lighting equations so it is easier to understand. Instead of needing a full-screen quad, you can render a sphere that only has positions in its vertex format.
From the vertex positions, the code computes the vPos and the EyeRay. If your target hardware is SM3 you can use the VPOS semantic, but as I want to stay in SM2, it is easy enough to compute it manually.

As the vPos and EyeRay are extracted from the projected vertex position, the equations should be divided by Out.Position.w; however, as that doesn't work with the raster interpolation, I moved the division into the pixel shader.

[code]
//////////////////////////////////////////
// Engine Variables
/////////////////////////////
float4x4 matWorldViewProj : WorldViewProjection;
float4x4 matWorld         : World;
float4x4 matView          : View;

float3 EyePos             : CameraPosition;
float  TanHalfFOV         : TanHalfFOV;
float  ViewAspect         : ViewAspect;
float2 InvScreenDim       : InvScreenDimensions;

texture tRT_Depths : RT_Depths;
sampler RT_Depths = sampler_state {
    texture = <tRT_Depths>;
    MIPFILTER = POINT; MINFILTER = POINT; MAGFILTER = POINT;
};

struct VS_INPUT
{
    float4 Position : POSITION;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float3 LightPos : TEXCOORD0;
    float4 vPos     : TEXCOORD1;
    float3 vEyeRay  : TEXCOORD2;
};

float4 ConvertToVPos( float4 p )
{
    return float4( 0.5*( float2(p.x + p.w, p.w - p.y) + p.w*InvScreenDim.xy), p.zw);
}

float3 CreateEyeRay( float4 p )
{
    float3 ViewSpaceRay = float3( p.x*TanHalfFOV*ViewAspect, p.y*TanHalfFOV, p.w);

    return mul( matView, ViewSpaceRay ); // or multiply by the ViewInverse in the normal order
}

VS_OUTPUT vs_main(VS_INPUT Input)
{
    VS_OUTPUT Output;

    Output.Position = mul( Input.Position, matWorldViewProj);

    Output.LightPos = mul( float4(0,0,0,1), matWorld ); // I could grab the matrix translation directly, but this translates to the same thing
    Output.vPos = ConvertToVPos( Output.Position );
    Output.vEyeRay = CreateEyeRay( Output.Position );

    return Output;
}

float4 ps_main(VS_OUTPUT Input) : COLOR
{
    float3 Depth = tex2Dproj(RT_Depths, Input.vPos);
    Input.vEyeRay.xyz /= Input.vEyeRay.z;
    float3 PixelPos = Input.vEyeRay.xyz * Depth + EyePos;

    float3 LightDir      = Input.LightPos - PixelPos;
    float SqrLightDirLen = dot(LightDir, LightDir);

    return SqrLightDirLen / 100.0; // show the scaled distance to the light pos
}

technique Example
{
    pass p0
    {
        VertexShader = compile vs_1_1 vs_main();
        PixelShader  = compile ps_2_0 ps_main();
    }
}
[/code]

The previous code can deal with ScreenQuads or Volume Meshes.
A description of how to get the vPos equation is presented here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=482654

The math for the EyeRay follows the same idea.

Hope it helps
Frank

##### Share on other sites
In such a case, can't you simply pass the top-right corner and multiply it by the output position?
[code]
// instead of this (assuming normals are the corners)
// ray = gl_Normal;
// you'd do this
ray = topRightFarCorner * vertexScreenPos;
[/code]
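This works because the corners differ only in the signs of x and y, so the component-wise product matches bilinear interpolation of the four corners; a quick numeric check (numbers made up):

```python
import numpy as np

far = 100.0
tan_half, aspect = np.tan(np.pi / 6), 16 / 9
top_right = np.array([tan_half * aspect, tan_half, 1.0]) * far

sx, sy = -0.3, 0.8                        # made-up clip-space vertex position
ray = top_right * np.array([sx, sy, 1.0]) # component-wise product

# bilinear interpolation of the four far-plane corners at (sx, sy)
corners = {(ix, iy): top_right * np.array([ix, iy, 1.0])
           for ix in (-1, 1) for iy in (-1, 1)}
u, v = (sx + 1) / 2, (sy + 1) / 2
interp = ((1 - u) * (1 - v) * corners[(-1, -1)] + u * (1 - v) * corners[(1, -1)]
          + (1 - u) * v * corners[(-1, 1)] + u * v * corners[(1, 1)])
assert np.allclose(ray, interp)
```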

##### Share on other sites
Ok, to be sure I went through the math of the position calculation again. fpuig's result is right :-) ... nevertheless I thought I'd post the proof (following the proof of this thread:

http://www.gamedev.net/community/forums/topic.asp?topic_id=482654
)

-----------------------------
Calculating screen space texture coordinates for the 2D projection of a volume is more complicated than for an already transformed full-screen quad. Here is a step-by-step approach on how to achieve this:

1. Transforming the position into projection space is done in the vertex shader by multiplying by the concatenated World-View-Projection matrix.

2. The Direct3D runtime will now divide those values by the view-space Z, which is stored in the W component. The resulting position is then in clipping space, where the x and y values are clipped to the [-1.0, 1.0] range.

xclipspace = xproj / wproj
yclipspace = yproj / wproj

3. Then the Direct3D run-time transforms position into viewport space from the value range [-1.0, 1.0] to the range [0.0, ScreenWidth/ScreenHeight].

xviewport = xclipspace * ScreenWidth / 2 + ScreenWidth / 2
yviewport = -yclipspace * ScreenHeight / 2 + ScreenHeight / 2

This can be simplified to:

xviewport = (xclipspace + 1.0) * ScreenWidth / 2
yviewport = (1.0 - yclipspace ) * ScreenHeight / 2

The result represents the position on the screen. The y component needs to be inverted because in world / view / projection space it increases in the opposite direction to screen coordinates.

4. Because the result should be in texture space and not in screen space, the coordinates need to be transformed from clipping space to texture space. In other words from the range [-1.0, 1.0] to the range [0.0, 1.0].

u = (xclipspace + 1.0) * 1 / 2
v = (1.0 - yclipspace ) * 1 / 2

5. Due to the texel-to-pixel mapping used by Direct3D, we need to adjust the texture coordinates by half a texel:

u = (xclipspace + 1.0) * ½ + ½ / TargetWidth
v = (1.0 - yclipspace ) * ½ + ½ / TargetHeight

Plugging in the x and y clip-space coordinates from step 2:

u = (xproj / wproj + 1.0) * ½ + ½ / TargetWidth
v = (1.0 - yproj / wproj ) * ½ + ½ / TargetHeight

6. Because the final calculation of this equation should happen in the vertex shader, the results will be sent down through the texture-coordinate interpolator registers. Interpolating 1/wproj is not the same as 1/(interpolated wproj). Therefore the term 1/wproj needs to be factored out and applied in the pixel shader.

u = 1/wproj * ((xproj + wproj) * ½ + ½ * wproj / TargetWidth)
v = 1/wproj * ((wproj - yproj) * ½ + ½ * wproj / TargetHeight)

The vertex shader source code looks like this:

float4 vPos = float4(0.5 * (float2(p.x + p.w, p.w - p.y) + p.w * InvScreenDim.xy), p.zw);
-----------------------------
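The derivation can be checked numerically; the following sketch compares the vertex-shader expression (after tex2Dproj's divide by w) against the step-5 texture coordinates (resolution and clip-space point made up):

```python
import numpy as np

inv_dim = np.array([1.0 / 1280, 1.0 / 720])   # made-up render-target size

def convert_to_vpos(p):
    # float4(0.5 * (float2(p.x + p.w, p.w - p.y) + p.w * InvScreenDim.xy), p.zw)
    xy = 0.5 * (np.array([p[0] + p[3], p[3] - p[1]]) + p[3] * inv_dim)
    return np.array([xy[0], xy[1], p[2], p[3]])

p = np.array([0.4, -0.2, 0.9, 1.5])           # made-up clip-space position
uv = convert_to_vpos(p)[:2] / p[3]            # tex2Dproj's divide by w

ndc = p[:2] / p[3]
expected = np.array([(ndc[0] + 1) / 2 + 0.5 * inv_dim[0],   # step 5, u
                     (1 - ndc[1]) / 2 + 0.5 * inv_dim[1]])  # step 5, v
assert np.allclose(uv, expected)
```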

Ok now I sit down and think about the rest of the equation ....

##### Share on other sites
Hi,

I know this is an old topic, but...

You presented the best way you have found to create vPos.
But I wondered if you could explain the best way you have found to reconstruct the world position from this vPos, and how you store your depth (linear, non-linear, ...).


##### Share on other sites
[quote name='fpuig' timestamp='1220865075' post='4309902']
[indent]Quote:
Yeah...interpolating the position of the frustum corners and multiplying with normalized view-space Z is still the fastest way I know of getting a view-space position.
[/indent]

Yes, performing the frustum interpolation is the fastest way and is the way that games like Crysis goes. [...]
[/quote]

Hi.

I followed your instructions, but it doesn't work for me.
My point lighting worked with non-linear depth storage (position.z / position.w), but I wanted a linear depth (view-space depth).
First, I changed my gbuffer code:
[code]
//gbuffer.vsh

float4 wPos = mul(Position, world);
float4 vPos = mul(wPos, view);
Position = mul(vPos, proj);

Depth = length(vPos);

//gbuffer.psh
Depth = DepthIn; //write to the depth target
[/code]

[code]

float4x4 world;
float4x4 view;
float4x4 viewProj;

float tanHalfFOV;
float viewAspect;
float2 invScreenDim;

float4 ConvertToVPos(
in float4 p)
{
return float4(0.5f * (float2(p.x + p.w, p.w - p.y) + p.w * invScreenDim.xy), p.zw);
}

float3 CreateEyeRay(
in float4 p)
{
float3 viewSpaceRay = float3(p.x * tanHalfFOV * viewAspect, p.y * tanHalfFOV, p.w);

return mul(view, viewSpaceRay);
}

void main(
inout float4 Position : POSITION0,
out float4 VPos : TEXCOORD0,
out float3 EyeRay : TEXCOORD1)
{
float4 wpos = mul(Position, world);
Position = mul(wpos, viewProj);

VPos = ConvertToVPos(Position);
EyeRay = CreateEyeRay(Position);
}

sampler2D samplerNormal : register(s2);
sampler2D samplerDepth : register(s3);

float3 eyePosition;

float3 position;
float3 color;
float radius;
float intensity;

void main(
in float4 VPos : TEXCOORD0,
in float3 EyeRay : TEXCOORD1,

out float4 Output : COLOR0)
{
//textures
float depthText = tex2Dproj(samplerDepth, VPos);
EyeRay.xyz /= EyeRay.z;

//get position
float3 wposition = EyeRay * depthText + eyePosition;

float3 lightVector = position - wposition;
float attenuation = saturate(1.0f - length(lightVector) / radius);

lightVector = normalize(lightVector);
float diffuse = dot(normal, lightVector) * attenuation * intensity;

Output.xyz = diffuse * color;
Output.w = 1.0f;
}
[/code]
But something is wrong... as I move the camera, the point light changes... can anyone help me?
Thanks

EDIT:
Of course I compute normals, etc.; that part is fine.

EDIT 2:
Oh, here, I found the solution, lol:
[url="http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/"]http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/[/url]
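For anyone landing here later: one mismatch in the code above appears to be that the G-buffer stores length(vPos) while the pixel shader scales a z-normalized ray, which expects plain view-space z (as fpuig noted earlier). A small numeric sketch of the difference (numbers made up):

```python
import numpy as np

view_pos = np.array([-5.0, 3.0, 18.0])    # made-up view-space point
ray = view_pos * 2.7                      # interpolated eye ray, arbitrary scale

# storing length(viewPos) with a z-normalized ray lands on the wrong point
wrong = (ray / ray[2]) * np.linalg.norm(view_pos)
assert not np.allclose(wrong, view_pos)

# storing plain view-space z makes the same pixel-shader math exact
right = (ray / ray[2]) * view_pos[2]
assert np.allclose(right, view_pos)
```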