View-space position reconstruction from DepthBuffer

Hi all,

Currently I'm trying to get view-space position reconstruction from the hardware depth buffer working.
I'm making use of the snippet from MJP's article at http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

In my G-Buffer MRT pass shader I do:


struct PS_INPUT
{
    float4 Position : SV_Position; // clip-space position
    float4 vColor : COLOR0;
    float2 vTexUV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float DofBlurFactor : COLOR1;
    float2 DepthZW : TEXCOORD2; // clip-space z and w for manual depth export
};

struct PS_MRT_OUTPUT
{
    float4 Color : SV_Target0;
    float4 NormalRGB_DofBlurA : SV_Target1;
    float Depth : SV_Depth; // manually exported depth
};

PS_INPUT VS( VS_INPUT input, uniform bool bSpecular )
{
    PS_INPUT output;
    ...
    output.Position = mul( input.Position, WorldViewProjection );
    output.DepthZW.xy = output.Position.zw; // pass post-projection z and w along
    ...
    return output;
}

PS_MRT_OUTPUT PS( PS_INPUT input, uniform bool bTexture )
{
    PS_MRT_OUTPUT output = (PS_MRT_OUTPUT)0;
    ...
    output.Depth = input.DepthZW.x / input.DepthZW.y; // perspective divide: z/w
    ...
    return output;
}


To reconstruct the view-space position in my fullscreen post-processing shader, I use the following function (which mostly consists of the code found in MJP's article):

float3 getPosition(in float2 uv)
{
    // Get the depth value for this pixel
    float z = SampleDepthBuffer(uv);

    float x = uv.x * 2 - 1;
    float y = (1 - uv.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);

    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, InverseProjection);

    // Divide by w to get the view-space position
    vPositionVS.z = -vPositionVS.z;
    return vPositionVS.xyz / vPositionVS.w;
}
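
For reference, SampleDepthBuffer isn't defined in the snippet; here is a minimal sketch of what it could look like in D3D10/11, assuming the depth buffer is bound as a shader resource view (the texture and sampler names are placeholders of mine, not from the original code):

Texture2D<float> DepthTexture : register(t0);   // depth buffer bound as SRV
SamplerState PointClampSampler : register(s0);  // point filtering, clamp addressing

float SampleDepthBuffer(float2 uv)
{
    // Sample with an explicit mip level; point filtering matters here,
    // since interpolating depth across geometry edges yields values
    // that belong to no actual surface.
    return DepthTexture.SampleLevel(PointClampSampler, uv, 0).x;
}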


Interestingly, I have to invert the Z component of the reconstructed position. That makes sense, though, since the positive Z axis in my application points into the screen, whereas it points out of the screen in a default right-handed coordinate system. A small sketch of the convention mismatch follows below.
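A minimal illustration, assuming the projection was built right-handed (e.g. with D3DXMatrixPerspectiveFovRH), so that points in front of the camera have negative view-space z:

// After the homogeneous divide, a point n units ahead of the camera
// sits at z = -n in right-handed view space; flipping the sign moves
// it into a left-handed convention where +z points into the screen.
float3 posVS = vPositionVS.xyz / vPositionVS.w; // right-handed: posVS.z < 0
posVS.z = -posVS.z;                             // left-handed:  posVS.z > 0

Negating before or after the divide is equivalent, since the sign passes straight through the division by w.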
Still, the reconstructed view-space position shown in the screenshot below is wrong...

[screenshot: the incorrect view-space position reconstruction]

... you can see that the position reconstruction is at least partially working, but it should look more like this ...

[screenshot: the expected, nearly correct result]

The biggest visual difference is that the background isn't black in my version, which suggests something is wrong with the depth.
I'd be thankful for any hints if someone has an idea what could be wrong.

[edit]: It looks like the expression "return vPositionVS.xyz / vPositionVS.w;" performs a division by zero for background pixels, which is obviously no good, but why does this happen?
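As a hedged aside on the mechanism (this assumes a standard D3D-style perspective matrix with near plane n and far plane f, which the post doesn't confirm): unprojecting the maximum-depth clip-space point (x, y, 1, 1) through the inverse projection yields a homogeneous coordinate w' = 1/f. The divide therefore blows up for very large far planes and becomes an exact division by zero with an infinite far-plane projection; either way, pixels still at the depth clear value of 1 correspond to no rendered surface.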

Many thanks,
Regards
Hi folks,
we finally got the view-space position reconstruction to work. The solution was obvious, but it doesn't look that nice in the shader code.


In the G-Buffer pass shader we removed the following lines,

output.DepthZW.xy = output.Position.zw; // in the vertex shader
output.Depth = input.DepthZW.x / input.DepthZW.y; // in the pixel shader

because the hardware depth buffer already stores this value for us, so we don't need to recalculate it.
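
For clarity, this is roughly what the trimmed G-Buffer output looks like after the change (member names taken from the first post; treat it as a sketch rather than the exact code):

struct PS_MRT_OUTPUT
{
    float4 Color : SV_Target0;
    float4 NormalRGB_DofBlurA : SV_Target1;
    // no SV_Depth member: the rasterizer writes z/w to the bound
    // depth buffer automatically
};

As a side benefit, not writing SV_Depth from the pixel shader lets the hardware keep its early depth-test optimizations enabled.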


In the post-process fullscreen pass we changed the following:

float3 getPosition(in float2 uv)
{
    // Get the depth value for this pixel
    float z = SampleDepthBuffer(uv);

    // solution - discard texels still at the clear value (depth == 1),
    // i.e. background pixels that no geometry has written to
    if (z == 1.0)
        return float3(0, 0, 0);

    float x = uv.x * 2 - 1;
    float y = (1 - uv.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);

    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, InverseProjection);

    // Divide by w to get the view-space position
    vPositionVS.z = -vPositionVS.z;
    return vPositionVS.xyz / vPositionVS.w;
}

Before rendering the scene we clear the depth-stencil buffer to 1. If the scene doesn't fill the whole screen, some depth values stay at 1, and the view-space position reconstruction produces (meaningless) values for those background pixels too.

Discarding these values gives us the desired, static background. Another way would be to change the depth comparison function, but in my opinion neither option looks nice.
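
For comparison, here is a sketch of one alternative (an assumption on my part, not from the thread): guard the homogeneous divide itself instead of testing for the exact clear value, so any pixel whose unprojected w is near zero gets rejected:

// Inside getPosition, replacing the z == 1.0 test; the threshold
// is scene-dependent and would need tuning.
float4 vPositionVS = mul(vProjectedPos, InverseProjection);
vPositionVS.z = -vPositionVS.z;
if (abs(vPositionVS.w) < 1e-5f)   // background / degenerate pixels
    return float3(0, 0, 0);
return vPositionVS.xyz / vPositionVS.w;

Yet another common option is to make sure every pixel gets valid depth in the first place, e.g. by rendering a skybox or a full-screen far-plane quad last.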

How do you (or would you) handle that problem when using the hardware depth buffer for view-space position reconstruction?