simotix

Depth Value reconstruction, why?


I implemented a DirectX11 deferred renderer, but it seems I may not have done it the recommended way (at least for my depth value).

Everything I read points to http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/ as the way to reconstruct position from depth, but I do not see what the main advantage is over the way I am doing it. Currently, when I render my G-Buffer, I calculate my depth in the vertex shader as

output.Position = mul( input.Position, World );
output.Position = mul( output.Position, View );
output.Position = mul( output.Position, Projection );
output.Depth.x = output.Position.z;
output.Depth.y = output.Position.w;

then in my pixel shader for rendering my G-Buffer I do

output.Depth = input.Depth.x / input.Depth.y; // output.Depth is a float4 written to a render target

Then when I go to use my depth buffer, I will do the following in a lighting shader, such as a point light

float depthVal = depthMap.Sample( sampPointClamp, texCoord ).r;
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
position = mul(position, InvertViewProjection);
position /= position.w;

There seem to be a lot of resources on this technique, but I am just not sure what the main advantage is. Could someone please explain?

I go into detail in that article, but the main focus points are reducing arithmetic operations and improving precision. The math reduction is just some algebra and trig, which I won't go into here. With regards to precision, z/w is *very* non-linear. Consequently it has a non-linear distribution of precision (which can be made worse if stored in a floating-point texture), which can make a linear depth metric more desirable.

On top of that, you can do the reconstruction with less math with a linear depth value, so if you're going to manually store depth in a render target then there's really no good reason to use z/w. Obviously sampling the depth buffer is desirable in a lot of scenarios, in which case you can't choose how you store depth. But in that case you can still do a bit better than doing a full reverse projection. In the long run the small amount of extra math might be totally inconsequential... it depends on your needs and your target hardware. On older/weaker hardware or consoles, every shader cycle can count.
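
For comparison, here is a minimal sketch of the cheaper reconstruction path being described, assuming the G-Buffer stores view-space depth divided by the far clip distance (depthMap, sampPointClamp, FarClip and input.PositionVS are reused from code elsewhere in this thread; this is an illustration, not the article's exact code):

float  linearDepth = depthMap.Sample( sampPointClamp, texCoord ).r;            // PositionVS.z / FarClip written in the G-Buffer pass
float3 viewRay     = float3( input.PositionVS.xy / input.PositionVS.z, 1.0f ); // ray through this pixel, rescaled so z == 1
float3 positionVS  = viewRay * ( linearDepth * FarClip );                      // view-space position: no inverse view-projection, no divide by w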

I have been reading your articles and looking at the code in PRTest and I have a few questions.

1) In http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/ you mention "You can figure out this direction vector by using the screen-space XY position of the pixel to lerp between the positions of the frustum corners, subtracting the camera position, and normalizing". Why normalize?

2) What are "positionOS" and "positionCS"? Are they "object space" and "camera space"?

3) In your PRTest example, which pixel shader code should I be looking at? I am confused by the different reconstruction types, but I figured they were like that because of the different render target formats for the depth buffer. I am using a DXGI_FORMAT_R24G8_TYPELESS texture.


1.) You want to normalize so that you have a unit-length direction vector. Basically you're storing the magnitude (the camera-to-surface distance) in your render target, and combining it with the direction and the camera position to get the actual position (see the sketch after these answers).

2.) "positionOS" is object-space position, and "positionCS" is clip-space position (position after being transformed by world * view * projection, before homogeneous divide-by-w)

3.) If you're sampling a depth buffer directly, then I think the PS function is called "PSReconstructPerspective" or something similar to that. It will be the one that uses the projectionA and projectionB constants to convert the depth buffer value into linear Z.
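
A minimal sketch of the idea in answer 1, assuming the vertex shader has already interpolated the far-plane frustum corner for this pixel (FrustumCornerWS, CameraPosWS and distanceMap are placeholder names, not identifiers from the PRTest sample):

float3 viewDirWS  = normalize( input.FrustumCornerWS - CameraPosWS );   // normalized direction from the camera through this pixel
float  distToSurf = distanceMap.Sample( sampPointClamp, texCoord ).r;   // camera-to-surface distance stored in the G-Buffer
float3 positionWS = CameraPosWS + viewDirWS * distToSurf;               // reconstructed world-space position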


I am still looking into this; I am reading nearly everything I can find to make sure I fully understand the topic. I do have a few more questions.


"In the pixel shader of the G-Buffer pass, calculate the distance from the camera to the surface being shaded and write it out to the depth texture."

As you said in your Depth 3 article, if doing this in view space then you don't have to subtract the camera position, because it would be (0, 0, 0). So is this why the calculation for the depth is just "PositionVS.z / FarClipDistance"?
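
For reference, a minimal sketch of the view-space variant being asked about here (the identifiers are placeholders, not the article's code):

// G-Buffer vertex shader: pass the view-space position along
output.PositionVS = mul( mul( float4( input.Position, 1.0f ), World ), View ).xyz;

// G-Buffer pixel shader: the camera sits at the origin in view space, so there is nothing to subtract
output.Depth = input.PositionVS.z / FarClip;   // linear depth in [0, 1]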

What do you end up using as your texture coordinate to sample your normal maps? Is it the same texture coordinate you calculate for sampling the depth buffer?

In your code sample, in your vertex shader you label the position as "PositionCS", but then in your pixel shader you don't have a "PositionCS", but rather a "PositionSS". Was there any reason for the name change?

I did a side-by-side comparison of what I had working (the traditional way) and the depth-value reconstruction approach. When reading the initial depth values they were the same, so that is not the issue (which means the texture coordinate is also not the issue).

That narrows it down to something very specific. I am not sure what it could be, though; looking at the example that MJP has on his website, mine looks nearly identical.

In my point light vertex shader, I calculate the clip-space and view-space positions by doing this

float4 inputPosition = float4(input.Position, 1.0f);

output.PositionCS = mul( inputPosition, World );
output.PositionCS = mul( output.PositionCS, View );

output.PositionVS = output.PositionCS.xyz;

output.PositionCS = mul( output.PositionCS, Projection );

then in my pixel shader, I reconstruct the view-space position like this

float depth = depthMap.Sample( sampPointClamp, texCoord ).r;             // value sampled from the depth buffer

float3 viewRay = float3(input.PositionVS.xy / input.PositionVS.z, 1.0f); // ray through this pixel, rescaled so z == 1

float ProjectionA = FarClip / (FarClip - NearClip);
float ProjectionB = -(FarClip * NearClip) / (FarClip - NearClip);

float linearDepth = ProjectionB / (depth - ProjectionA);                 // convert the depth buffer value to linear view-space Z
float3 positionVS = viewRay * linearDepth;                               // reconstructed view-space position

I know that ProjectionA/ProjectionB can be moved outside the shader; I just have them there for the time being. Does anyone have any suggestions as to why my depth value is wrong?
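
As a side note, a minimal sketch of what hoisting those two values out of the pixel shader could look like (the cbuffer name is an assumption; the values would be computed once by the application from the projection parameters):

cbuffer DepthReconstruction
{
    float ProjectionA;   //  FarClip / (FarClip - NearClip)
    float ProjectionB;   // -(FarClip * NearClip) / (FarClip - NearClip)
};

// ...then in the pixel shader:
float linearDepth = ProjectionB / (depth - ProjectionA);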

In order to help narrow the question down, I will start off with this.

The depth value I end up reading is the same if I do it the traditional way and if I do it with the reconstruction. By traditional, I mean doing it this way


VS Shader
output.Position = mul( input.Position, World );
output.Position = mul( output.Position, View );
output.Position = mul( output.Position, Projection );
output.Depth.x = output.Position.z;
output.Depth.y = output.Position.w;

PS Shader
output.Depth = input.Depth.x / input.Depth.y;


With the reconstruction I just do


VS Shader
output.PositionCS = mul( input.Position, World );
output.PositionCS = mul( output.PositionCS, View );

output.PositionVS = input.Position;

PS Shader
output.Depth = input.PositionVS.z / farClip;


Should I be getting the same value from both of these approaches?
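
For reference, with a standard D3D perspective projection the two quantities being compared here work out differently, which lines up with the earlier point that z/w is non-linear:

// post-projection z/w :  ProjectionA + ProjectionB / viewSpaceZ   (non-linear in viewSpaceZ)
// linear depth        :  viewSpaceZ / FarClip                     (linear in viewSpaceZ)
// where ProjectionA =  FarClip / (FarClip - NearClip)
//   and ProjectionB = -(FarClip * NearClip) / (FarClip - NearClip)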
