[XNA] Depth render issue

Dom_152    476
I'm trying to render the depth of a scene to a RenderTarget2D. It has a format of SurfaceFormat.Single. Here is the shader code I'm using to render out the depth: Vertex shader:
// Transform by world, view, projection
// ...
output.Depth = saturate(output.Position.z / output.Position.w);
return output;

Pixel shader:
float4 colour = float4(input.Depth, 0, 0, 1);
return colour;

The problem is that the output seems to flick between what I would perceive as correct and not. [screenshot] This looks correct: darker pixels nearer to the camera, lighter further away. But if I rotate the camera ever so slightly, it breaks. [screenshot] I have kept the near and far planes for the depth render as close together as I can to increase precision, but I'm really not sure what is going on here, and it is causing some of the effects that use the depth render as source data to fail sporadically. Any insights?

MJP    19754
You can't do the perspective divide in the vertex shader; the results can't be linearly interpolated, since the divide is a nonlinear operation. You need to pass z and w to the pixel shader and do the divide there.
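A minimal sketch of that fix in XNA-style HLSL (the `WorldViewProjection` parameter and the `DepthZW` output name are illustrative): the vertex shader passes z and w through untouched, and the divide happens per pixel after interpolation.

```hlsl
float4x4 WorldViewProjection;

struct VSOutput
{
    float4 Position : POSITION0;
    float2 DepthZW  : TEXCOORD0;  // post-projection z and w, interpolated linearly
};

VSOutput DepthVS(float4 position : POSITION0)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);
    output.DepthZW  = output.Position.zw;  // no divide here
    return output;
}

float4 DepthPS(VSOutput input) : COLOR0
{
    // Perspective divide done per pixel, where it is correct.
    float depth = input.DepthZW.x / input.DepthZW.y;
    return float4(depth, 0, 0, 1);
}
```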

However, if you're explicitly storing depth in a render target, you probably don't want to store post-perspective z/w at all. Its non-linear nature gives you an uneven distribution of precision, and you actually compound that problem if you store it in a floating-point buffer. You'd probably be better off storing a linear z value. See this for more details.
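Storing linear depth might look like the sketch below: normalize view-space z by the far-plane distance instead of writing z/w. The `WorldView` and `FarClip` parameter names are assumptions, not part of the original posts.

```hlsl
float4x4 WorldView;
float4x4 WorldViewProjection;
float    FarClip;  // distance to the far plane

struct VSOutput
{
    float4 Position : POSITION0;
    float  Depth    : TEXCOORD0;
};

VSOutput LinearDepthVS(float4 position : POSITION0)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);
    // View-space depth in [0, 1], distributed evenly between the planes.
    // Note: XNA's right-handed view matrices put visible geometry at
    // negative view-space z, so you may need to negate before dividing.
    output.Depth = mul(position, WorldView).z / FarClip;
    return output;
}

float4 LinearDepthPS(float depth : TEXCOORD0) : COLOR0
{
    return float4(depth, 0, 0, 1);
}
```

Because this value is linear in view space, it interpolates correctly across the triangle and spends the render target's precision evenly over the whole depth range.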

