[XNA] Depth render issue

1 comment, last by Bimble Bob 13 years, 12 months ago
I'm trying to render the depth of a scene to a RenderTarget2D with a format of SurfaceFormat.Single. Here is the shader code I'm using to render out the depth:

Vertex shader:

// Transform by world, view, projection
// ...
output.Depth = saturate(output.Position.z / output.Position.w);
return output;

Pixel shader:

float4 colour = float4(input.Depth, 0, 0, 1);
return colour;

The problem is that the output seems to flicker between what I would perceive as correct and not. [Screenshot: darker pixels nearer to the camera, lighter further away.] This looks correct: darker pixels nearer to the camera, lighter further away. But if I rotate the camera ever so slightly, the output breaks. [Screenshot: incorrect depth output.]

I have kept the near and far planes for the depth render as close together as I can to increase precision, but I'm really not sure what is going on here, and it is causing some of the effects that use the depth render as source data to fail sporadically. Any insights?
It's not a bug... it's a feature!
You can't do the perspective divide in the vertex shader: the results can't be linearly interpolated, since the divide is a non-linear operation. You need to pass z and w to the pixel shader and do the divide there.
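A minimal sketch of that fix, assuming the same vertex output struct as above with the saturate'd Depth replaced by a two-component interpolant (the name DepthZW is assumed):

```hlsl
// Vertex shader: pass raw clip-space z and w through, no divide here
output.DepthZW = output.Position.zw;   // float2 interpolant (name assumed)
return output;

// Pixel shader: do the perspective divide per pixel, after interpolation
float depth = input.DepthZW.x / input.DepthZW.y;
return float4(depth, 0, 0, 1);
```

The rasterizer linearly interpolates DepthZW across the triangle, and dividing per pixel then gives the correct perspective result at every fragment.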

However, if you're explicitly storing depth in a render target, you probably don't want to store post-perspective z/w. Its non-linear nature will give you an uneven distribution of precision, and you actually compound that problem if you store it in a floating-point buffer. You'd probably be better off storing a linear z value. See this for more details.
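One common way to store linear depth is view-space z scaled by the far plane. A sketch, assuming World and View matrices and a FarClip shader constant (names assumed):

```hlsl
// Vertex shader: take z in view space and normalize by the far plane
float4 viewPos = mul(mul(input.Position, World), View);
output.Depth = viewPos.z / FarClip;    // linear in view space
// NOTE: in a right-handed view space (XNA's default), view-space z is
// negative in front of the camera, so you may need -viewPos.z here.

// Pixel shader: write it straight into the Single-format target
return float4(input.Depth, 0, 0, 1);
```

This distributes precision evenly between the near and far planes, which also makes the stored value easier to reconstruct positions from later.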
Cheers for the tip. I had tried linear depth but I must've been doing it wrong. Got it working now, thanks!

This topic is closed to new replies.
