I'm trying to render the depth of a scene to a RenderTarget2D. It has a format of SurfaceFormat.Single.
Here is the shader code I'm using to render out the depth:
Vertex shader:
// Transform by world, view, projection
// ...
output.Depth = saturate(output.Position.z / output.Position.w);
return output;
Pixel shader:
float4 colour = float4(input.Depth, 0, 0, 1);
return colour;
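For context, here is a self-contained sketch of the full effect as I have it set up. Note that the matrix names, struct layout, and technique block are just my standard boilerplate and are assumptions beyond the snippets above; only the `Depth` lines are the code in question:

```hlsl
// Sketch of the full effect (assumed boilerplate around the
// Depth lines shown above; matrix/struct names are my own).
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float Depth : TEXCOORD0;   // interpolated per-vertex depth
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    // Transform by world, view, projection
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    // Normalised device depth, clamped to [0, 1]
    output.Depth = saturate(output.Position.z / output.Position.w);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Write depth into the red channel of the Single-format target
    return float4(input.Depth, 0, 0, 1);
}

technique RenderDepth
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
```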
The problem is that the output seems to flicker between what I would perceive as correct and incorrect:
This looks correct: darker pixels nearer the camera, lighter pixels further away.
But if I rotate the camera ever so slightly:
I have kept the near and far planes for the depth render as close together as I can to increase precision, but I'm really not sure what is going on here. It is causing some of the effects that use the depth render as source data to fail sporadically.
Any insights?
It's not a bug... it's a feature!