I am trying to recover positions and normals from a depth value stored in a render-to-texture (RTT). I write the depth into the render target like so:
VS:
outObjPosViewSpace = mul(worldView, inPos);
PS:
float depth = length(inObjPosViewSpace); // stored in the .w component of the RGBA texture
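For completeness, here is the depth-write pass expanded from the two lines above (a sketch: the `worldViewProj` matrix, struct, and semantic names are my additions; `worldView` and `inPos` are as in the snippet):

```hlsl
// Depth-write pass, expanded from the snippets above (D3D9-era HLSL).
float4x4 worldViewProj; // assumed name for the clip-space transform
float4x4 worldView;

struct VSOut
{
    float4 pos          : POSITION;
    float3 posViewSpace : TEXCOORD0;
};

VSOut DepthVS(float4 inPos : POSITION)
{
    VSOut o;
    o.pos          = mul(worldViewProj, inPos);
    o.posViewSpace = mul(worldView, inPos).xyz;
    return o;
}

float4 DepthPS(VSOut i) : COLOR
{
    // Distance from the eye (the eye sits at the origin in view space).
    float depth = length(i.posViewSpace);
    return float4(0, 0, 0, depth); // depth stored in .w, as above
}
```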
I then try to recover the position/normals like so:
float fDepth = tex2D(rtt, screenCoords).w;
float3 vPosition = eyePosW.xyz + viewRay * fDepth; // viewRay is unit length, eye to pixel
float3 vNormal = normalize(cross(ddy(vPosition.xyz), ddx(vPosition.xyz)));
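And the reconstruction pass in context (a sketch: I am assuming `viewRay` is the eye-to-pixel direction interpolated from the vertex shader, in which case it has to be re-normalized per pixel, since `fDepth` is a distance along that ray):

```hlsl
// Reconstruction pass. rtt holds the eye-to-surface distance in .w.
sampler2D rtt;
float4 eyePosW; // eye position, world space

float4 ReconstructPS(float2 screenCoords : TEXCOORD0,
                     float3 viewRay      : TEXCOORD1) : COLOR
{
    float fDepth = tex2D(rtt, screenCoords).w;

    // fDepth is a distance, so the ray must be unit length; interpolated
    // directions lose unit length and need re-normalizing per pixel.
    float3 ray = normalize(viewRay);
    float3 vPosition = eyePosW.xyz + ray * fDepth;

    // Face normal from screen-space derivatives. Note that ddx/ddy are
    // constant across each 2x2 pixel quad, so this normal is inherently
    // faceted; swapping the cross arguments only flips its sign.
    float3 vNormal = normalize(cross(ddy(vPosition), ddx(vPosition)));

    return float4(vNormal * 0.5 + 0.5, 1.0);
}
```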
The depth read back from the texture, and hence the recovered normals/positions, show banding/artifacts. See the attached images (I am trying to render a swimming pool and also to recover the pool-bottom position for caustics). I am unsure why this banding occurs.
I have tried/read several things. For example:
http://www.gamedev.net/topic/480031-world-space---view-space-transformation/
http://mynameismjp.wordpress.com/2010/03/
-It makes no difference whether I use a 16-bit or a 32-bit RGBA format for the RTT.
-Changing the filtering mode makes no difference either.
As a side note, I was playing around in Vision Engine's vForge editor, and when I debugged the normals/depth there by outputting the same values from the shader as in mine, I got similar artifacts and banding. I would assume Vision Engine's math is correct, since their deferred renderer is battle-tested.
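Roughly how I output the values for debugging (a sketch: the `showDepth` toggle is mine, and the `frac()` on a scaled depth is just a trick to make small quantization steps visible as hard bands; the scale factor is arbitrary):

```hlsl
// Debug visualization of the recovered depth/normals.
sampler2D rtt;
bool showDepth; // toggle between the two debug views

float4 DebugPS(float2 screenCoords : TEXCOORD0,
               float3 vNormal      : TEXCOORD1) : COLOR
{
    if (showDepth)
    {
        float fDepth = tex2D(rtt, screenCoords).w;
        // Quantization steps in the stored depth show up as hard bands.
        return float4(frac(fDepth * 0.1).xxx, 1.0);
    }
    // Remap normals from [-1,1] to [0,1] for display.
    return float4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```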