
psquare



Position and Normals from depth revisited (artifacts/banding)

07 May 2013 - 02:26 PM

I am trying to recover position and normals from depth stored in a render-to-texture. I store the depth in the render target like so:

VS:

    outObjPosViewSpace = mul(worldView, inPos);

PS:

    float depth = length(inObjPosViewSpace); // stored in the w component of the RGBA texture

 

I am trying to recover the normals/depth like so:

 

    float fDepth = tex2D(rtt, screenCoords).w;
    float3 vPosition = eyePosW.xyz + viewRay * fDepth;
    float3 vNormal = cross(normalize(ddy(vPosition.xyz)), normalize(ddx(vPosition.xyz)));
    vNormal = normalize(vNormal);
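One thing I am not sure about: since the stored depth is length(inObjPosViewSpace), i.e. a radial distance rather than linear Z, the reconstruction is only correct if viewRay is unit length at every pixel. Even if the ray is normalized in the vertex shader, interpolation across the triangle denormalizes it, so it would need to be re-normalized per pixel. A sketch of what I mean (viewRayW as an interpolated input is my naming, not from the code above):

    float fDepth = tex2D(rtt, screenCoords).w;
    float3 vRay = normalize(viewRayW);            // interpolation denormalizes the ray
    float3 vPosition = eyePosW.xyz + vRay * fDepth;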

 

The depth as read from the texture, and hence the recovered normals/positions, shows banding/artifacts. See the attached images (I am trying to render a swimming pool, and also to recover the pool-bottom position for caustics). I am unsure why this banding occurs.

 

[Attached screenshots: 2013_05_07_161315.png, 2013_05_07_161328.png, 2013_05_07_161658.png, 2013_05_07_161727.png]

 

I have tried and read several things, for example:

http://www.gamedev.net/topic/480031-world-space---view-space-transformation/

http://mynameismjp.wordpress.com/2010/03/

- It does not matter whether I use 16-bit RGBA or 32-bit RGBA for the RTT.
- Changing the filtering mode makes no difference.
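Based on the MJP post above, one alternative I considered is storing linear view-space Z instead of the radial distance, and reconstructing by scaling a ray to the far-clip plane. This sidesteps the ray-normalization issue entirely. A sketch, assuming names of my own (farClipDistance, and farRay as the interpolated view ray scaled so that its z component equals farClipDistance):

    // Depth pass PS: store normalized linear view-space depth in [0, 1]
    float depth = inObjPosViewSpace.z / farClipDistance;

    // Reconstruction pass PS: scale the far-plane ray by the stored depth;
    // the result is the view-space position (pos.z = far * (z / far) = z)
    float3 vPositionVS = farRay * tex2D(rtt, screenCoords).w;

The normals could then be derived from vPositionVS with the same ddx/ddy cross product as before.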
 

As a side note, I was playing around in Vision Engine's vForge editor, and when I debugged the normals/depth by outputting the same values from the shader as in my own code, I got similar artifacts and banding. I would assume that VE's math is correct, since their deferred renderer is battle-tested.

