Position and Normals from depth revisited (artifacts/banding)


I am trying to recover position and normals from depth stored in a render-to-texture. I store the depth like so:

 

VS:

    outObjPosViewSpace = mul(worldView, inPos);

PS:

    float depth = length(inObjPosViewSpace); // stored in the w component of the rgba texture
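For context, the whole store pass boils down to something like this (a simplified sketch, not my exact code; worldViewProj and the struct names are placeholders):

    // Depth-store pass (D3D9-style HLSL sketch).
    float4x4 worldView;
    float4x4 worldViewProj;

    struct VSOut
    {
        float4 pos          : POSITION;
        float3 posViewSpace : TEXCOORD0;
    };

    VSOut storeVS(float4 inPos : POSITION)
    {
        VSOut o;
        o.pos          = mul(worldViewProj, inPos);
        o.posViewSpace = mul(worldView, inPos).xyz;
        return o;
    }

    float4 storePS(float3 posViewSpace : TEXCOORD0) : COLOR
    {
        // Euclidean distance from the eye, not the view-space z.
        float depth = length(posViewSpace);
        return float4(0, 0, 0, depth); // rgb left free for other data
    }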

 

I am trying to recover the position/normals like so:

 

    float fDepth = tex2D(rtt, screenCoords).w;
    float3 vPosition = eyePosW.xyz + viewRay * fDepth;
    float3 vNormal = cross(normalize(ddy(vPosition.xyz)), normalize(ddx(vPosition.xyz)));
    vNormal = normalize(vNormal);
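Spelled out, the pass is roughly this (a simplified sketch; note that with the depth stored as a Euclidean distance rather than a z depth, the interpolated corner ray has to be unit length per pixel before scaling by it):

    sampler rtt;
    float3 eyePosW;

    float4 reconstructPS(float2 screenCoords : TEXCOORD0,
                         float3 viewRay      : TEXCOORD1) : COLOR
    {
        float fDepth = tex2D(rtt, screenCoords).w;

        // viewRay is interpolated across the full-screen quad, so it is no
        // longer unit length here; renormalize before scaling by the distance.
        float3 vPosition = eyePosW + normalize(viewRay) * fDepth;

        // Screen-space derivatives give two surface tangents; their cross
        // product is the face normal. ddx/ddy are constant per 2x2 pixel
        // quad, so some blockiness is inherent to this approach.
        float3 vNormal = normalize(cross(ddy(vPosition), ddx(vPosition)));

        return float4(vNormal * 0.5 + 0.5, 1);
    }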

 

The depth read back from the texture, and hence the recovered normals/positions, show banding/artifacts. See the images below (I am rendering a swimming pool and also trying to recover the pool-bottom position for caustics). I am unsure why this banding occurs.

 

http://postimg.org/image/9fgxqvqaz/ (2013_05_07_161315.png)
http://postimg.org/image/ybadedcyz/ (2013_05_07_161328.png)
http://postimg.org/image/wsez9e6ej/ (2013_05_07_161658.png)
http://postimg.org/image/lim9eg1d7/ (2013_05_07_161727.png)

 

I have tried and read several things, for example:

 

http://www.gamedev.net/topic/480031-world-space---view-space-transformation/

http://mynameismjp.wordpress.com/2010/03/


- It does not matter whether I use 16-bit or 32-bit rgba for the render target.
- Changing the filtering mode makes no difference.

As a side note, I was playing around in Vision Engine's vForge editor, and when I debugged the normals/depth by outputting the same values from its shaders as from mine, I got similar artifacts and banding. I would assume Vision Engine's math is correct, since its deferred renderer is battle tested.



I would just render the world-space positions and normals directly using multiple render targets, since you will probably want normal mapping anyway. If you need depth for a simple post effect, you can compute linear depth from the depth buffer given the camera's projection matrix (see the sketch below): http://www.gamedev.net/topic/604984-linear-depth-buffer-for-rendering-shadows/
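Roughly along these lines (a D3D9-style sketch, not tested code; the matrix element indices assume a standard perspective projection applied as mul(proj, pos), matching the mul() order in the posts above, so a transposed convention swaps them):

    // MRT G-buffer pass: write world-space position and normal directly.
    struct GBufferOut
    {
        float4 posW    : COLOR0; // world-space position
        float4 normalW : COLOR1; // world-space normal (normal mapping slots in here)
    };

    GBufferOut gbufferPS(float3 posW    : TEXCOORD0,
                         float3 normalW : TEXCOORD1)
    {
        GBufferOut o;
        o.posW    = float4(posW, 1);
        o.normalW = float4(normalize(normalW), 0);
        return o;
    }

    // Linear view-space z from post-projection depth and the projection matrix.
    float4x4 proj;
    sampler depthTex;

    float linearViewZ(float2 uv)
    {
        float zNdc = tex2D(depthTex, uv).r;    // post-projection depth in [0, 1]
        return proj._m23 / (zNdc - proj._m22); // solves zNdc = m22 + m23 / zView
    }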


I use Ogre. It is a render-to-texture render target with rgba components, each component a float32, so 4 x 32 = 128 bits per texel.
