
Position and Normals from depth revisited (artifacts/banding)



#1 psquare   Members   -  Reputation: 180


Posted 07 May 2013 - 02:26 PM

I am trying to recover position and normals from depth stored in a render-to-texture target. I store the depth like so:

 

VS:

    outObjPosViewSpace = mul(worldView, inPos);

PS:

    float depth = length(inObjPosViewSpace); // stored in the w component of the RGBA texture
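
Put together, the whole depth pass looks roughly like this (a sketch to show my setup; the worldViewProj uniform and the semantic names are illustrative, only the two lines above are verbatim):

    float4x4 worldView;      // object-to-view-space transform
    float4x4 worldViewProj;  // object-to-clip-space transform

    void DepthVS(float4 inPos : POSITION,
                 out float4 outClipPos         : POSITION,
                 out float3 outObjPosViewSpace : TEXCOORD0)
    {
        outClipPos         = mul(worldViewProj, inPos);
        outObjPosViewSpace = mul(worldView, inPos).xyz;
    }

    float4 DepthPS(float3 inObjPosViewSpace : TEXCOORD0) : COLOR
    {
        // Distance from the eye (the eye sits at the origin in view space)
        float depth = length(inObjPosViewSpace);
        return float4(0.0, 0.0, 0.0, depth);
    }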

 

I am trying to recover the position/normals like so:

 

    float fDepth = tex2D(rtt, screenCoords).w;
    float3 vPosition = eyePosW.xyz + viewRay * fDepth;
    float3 vNormal = cross(normalize(ddy(vPosition.xyz)), normalize(ddx(vPosition.xyz)));
    vNormal = normalize(vNormal);
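
For completeness, the same reconstruction as a self-contained pixel shader (again a sketch; the ReconstructPS name and semantics are illustrative, and the explicit normalize on viewRay reflects that the stored value is a Euclidean distance rather than a view-space z, so the interpolated ray has to be unit length):

    sampler2D rtt;     // the RGBA target written by the depth pass
    float3 eyePosW;    // camera position in world space

    float4 ReconstructPS(float2 screenCoords : TEXCOORD0,
                         float3 viewRay      : TEXCOORD1) : COLOR
    {
        // Distance from the eye, as written by the depth pass
        float fDepth = tex2D(rtt, screenCoords).w;

        // fDepth is a Euclidean distance, not a view-space z,
        // so the ray must be unit length after interpolation
        float3 vPosition = eyePosW + normalize(viewRay) * fDepth;

        // Screen-space derivatives give two surface tangents;
        // their cross product is the face normal
        float3 vNormal = normalize(cross(ddy(vPosition), ddx(vPosition)));

        return float4(vNormal * 0.5 + 0.5, 1.0);
    }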

 

The depth read back from the texture, and hence the recovered normals/positions, show banding/artifacts. See the images below (I am trying to render a swimming pool, and also to recover the pool-bottom position for caustics). I am unsure why this banding occurs.

 

[Four screenshots attached: banding in the recovered depth/normals on the water surface and pool bottom]

 

I tried/read several things. For example:

 

http://www.gamedev.net/topic/480031-world-space---view-space-transformation/

http://mynameismjp.wordpress.com/2010/03/


- It does not matter whether I use 16-bit or 32-bit RGBA for the RTT.
- Changing the filtering mode makes no difference.
 

As a side note, I was playing around in Vision Engine's vForge editor, and when I debugged the normals/depth by outputting the same values from its shaders as from mine, I got similar artifacts and banding. I would assume Vision Engine's math is correct, since its deferred renderer is 'battle tested'.


Edited by psquare, 07 May 2013 - 02:28 PM.



#2 Dawoodoz   Members   -  Reputation: 331


Posted 08 May 2013 - 02:08 AM

I would just render the world-space positions and normals directly using multiple render targets, since you might want normal mapping anyway. If you need the depth buffer only for a simple post effect, you can compute the linear depth from the camera projection matrix: http://www.gamedev.net/topic/604984-linear-depth-buffer-for-rendering-shadows/
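
For example, a minimal sketch of that linearization, assuming a D3D-style row-vector perspective projection (the _33/_43 element indices are an assumption and differ for other matrix conventions):

    float4x4 proj;  // the camera projection matrix

    // Recover view-space z from a hardware depth value in [0, 1].
    // For this convention, depth = proj._33 + proj._43 / zView,
    // so solving for zView gives:
    float LinearizeDepth(float zBuffer)
    {
        return proj._43 / (zBuffer - proj._33);
    }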


My open source DirectX 10/11 graphics engine. https://sites.google.com/site/dawoodoz

"My design pattern is the simplest to understand. Everyone else is just too stupid to understand it."


#3 psquare   Members   -  Reputation: 180


Posted 08 May 2013 - 03:52 AM

Thanks for your reply. However, my question is more about why this actually happens.



#4 Hodgman   Moderators   -  Reputation: 31843


Posted 08 May 2013 - 08:33 AM

How do you create your depth texture and render to it?

#5 psquare   Members   -  Reputation: 180


Posted 08 May 2013 - 08:49 AM

I use Ogre. It's a render-to-texture render target with RGBA components, each a 32-bit float, so 4 x 32 = 128 bits per texel.





