nullsquared

Calculating normals from depth map


EDIT: tried to clarify some things a bit.

I'm in a bit of a pickle here. I have a deferred renderer, and I want to reuse its normal render target in my SSAO. There's one problem: the normal-mapped details are not represented in the depth map. That is, lighting wants the final normal-mapped normals, but SSAO wants only the vertex-interpolated normals.

The obvious solution is to use a high-quality normal-mapped RT for the deferred renderer and a separate low-quality interpolated-normals RT for the SSAO. But since I'm already at the maximum of 4 RTs for my MRT setup, I had to render this extra RT in a separate pass. Any ideas?

The second thing that came to mind is to calculate these normals from my depth map (since normal mapping is not represented in the depth of the scene). However, how would I do this? I tried reconstructing the 3D positions of the texels to the right/left and top/bottom and crossing the two direction vectors, but this gave me very... random results: nothing like normals, more like random noise. I've heard that you can do something like this using the derivative instructions, but how?

All help appreciated [smile]

[Edited by - agi_shi on June 8, 2008 1:23:42 PM]
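
For reference, here's roughly what I tried. This is a minimal sketch, assuming a depth map and reconstruction via the inverse projection matrix; gDepthMap, gInvProj, gTexelSize, and the UV/NDC conventions are my placeholders, not anything standard:

[code]
float4x4  gInvProj;    // inverse projection matrix
float2    gTexelSize;  // 1 / render target resolution
sampler2D gDepthMap;

// Reconstruct a view-space position from a screen UV and the depth map.
float3 reconstructPosition(float2 uv)
{
    float depth = tex2D(gDepthMap, uv).r;
    // UV -> NDC; the y flip matches D3D-style texture coordinates.
    float4 ndc  = float4(uv.x * 2 - 1, 1 - uv.y * 2, depth, 1);
    float4 view = mul(ndc, gInvProj);
    return view.xyz / view.w;
}

// Central differences along screen x/y, crossed to get the normal.
// The cross-product order (and thus the normal's sign) depends on
// handedness, so it may need flipping.
float3 normalFromDepth(float2 uv)
{
    float3 dx = reconstructPosition(uv + float2(gTexelSize.x, 0))
              - reconstructPosition(uv - float2(gTexelSize.x, 0));
    float3 dy = reconstructPosition(uv + float2(0, gTexelSize.y))
              - reconstructPosition(uv - float2(0, gTexelSize.y));
    return normalize(cross(dx, dy));
}
[/code]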

Quote:
SSAO wants pure vertex normals, lighting wants normal-mapped normals.
This assumption is wrong.

1. SSAO does not need normals to work; just a depth buffer.
2. Even if it did, you can always pick the coordinate space you store them in, so just pick world or view space.

Quote:
Original post by wolf
Quote:
SSAO wants pure vertex normals, lighting wants normal-mapped normals.
This assumption is wrong.

I'm not assuming, I'm writing code [wink].
Quote:

1. SSAO does not need normals to work; just a depth buffer

Sure, if you want half of your samples to be garbage self-occlusion. Try shading a sphere with SSAO without some kind of normal information; heck, try a plane if you'd like.
Quote:

2. Even if it would, you can always pick the coordinate space you want to store them in. So just pick world- or view-space.

Both are in view space.

I'm saying that the scene depth map does not respect normal-mapped details, so you get incorrect results if you couple it with the final deferred-shading normal map instead of a normal map consisting of the interpolated vertex normals. I understand if my original post was a bit confusing; I realize I didn't explain myself well enough.

What if you use the normal mapped normals and restrict the SSAO sampling to a cone smaller than the full half space?
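
Roughly what I have in mind; just a sketch, where sampleDir is one of your existing random kernel directions and coneFactor is an illustrative knob, not from any particular implementation:

[code]
// Squeeze a unit sample direction toward the normal so every sample
// falls inside a cone narrower than the full hemisphere.
// coneFactor in (0, 1]: 1 keeps the full hemisphere, smaller values
// tighten the cone around the normal.
float3 restrictToCone(float3 sampleDir, float3 normal, float coneFactor)
{
    // Flip samples pointing into the surface, as with hemisphere SSAO.
    if (dot(sampleDir, normal) < 0)
        sampleDir = -sampleDir;

    // Pull the direction toward the normal, then renormalize.
    return normalize(lerp(normal, sampleDir, coneFactor));
}
[/code]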

Quote:
Original post by SnotBob
What if you use the normal mapped normals and restrict the SSAO sampling to a cone smaller than the full half space?


My SSAO is already local enough; restricting its range of influence any more would turn it into an edge-detection filter [grin]. Using the normal-mapped normals biases the random samples inconsistently between neighbouring pixels, resulting in very obvious gaps in the SSAO.

Quote:
Sure, if you want half of your samples to be garbage self-occlusion. Try shading a sphere with SSAO without some kind of normal information; heck, try a plane if you'd like.
This does not sound bad to me :-)

Seriously :-) now I understand what you mean. I would just change your SSAO approach so that it works with the normal-mapped normals ... raising the garbage level from 20 to 40 percent is OK ... at the end of the day it is all based on chaos :-), so you never know what is going to happen.

Just out of curiosity: how many texture fetches are you doing in your SSAO approach when you fetch the depth and normal maps?

Quote:
Original post by wolf
Quote:
Sure, if you want half of your samples to be garbage self-occlusion. Try shading a sphere with SSAO without some kind of normal information; heck, try a plane if you'd like.
This does not sound bad to me :-)

Seriously :-) now I understand what you mean. I would just change your SSAO approach so that it works with the normal-mapped normals ... raising the garbage level from 20 to 40 percent is OK ... at the end of the day it is all based on chaos :-), so you never know what is going to happen.


Am I missing some sarcastic comment in there?

It's impossible to reuse the normal-mapping normals, since they are not respected within the depth map. That's why I am asking whether anyone has a method to reconstruct the normals from the depth map, rather than being told to change my SSAO implementation (by people who, it seems, haven't implemented any SSAO of their own).

Quote:
Original post by wolf
Just out of curiosity: how many texture fetches are you doing in your SSAO approach when you fetch the depth and normal maps?


I do exactly 10 texture fetches in my SSAO for PS 2.0, and exactly 18 for PS 3.0.

EDIT: There's also a quick random plane sample, so that'd be exactly 11 fetches for PS 2.0 and exactly 19 fetches for PS 3.0.

The ddx() and ddy() functions in HLSL give you dz/dx and dz/dy, respectively. From there you can construct the vectors [1, 0, dz/dx] and [0, 1, dz/dy] and cross them to find the normal. However, I've heard that these functions don't behave very well along edges, so you'll need to be careful and see what kind of results you get.
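
In shader code the idea looks something like this; a minimal sketch of the construction described above, where linearDepth is assumed to be whatever depth value your SSAO already has, and the normal's sign and scale may need adjusting for your conventions:

[code]
// Build screen-space tangent vectors from the hardware depth
// derivatives and cross them to get a per-pixel normal.
float3 normalFromDepthDerivatives(float linearDepth)
{
    // ddx/ddy give the change in depth between adjacent pixels.
    float dzdx = ddx(linearDepth);
    float dzdy = ddy(linearDepth);

    // [1, 0, dz/dx] and [0, 1, dz/dy] treat one pixel as one unit
    // along x and y; you may need to rescale depth to match, and
    // possibly flip the result depending on handedness.
    float3 vx = float3(1, 0, dzdx);
    float3 vy = float3(0, 1, dzdy);
    return normalize(cross(vx, vy));
}
[/code]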

Alternatively, try to pack the interpolated normals in with some other data. Figure out the minimum amount of precision you'd need for decent SSAO quality, and stuff them in there with your per-pixel normals or something else that can afford a few bits.
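
For example, something like this; just a sketch of the packing idea, with 4 bits per component pulled out of thin air. Whether that's enough precision, and which G-buffer channel can spare the byte, depends on your setup:

[code]
// Pack a view-space vertex normal's x/y into one spare 8-bit channel
// (4 bits each); z is recovered assuming a camera-facing normal.
float packVertexNormal(float3 n)
{
    // Map x,y from [-1,1] to integers in [0,15], then combine.
    float2 q = floor(saturate(n.xy * 0.5 + 0.5) * 15 + 0.5);
    return (q.x * 16 + q.y) / 255.0;
}

float3 unpackVertexNormal(float packed)
{
    float v   = packed * 255.0;
    float2 q  = float2(floor(v / 16), fmod(v, 16)) / 15.0;
    float2 xy = q * 2 - 1;
    float z   = sqrt(saturate(1 - dot(xy, xy))); // camera-facing assumption
    return float3(xy, z);
}
[/code]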
