# Ambient Occlusion from Depth Map?


## Recommended Posts

Hello, does anyone here have any suggestions on how to do Ambient Occlusion from a depth map? I have been trying for a very long time to figure out an equation, but have not been able to. Currently I am using:

`Occlusion = 255 + ((LeftSample + RightSample + UpSample + DownSample) / 4) - CurrentPixel`

It does this for every pixel on the screen being displayed. I am not currently looking for a language specific answer more of just some sort of equation I could use on every pixel of a depth map to get the ambient occlusion. Perhaps someone could just get me headed in the right direction?
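To show what I mean, here is roughly what I'm doing per pixel (a plain-Python sketch of the formula above; the names are just mine):

```python
# Rough sketch of the per-pixel pass described above (plain Python; the
# names are illustrative). `depth` is a 2D list of 0-255 depth values.
def naive_occlusion(depth):
    h, w = len(depth), len(depth[0])
    out = [[255] * w for _ in range(h)]  # border pixels left unoccluded
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Average the four axis-aligned neighbours...
            neighbours = (depth[y][x - 1] + depth[y][x + 1] +
                          depth[y - 1][x] + depth[y + 1][x]) / 4
            # ...and compare against the centre pixel, as in the
            # Occlusion = 255 + average - CurrentPixel formula above.
            out[y][x] = 255 + neighbours - depth[y][x]
    return out
```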

##### Share on other sites

What you are trying to do is called screen space ambient occlusion (SSAO). There's a lot of resources out there for this, and more advanced variants of it. Here's a good starting point: http://www.iquilezles.org/www/articles/ssao/ssao.htm

##### Share on other sites

Thank you, both of those are pretty good answers. Do you think you could find one for SSAO, though, that just gives a simple equation for getting occlusion from an image? I made a program that loads a depth-map image and runs an algorithm on each pixel. What should the equation per pixel be?

##### Share on other sites

> Thank you, both of those are pretty good answers. Do you think you could find one for SSAO, though, that just gives a simple equation for getting occlusion from an image? I made a program that loads a depth-map image and runs an algorithm on each pixel. What should the equation per pixel be?

The reason there's no simple (at least, simpler than what's already been posted) equation is that the "equation per pixel" needs to, at a minimum, take into account a bunch of neighboring pixels as it needs to know not just the absolute depth but the depth relative to some localized area.

There's a perfectly good reason for this: it's called ambient occlusion precisely because you need to figure out how much light is prevented from reaching the target pixel (that is, how much is occluded); there's simply no way to know what light will be occluded unless you have some information about the neighboring geometry, as it's the neighboring geometry that's actually doing the occlusion.

Any additional complexity you perceive in the methods posted is probably a result of one of two things:

a) naively sampling all neighboring pixels at a wide enough radius is too slow for real-time performance, so the algorithms need some way to determine which pixels to sample to get a result that looks appealing. Usually this means picking some fairly arbitrary sample kernel and then changing it for each pixel so that any sampling error appears as high frequency noise.

b) to get something that even sort of looks like ambient occlusion, it's extremely valuable to think of it in terms of normals as well as just depth information. It takes a bit of additional calculation (derivatives) to reconstruct normals from just a depth map.
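As a rough sketch of point b) (and only a sketch: it treats the depth map as a simple height field and ignores the projection, which a real shader would have to account for), you can reconstruct a normal from central differences like this:

```python
import math

def normal_from_depth(depth, x, y):
    """Approximate a surface normal at (x, y) from a depth map using
    central differences. A sketch only: it treats depth as a height
    field and ignores the projection entirely."""
    dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
    dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
    # Cross product of the tangent vectors (1, 0, dzdx) and (0, 1, dzdy).
    nx, ny, nz = -dzdx, -dzdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

In a shader you would typically get the same derivatives almost for free (e.g. via ddx/ddy) or sample neighbouring texels, but the idea is identical.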

##### Share on other sites

Thanks! That made it a little clearer, and the equation I am currently using does take the neighboring pixels of the depth map into account. Do you know a way to do ambient occlusion with a normal buffer? Some more information would be nice.

##### Share on other sites

> Thanks! That made it a little clearer, and the equation I am currently using does take the neighboring pixels of the depth map into account. Do you know a way to do ambient occlusion with a normal buffer? Some more information would be nice.

This is a pretty solid article, but I'm not sure how much overlap there is with the two articles already posted as I only skimmed them.

I've found that the most straightforward way to think about it is that each occluding pixel works roughly like a "negative" point light that removes lighting instead of adding it. The key points (quoting directly from the article I linked above) are then:

• Distance “d” to the occludee.
• Angle between the occludee's normal "N" and the vector between occluder and occludee "V".

With these two factors in mind, a simple formula to calculate occlusion is: `Occlusion = max(0.0, dot(N, V)) * (1.0 / (1.0 + d))`
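Per occluder sample, that formula might look like this in code (a sketch; the names and tuple conventions are mine, not from the article):

```python
import math

def sample_occlusion(normal, occludee_pos, occluder_pos):
    """Occlusion contributed by one occluder, per the formula above:
    max(0, dot(N, V)) * (1 / (1 + d)), where V is the normalized
    vector from occludee to occluder and d is the distance between
    them. Positions and the normal are 3-tuples; names are mine."""
    vx = occluder_pos[0] - occludee_pos[0]
    vy = occluder_pos[1] - occludee_pos[1]
    vz = occluder_pos[2] - occludee_pos[2]
    d = math.sqrt(vx * vx + vy * vy + vz * vz)
    if d == 0.0:
        return 0.0  # coincident points cannot occlude
    ndotv = (normal[0] * vx + normal[1] * vy + normal[2] * vz) / d
    return max(0.0, ndotv) * (1.0 / (1.0 + d))
```

You would run this for each sample in your kernel and accumulate the results; occluders behind the surface (negative dot product) contribute nothing, and more distant occluders contribute less.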

The nice thing is that, if you also want to fake some indirect lighting in screen space, you can use a similar idea except treat the occluders as actual point lights, although this is most useful in a deferred rendering situation (intuitively it requires at minimum a color buffer in addition to normal and depth).
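A sketch of that variation (self-contained and with made-up names; it assumes you can look up the occluder's surface colour, e.g. from a deferred G-buffer):

```python
import math

def sample_bounce(normal, occludee_pos, occluder_pos, occluder_color):
    """One occluder treated as a tiny point light instead of a blocker:
    the same geometric weight as the occlusion formula, multiplied by
    the occluder's surface colour (an RGB 3-tuple). A sketch only."""
    vx = occluder_pos[0] - occludee_pos[0]
    vy = occluder_pos[1] - occludee_pos[1]
    vz = occluder_pos[2] - occludee_pos[2]
    d = math.sqrt(vx * vx + vy * vy + vz * vz)
    if d == 0.0:
        return (0.0, 0.0, 0.0)
    ndotv = (normal[0] * vx + normal[1] * vy + normal[2] * vz) / d
    weight = max(0.0, ndotv) / (1.0 + d)
    return tuple(weight * c for c in occluder_color)
```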

Edited by cowsarenotevil

##### Share on other sites

Thank you! This is what I have come up with so far from the information from all of you.

Input would be nice. Thanks everyone!

##### Share on other sites

It's a bit hard to tell what I'm looking at; is that just the ambient occlusion multiplied with an unshaded white material? If so, it looks like you'll probably need to tweak some of the values you're using for attenuation and the like so that the effect is a bit less extreme; when the contrast is that high, sampling errors/noise become more visible. You might also want to decrease the sampling radius a bit so that the sampling artifacts are less pronounced (I'm assuming you're already adjusting the sample radius with distance?).

Additionally, for the sake of physical plausibility, the total occlusion for a pixel should almost never go completely black. Remember, the assumption is that light arrives from all directions, so the only way you'd really get solid black is if light were occluded from every direction, and the fact that this is happening in screen space basically prevents that: something blocked from all directions also wouldn't be visible to the camera. The only "exception" would be occluders behind the camera, but then you wouldn't be taking them into account anyway.
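One simple way to enforce that is to clamp the final ambient term (the 0.25 floor below is an arbitrary tuning value I picked for illustration, not something with physical meaning):

```python
def ambient_factor(raw_occlusion, floor=0.25):
    """Map accumulated occlusion to an ambient multiplier that is
    clamped so it never reaches solid black. The 0.25 floor is an
    arbitrary tuning value; adjust it to taste."""
    return max(floor, 1.0 - raw_occlusion)
```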

In general, because SSAO makes so many assumptions (which are not usually true in practice), the less "obvious" the effect is, the better.
