What physical phenomenon does ambient occlusion approximate?


I understand that ambient occlusion is a simplification of, or a partial solution to, global illumination. How this interacts with physically based rendering is less clear, however, as at face value ambient occlusion is creating light from nothing.

The domain for ambient occlusion at a given point is the hemisphere above the tangent plane to the surface at that point, and ambient occlusion is considered to be indirect lighting. However, if you narrow that domain down to the solid angle subtended by an area light, you get direct lighting, which is already computed and occluded using a shadow map.

This gets confusing, since for sunlight, BOTH techniques are commonly used. How do you account for the energy transfer from direct sunlight to indirect sunlight (ambient light)? I'm assuming it's related somehow to the atmospheric scattering of a scene, because as a scene gets more overcast and less sunny, direct lighting is reduced while indirect lighting is increased.

So this means that lots of different scattering phenomena can contribute to ambient lighting: air, clouds, fog, water (for an underwater scene), and so on. How do you deal with all of this in a way that conserves light, or at the very least does not create extra light (losing light is more acceptable, in my humble opinion)?

"So there you have it, ladies and gentlemen: the only API I’ve ever used that requires both elevated privileges and a dedicated user thread just to copy a block of structures from the kernel to the user." - Casey Muratori

boreal.aggydaggy.com


Ambient Occlusion is basically a shadow map for a sky/dome light. Using it for any other purpose is a clever hack, not an application of a physical concept.

You shouldn't really be using AO to shadow sunlight, unless your artists deliberately want to make that kind of non-physical choice for style reasons.

I didn't mean to imply that I wanted to replace direct sunlight rendering with ambient occlusion. I'm simply wondering how to keep the contribution physically correct and variable depending on the environment.

"So there you have it, ladies and gentlemen: the only API I’ve ever used that requires both elevated privileges and a dedicated user thread just to copy a block of structures from the kernel to the user." - Casey Muratori

boreal.aggydaggy.com


I'm simply wondering how to keep the contribution physically correct and variable depending on the environment.

I'm not sure it's realistic to make things like the sky brightness physically correct. Clouds might give off more light than a clear blue sky, but it's going to depend on so many factors. Probably more practical to just have some artistic control that is modulated by the overall sunlight.

For the "ground" part of your sky dome, the amount of light given off can be calculated assuming you know the approximate albedo of the ground and the amount of light coming from the sun or top half of the sky dome.

I did some experiments to try to improve my ambient lighting which may be of interest:

https://mtnphil.wordpress.com/2014/05/03/global-illumination-improving-my-ambient-light-shader/

The short version: AO approximates the occlusion of direct lighting from a distant hemispherical light source.

The long version:

AO essentially approximates the shadowing/visibility that you would use for computing diffuse lighting from a distant light source that covers the entire hemisphere visible from a surface (hence Hodgman's comment about sky lighting). It actually works fairly well for the case where you have an infinitely distant light source with constant incoming lighting for every direction on the hemisphere. To understand why, let's look at the integral for computing AO:

AO = IntegrateAboutHemisphere(Visibility(L) * (N dot L)) / Pi

where "Visibility(L)" is a visibility term that returns 0.0 if a ray intersects geometry in the direction of 'L', and 1.0 otherwise. Note that the "1 / Pi" bit is there because the integral of the cosine term (the N dot L part) comes out to Pi, and so we must divide by Pi to get a result that's of the range [0, 1].

Now let's look at the integral for computing direct diffuse lighting (no indirect bounce) from an infinitely distant light source (like the sky), assuming a diffuse albedo of 1:

Diffuse = IntegrateAboutHemisphere(Visibility(L) * Lighting(L) * (N dot L)) / Pi

where "Lighting(L)" is the sky lighting in that direction.

When we use AO for sky light, we typically do the equivalent of this:

Diffuse = AO * IntegrateAboutHemisphere(Lighting(L) * (N dot L)) / Pi,

which if we substitute in the AO equation gives us this:

Diffuse = (IntegrateAboutHemisphere(Visibility(L) * (N dot L)) / Pi) * (IntegrateAboutHemisphere(Lighting(L) * (N dot L)) / Pi)

Unfortunately this is not the same integral that we used earlier for computing direct lighting, since you generally can't just pull out terms from an integral like that and get the correct result. The only time we can do this is if the Lighting term is constant for the entire hemisphere, in which case we can pull it out like this:

Diffuse = Lighting * IntegrateAboutHemisphere(Visibility(L) * (N dot L)) / Pi

Which means that we can plug in the AO like so:

Diffuse = Lighting * AO

and it works! Unfortunately, the case we've constructed here isn't a very realistic one for two reasons:

  1. Very rarely is the incoming lighting constant in all directions surrounding a surface, unless perhaps you live in The Matrix. Even for the case of a skydome you have some spots that are brighter or differ in hue due to sun scattering or cloud coverage. For such cases you need to compute Visibility * Lighting inside of the integral in order to get the correct direct lighting. However, the approximation can still be fairly close to the ground truth as long as the lighting is fairly low frequency (in other words, the intensity/hue doesn't rapidly change from one direction to another). For high-frequency lighting the approximation will be pretty poor; the absolute worst case is an infinitely small point light, which should produce perfectly hard shadows. (The sketch after this list puts numbers on the gap.)
  2. In reality, indirect lighting will be bouncing off of the geometry surrounding the surface. For AO we basically assume that all of the surrounding geo is totally black and has no light reflecting off of it, which is of course never the case. When considering purely emissive light sources it's okay to have a visibility term like we had in our integral; however, you then need a second integral over the hemisphere that computes indirect lighting from all other visible surfaces.
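To put a number on reason #1, here's a small Python sketch (my own toy setup, not something from this post): a wall blocks part of the hemisphere, and we compare the true shadowed integral against the AO * lighting factorization, first for a constant sky and then for a sky whose bright patch happens to hide behind the wall.

import math, random

def uniform_hemisphere():
    # Uniform directions on the hemisphere above +Z (pdf = 1 / (2 * Pi)).
    z = random.random()
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def visibility(L):
    # Toy occluder: a wall blocks every direction with L.x > 0.5.
    return 0.0 if L[0] > 0.5 else 1.0

def sky(L, constant):
    # Constant sky vs. a sky with a bright patch inside the blocked region.
    return 1.0 if constant else (5.0 if L[0] > 0.7 else 0.2)

def compare(constant, n=200000):
    exact = ao = light = 0.0
    for _ in range(n):
        L = uniform_hemisphere()
        exact += visibility(L) * sky(L, constant) * L[2]  # Visibility * Lighting inside the integral
        ao += visibility(L) * L[2]                        # the AO integral
        light += sky(L, constant) * L[2]                  # the unshadowed lighting integral
    exact, ao, light = (2.0 * x / n for x in (exact, ao, light))
    print("exact = %.3f   AO * lighting = %.3f" % (exact, ao * light))

compare(constant=True)   # the two agree (up to noise)
compare(constant=False)  # AO * lighting is too bright: AO can't know the blocked part held the bright patch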

Reason #1 is why you see the general advice to only apply it to "ambient" terms, since games often partition lighting into low-frequency indirect lighting (often computed offline) that's combined with high-frequency direct lighting from analytical light sources. The "soft" AO occlusion simply doesn't look right when applied to small light sources, since the shadows should be "harder" for those cases. You also don't want to "double darken" your lighting for cases where the dynamic lights also have a dynamic shadowing term from shadow maps.

As for #2, that's tougher. Straightforward AO will always over-darken compared to ground truth, since it doesn't account for bounce lighting. It's possible to do a PRT-style computation where you try to precompute how much light bounces off of neighboring surfaces, but that can exacerbate issues caused by non-constant lighting about the hemisphere, and it also requires having access to the material properties of the surfaces. It's also typically not possible to do this very well for real-time AO techniques (like SSAO), so you generally don't see anyone doing that. Instead it's more common to have hacks like only considering close-by surfaces as occluders, or choosing a fixed AO color to fake bounced lighting.

From some thinking and prototyping, I think what I want to do is at least separate the indirect sunlight portion of ambient lighting from surface emissivity. This should allow me to more easily physically define the energy transfer between direct sunlight and indirect sunlight, i.e. sunlight that has been scattered in the atmosphere.

What I'm stuck on right now is a way of determining sky occlusion. Since I should be able to represent my world quite trivially with a signed distance field, I figured that was the best place to start. I first tried ray marching, which gave good results but required too many samples in the worst case (though perhaps dynamic branching will be cheap enough to offset this with early-out tests?). I thought about a heuristic-based approach, since at the end of the day I don't need the exact contact point, only whether or not there is an intersection. However, I can't think of a way to do this without a lot of incorrect occlusion samples. In practice this might turn out to be acceptable, but I won't know until I implement it.
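For reference, here's roughly the shape of the ray march I tried, as a minimal Python sketch (the one-sphere SDF is a stand-in, not my actual world representation):

import math

def scene_sdf(p):
    # Stand-in scene: a single sphere of radius 1 centered at (0, 0, 3).
    return math.dist(p, (0.0, 0.0, 3.0)) - 1.0

def sky_occluded(origin, direction, max_dist=50.0, hit_eps=1e-3, max_steps=64):
    # Sphere-trace toward the sky, with early outs in both directions:
    # stop as soon as we hit something, or as soon as we've escaped the scene.
    t = 10.0 * hit_eps  # start slightly off the surface to avoid self-intersection
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = scene_sdf(p)
        if d < hit_eps:
            return True   # blocked
        t += d            # safe step: nothing can be closer than d
        if t > max_dist:
            return False  # escaped: the sky is visible
    # Step budget exhausted (e.g. a grazing ray): call it occluded, which
    # loses light rather than creating it, per the preference above.
    return True

print(sky_occluded((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # True: the sphere is straight overhead
print(sky_occluded((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # False: the horizon is clear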

Am I barking up the wrong tree here or am I just missing some key technique?

"So there you have it, ladies and gentlemen: the only API I’ve ever used that requires both elevated privileges and a dedicated user thread just to copy a block of structures from the kernel to the user." - Casey Muratori

boreal.aggydaggy.com

How this interacts with physically based rendering is less clear

Ambient occlusion is not physically correct; every renderer that uses it is breaking all the previous effort to be energy conserving.

It is also calculated in a different way, not just from tool to tool: two artists sitting next to each other will create different occlusion maps. Imagine a big building and the definition that AO captures the direct lighting of the sky dome; that implies that most areas inside the building would have plain black AO.
That would look OK-ish for a game like SimCity, but what would be the point of creating 99%-black maps in indoor games? So artists tweak the outcome, perhaps to only count occlusion from close-by objects. Some save the actual occlusion percentage within a specific radius; others also take the distance to the occluders into account (see the sketch below).
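As a sketch of that last variant (hypothetical names in Python, just to illustrate the idea):

def occlusion_sample(hit_distance, radius):
    # A binary test would be: 1.0 if hit, else 0.0. This variant instead lets
    # the contribution fall off with distance: a hit right at the surface
    # fully occludes, a hit at `radius` (or no hit, hit_distance = None)
    # contributes nothing.
    if hit_distance is None or hit_distance >= radius:
        return 0.0
    return 1.0 - hit_distance / radius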

So the whole idea is just to make things look 'interesting'; it's not to make anything correct. If you had a couch on the floor in a room and you calculated AO, everything between the couch and the floor would turn out mostly black. But what if the only light source in the room is between the floor and the couch? The falloff of the light would look kind of inverted.

So there is really nothing physically correct about it; to be physically correct you would not darken corners everywhere: http://nothings.org/gamedev/ssao/

The reason it was done in Crysis was that the alternative was a plain ambient color. Everything in the sun looks incredibly nicely shaded, plants even get back-side lighting, and then you step into shadowed areas and it looks like a Quake level in the Radiant editor. Switch it off and you'll see what I mean ;)


The right way to do sky lighting would be to actually compute the whole light distribution (not just direct sky light, but all the bounces). Baking that into a texture would give you at least the radiosity solution (the rooms in the building would all get lit up to some degree). One step further is to save directional information, e.g. directional radiosity maps like Half-Life 2 did back then: http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf
One step further would be to really save the hemisphere lighting per sample (which could be a vertex or a texel of your map); that could be accomplished using spherical harmonics.
And yet another step further would be precomputed radiance transfer, which tries to save the amount of light per sample as a function of the light direction, so that you could move the sun or rotate the objects. There are samples of PRT in older DirectX SDKs and you'll find quite some examples on YouTube.
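As a sketch of the spherical-harmonics idea (bands 0 and 1 only; my own toy example in Python, not one of the SDK samples): project the incoming lighting into a handful of SH coefficients per sample offline, then evaluate them in any direction at runtime.

import math, random

def sh_basis(d):
    # First four real SH basis functions (bands 0 and 1).
    x, y, z = d
    return (0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x)

def uniform_sphere():
    # Uniform directions on the full sphere (pdf = 1 / (4 * Pi)).
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def project_to_sh(light_fn, n=100000):
    # c_i = Integral(light(d) * Y_i(d)) over the sphere; Monte Carlo with
    # uniform sampling gives the (4 * Pi / n) factor.
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n):
        d = uniform_sphere()
        L = light_fn(d)
        for i, y in enumerate(sh_basis(d)):
            coeffs[i] += L * y
    return [c * 4.0 * math.pi / n for c in coeffs]

def eval_sh(coeffs, d):
    return sum(c * y for c, y in zip(coeffs, sh_basis(d)))

# Toy sky: bright toward +Z (up), dim at the horizon and below.
sky = lambda d: 2.0 * max(d[2], 0.0) + 0.1
coeffs = project_to_sh(sky)
print(eval_sh(coeffs, (0.0, 0.0, 1.0)))  # ~1.60 (true value 2.1: low-order SH smooths the peak)
print(eval_sh(coeffs, (1.0, 0.0, 0.0)))  # ~0.60 (true value 0.1)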

With physically based shading you'd need to integrate over the sample hemisphere with your BRDF per pixel (since, except for perfect Lambert, BRDFs are always view dependent). That would be a lot of data to push and also a lot of offline computation, but it would probably look damn sexy.

I feel like it's a little unfair to just say that AO is fundamentally broken or not useful, even in the context of physically-based rendering. PBR in general is still full of approximations and weak assumptions, but the techniques are still useful and can produce more "correct" results compared to alternatives, since they're still at least attempting to model real-world physical phenomena. A particularly relevant example is the geometry/shadowing term in microfacet BRDFs: it's been pointed out many times how they make the (incorrect) assumption that neighboring microfacets only occlude light instead of having light bounce off them, and yet it's still better than having no occlusion term at all. It's also a much more feasible solution compared to something crazy like path tracing at a microfacet level, so you can also just look at it as a reasonable trade-off given performance constraints.

I feel like you can look at AO the same way: sure, it makes assumptions that are usually wrong, but if used judiciously it can absolutely give you better results than not having it. For instance, representing occlusion from dynamic objects and characters: sure, you'd get better results if you simulated local light bouncing off those objects, but it's a heck of a lot cheaper to just try to capture their occlusion in AO form, and that can still be a lot better than having no occlusion whatsoever.

Aye, ideally you'd path trace down to a microfacet level with the same number of samples as actual photons in the scene, with spectral response the whole way. But not even Hollywood does that. Approximations are needed, especially the more realtime you want to go.

There are plenty of hacks for ambient occlusion that are employed to make it closer to correct. Bent normals are popular, and Crytek extend that to shadowing specular by the same amount as well: http://advances.realtimerendering.com/s2014/index.html#_RENDERING_TECHNIQUES_IN (the Ryse slides). Besides, GI is inherently expensive: every surface can potentially light every other surface, and each additional bounce multiplies the work, with three bounces considered the minimum for a general scene to converge toward something appropriate. Considering no game has yet shipped with even a single proper short-ranged bounce in realtime, I'd say approximations will be with us for the rest of this generation.

To show what it simulates, here's a first rendering with a form of ambient occlusion:
[image: 50pc-diffuse-rasterizer.jpg]

The ambient occlusion obscures an ambient cubemap (displayed color = aoterm * ambientcubemapcolor).

Then here's a simulation of the same scene using global illumination (path tracing):
[image: 50pc-diffuse-path-tracing.jpg]

The path tracing traces all possible rays between the eye/camera and the light sources (the primary light sources in this scene are the sun and the environment map, which encompasses the sky but also the ground, which reflects a bit of light; of course in physical terms the sky and ground are secondary and N-ary light sources, but the simulation doesn't differentiate).

The rays bounce around, and there's a bit of inter-reflection in that second image, but overall the two images are pretty close. Not exactly the same, but you get the idea.

To obtain the first image: the sun contributes direct lighting, is unaffected by ambient occlusion, but has somewhat "hard" shadows (the sun occupies a non-zero area in the sky, so in theory it has a penumbra). The rest (sky + ground) contributes to the ambient cubemap, which is an average using the Lambert diffuse model (a simplification) and is multiplied by the AO term. The AO term is only a function of the surrounding geometry. The sun term and the ambient term are then added together to get the final color value.
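A minimal sketch of that combine in Python (the names and input values are made up for illustration, not the code behind the images):

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(albedo, N, sun_dir, sun_color, sun_shadow, ambient, ao):
    # Direct sun: Lambert diffuse, gated by the shadow-map term, NOT by AO.
    n_dot_l = max(dot(N, sun_dir), 0.0)
    direct = [sun_shadow * c * n_dot_l * albedo / math.pi for c in sun_color]
    # Ambient: the pre-averaged cubemap sample, attenuated by the AO term only.
    indirect = [ao * c * albedo for c in ambient]
    # Both contributions are simply added to get the final color.
    return [d + i for d, i in zip(direct, indirect)]

print(shade(albedo=0.5, N=(0.0, 0.0, 1.0), sun_dir=(0.0, 0.0, 1.0),
            sun_color=(3.0, 3.0, 2.8), sun_shadow=1.0,
            ambient=(0.2, 0.3, 0.5), ao=0.7))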

as at face value ambient occlusion is creating light from nothing.

It's actually the opposite. Ambient *occlusion*, as its name indicates, occludes light sources, so the effect subtracts light from the scene. It's an approximation of the real-life occlusion of secondary light sources, as shown in the two images above.

Maybe you meant: how do you compute the ambient term (the ambient term adds light to a scene and is then multiplied by the AO term)? Well, it varies from game to game. Here I simply averaged the env map and stored it in a cube map. If there's fog and so on, it would also affect the env map to some extent and so would naturally be factored into the resulting cube map. Some games will have light probes, either baked at design time or calculated on the fly (if the lighting conditions are dynamic).

Early games did not have an ambient cubemap and instead had a simple ambient color (constant in all directions). Same idea, except that you average everything down to a single term and you lose some "detail" in the scene because the ambient light is no longer directional. They didn't have to use light probes; the color would most likely be controlled directly by artists.

[image: lambertian-model-direct-shadow-poster.pn]

Here's a similar scene with a constant ambient term and no ambient occlusion.

