# HBAO/GTAO casting too much occlusion from thin objects

## Recommended Posts

Hi,

I am currently implementing the ambient occlusion technique described in the paper & presentation "Practical Realtime Strategies for Accurate Indirect Occlusion" (aka Ground-Truth Ambient Occlusion).

The results so far are pretty good. I have found one edge case, though, where I think the technique breaks down, and I'm wondering whether there is a way to solve the problem, or whether it simply cannot be solved with depth buffers alone.

The problem occurs with thin, elongated objects: they cast far too much occlusion, especially when viewed from a grazing angle.

In the images below I put a thin pole, at a ~20° angle, that sticks into the floor plane. For reference there is also a big cube touching the ground plane, as well as a very thin wall (of similar thickness to the pole).

In the first two images you can see that the occlusion is much too strong where the pole comes close to the ground plane, when viewed from a grazing angle.

When viewed from above (the second pair of images) the occlusion looks more reasonable (much weaker).

As far as I understand it, the problem is caused by the fact that the HBAO/GTAO algorithm searches for the horizon angle but has no knowledge of discontinuities in the depth buffer. In the case below, floor fragments that lie below (in Y) and behind (in Z) the thin pole will find their steepest horizon angle on the first fragment belonging to the pole. The visible part of the hemisphere [V1] is therefore bounded only by the horizon angle found on the ground plane [h1] and the angle pointing at the front-most fragment of the pole [h2].

That means the total visibility (disocclusion) determined over the hemisphere is missing the entire visible back half [V2], spanning between the floor behind the pole [h4] and the back-tangent horizon onto the pole [h3].
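To illustrate, here is a tiny 1D sketch (in Python; all numbers and names are hypothetical, not from my actual shader) of why a single depth layer cannot distinguish the thin pole from a solid wall during the horizon march:

```python
import math

def horizon_angle(heights, step=1.0):
    """Steepest horizon angle (radians above the tangent plane) found by an
    HBAO-style march along one slice. heights[i] is the height of the i-th
    sample above the receiver point; a single depth layer only ever stores
    the front-most surface along each view ray."""
    best = 0.0
    for i, h in enumerate(heights, start=1):
        best = max(best, math.atan2(h, i * step))
    return best

pole = [0.0, 0.0, 0.0, 2.0, 0.0, 0.0]  # thin pole: front face at sample 4, floor behind it
wall = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]  # solid wall with the same front face
# Both slices yield the same horizon, so the visible region behind the
# pole (the [V2] part of the hemisphere) is lost entirely.
assert horizon_angle(pole) == horizon_angle(wall)
```

The march has no way to tell that samples 5 and 6 are visible floor behind the pole; it only ever raises the horizon and never re-opens it.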

At least that's what I think is causing the problem. Is my thinking correct so far? And if so, are there ways to fix this issue within the GTAO/HBAO approach?

I have already tried out some ideas I came up with, using a back-face Z-buffer and full depth peeling with multiple layers, but nothing so far has solved the problem without introducing new problems in the regular AO cases.

Any hints would be much appreciated, Thanks.

Schematic

Side-View RGB

Side-View AO (the thin pole casts way too much occlusion onto the ground plane)

Top-View RGB

Top-View AO (from the top perspective the problem is not as noticeable)

Edited by cgWolf

##### Share on other sites

Yeah, I've seen some research with layered depth (but I can't think of the paper names...)

A last resort would be to draw these problematic objects after computing AO...

Or write some kind of color/stencil value per pixel that says whether that pixel belongs to a problematic object. Then, when computing horizons, if any 'problematic-flagged' depth values were used, artificially lighten the resulting AO value.
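Roughly like this (just a sketch; the flag plumbing and the `lighten` factor are made up):

```python
def resolve_ao(ao, hit_flagged, lighten=0.5):
    """ao is visibility in [0, 1] (1 = fully unoccluded). If any horizon
    sample landed on a pixel flagged (via stencil/color) as a problematic
    thin object, blend the result toward fully unoccluded. `lighten` is
    an arbitrary tuning constant."""
    if not hit_flagged:
        return ao
    return ao + (1.0 - ao) * lighten
```

So `resolve_ao(0.4, True)` lightens the value to roughly 0.7, while unflagged pixels pass through unchanged.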

##### Share on other sites

Importance layered depth is neat. FP16 depth is enough, because you don't need range beyond your SSAO radius anyway; then just do another depth layer (a K-buffer or whatever) in tiles around the edges of each object. The results can be pretty nifty and much more temporally stable, but that doesn't mean it isn't still costly.

The original GTAO paper actually has a separate hack for this exact scenario, which is to somehow lessen the contribution from thin objects. It's been a while, so I don't remember the details, but it should be in the paper, yes? Or is it a different version of the same paper? OK yeah, glancing through it, they have a "thickness heuristic" hack for this exact scenario. Is that not working?
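From memory, the horizon update with that heuristic looks something like the following sketch (Python for readability; the `thickness` constant and the exact blend are my recollection, not the paper verbatim):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def update_horizon(horizon_cos, sample_cos, thickness=0.8):
    """One possible reading of the GTAO 'thickness heuristic'. A plain
    max() pins the horizon on a thin occluder forever once it has been
    seen; letting the horizon decay toward farther (lower) samples
    weakens a thin object's contribution after the march steps past it."""
    if sample_cos > horizon_cos:
        return sample_cos  # sample raises the horizon, as in plain HBAO
    return lerp(sample_cos, horizon_cos, thickness)  # otherwise let it sink back
```

With `thickness = 1.0` this degenerates to the usual max() update; smaller values make thin occluders fade out faster behind the march.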

Edited by Frantic PonE

##### Share on other sites
3 hours ago, Hodgman said:

Yeah, I've seen some research with layered depth (but I can't think of the paper names...)

##### Share on other sites
17 hours ago, Frantic PonE said:

The original GTAO paper actually has a separate hack for this exact scenario, which is to somehow lessen the contribution from thin objects. It's been a while, so I don't remember the details, but it should be in the paper, yes? Or is it a different version of the same paper? OK yeah, glancing through it, they have a "thickness heuristic" hack for this exact scenario. Is that not working?

Yes, I tried to implement the described heuristic, but it didn't change the resulting occlusion much in the above case (maybe I did something wrong; I'll probably need to give it another try).

Multi-view AO looks really interesting. I had already wondered myself whether it would be possible to reuse depth information that is already present in the form of shadow maps to fill in some of the otherwise absent AO. I will definitely have to look through that paper.

Thanks everyone.

##### Share on other sites
Posted (edited)

Forgot to mention one thing about the heuristic, and why I think it would not work properly even for a slightly modified version of the scenario above. If I made the "thin pole" object a very thin, wall-like object (just like the other one already in the scene), with the same orientation and just barely touching the ground, it would still cast exactly the same occlusion even with the heuristic, as I understand it from the paper. That's because the heuristic relies on the assumption that an object's depth extent is similar to its screen-space width, which would not hold for the mentioned geometry. So this is not really a viable solution for me. 😞

Edited by cgWolf

##### Share on other sites
Posted (edited)

What leaves me worried is the following excerpt from the presentation notes and the corresponding slides (43-45)

Quote

We can calculate the ambient occlusion integral as a double integral in polar coordinates.

The inner integral integrates the visibility for a slice of the hemisphere, as you can see in the left,
and the outer integral sweeps this slice to cover the full hemisphere.

The simplest solution would be to just numerically solve both integrals.

But the solution we chose, horizon-based ambient occlusion, which was introduced by Louis Bavoil in 2008,
made the key observation that the occlusion as pictured here can’t happen when working with height fields.

Using height-fields we would never be able to tell that the areas in green here, are actually visible.

The key consequence of this, is that we can just search for the two horizons h1 and h2 and that captures all the visibility information that can be extracted from a height map, for a given slice.

I had a look at the presentations & PDF provided by Louis Bavoil in 2008, but was unable to find any details in their material that confirm or disprove the above statement. My many failed attempts at finding a solution, however, leave me wondering whether I'm trying to solve the impossible (with Z-buffers as the only source of information about the scene), i.e. figuring out the green area(s) in the picture above.
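For reference, this is how I read the slice-based double integral from the slides (my own transcription, so take the notation with a grain of salt; $\gamma$ is the angle of the projected normal within the slice, and $h_1(\phi)$, $h_2(\phi)$ are the two horizon angles for slice direction $\phi$):

```latex
A(\mathbf{x}) = \frac{1}{\pi} \int_{0}^{\pi} \int_{h_1(\phi)}^{h_2(\phi)}
    \cos(\theta - \gamma)\,\lvert \sin\theta \rvert \; d\theta \, d\phi
```

The inner integral is the per-slice visibility, the outer one sweeps the slice over the hemisphere; the horizon-based assumption is exactly that the inner bounds are just $h_1$ and $h_2$, with nothing visible in between.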

Edited by cgWolf

##### Share on other sites
Posted (edited)

Yeah it's impossible unless you depth slice a ton or whatever.

Point is, SSAO is for very small-radius ambient occlusion, where you can just assume the scene is a heightfield and probably be correct. Something like a 0.2 or 0.1 meter radius (depending on whether you're first- or third-person, assuming human-scale standard units) is typically the max you'd want to use for SSAO, at least IMO. Some people use more, but the larger you go, the more obvious the errors become.

Edited by Frantic PonE
