SSAO using line integrals

PixelSmasher    445
Hello everyone!

I am currently implementing the SSAO technique used in Toy Story 3 in my rendering pipeline.
The technique is thoroughly described in the slides from [url="http://advances.realtimerendering.com/s2010/index.html"]SIGGRAPH 2010[/url].

I had no problem implementing it until I hit an annoying banding issue: a high occlusion factor appears on certain edges of the meshes, mainly on concave shapes.
After several days of eye-bleeding struggle with black lines, I've come to think this is a natural flaw of the algorithm, but does the wisdom of gamedev have any idea how the Toy Story team solved it?

Thanks in advance.

PS: this capture shows the problem in action: the SSAO quality is quite good (without any filtering), but the arches suffer from banding.
[img]http://i.imgur.com/CJQjz.jpg[/img]

ATEFred    1700
Apply a depth-aware (bilateral) blur over it, and you're sorted :). You have to do that with most SSAO techniques, due to the limited number of samples you can take during occlusion computation.
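For illustration, here is a minimal sketch of such a depth-aware 1D blur in Python/NumPy. The function name, the uniform spatial weight, and the sigma_depth value are my own illustrative choices, not anything from the thread:

```python
import numpy as np

def bilateral_blur_1d(ao, depth, radius=4, sigma_depth=0.05):
    """Depth-aware 1D blur: neighbours whose depth differs too much
    from the centre pixel get near-zero weight, so occlusion does not
    bleed across geometric edges."""
    n = len(ao)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # Spatial weights are uniform here; a Gaussian falloff works too.
        dw = np.exp(-((depth[lo:hi] - depth[i]) ** 2) / (2 * sigma_depth ** 2))
        out[i] = np.sum(ao[lo:hi] * dw) / np.sum(dw)
    return out
```

Running it once horizontally and once vertically gives the usual separable approximation of the 2D filter.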

PixelSmasher    445
I intentionally disabled the blurring passes to show the banding artifact.
Even with the filter applied, the black lines on the curved surfaces remain (blurred, of course, but still present).

PixelSmasher    445
The bands indeed appear specifically on edges between non-coplanar surfaces.

I thought about using the normals to discard edges forming an angle bigger than a given threshold, but I don't want to access the normal buffer, for performance's sake (and the Toy Story team didn't use it, after all!).

jameszhao00    271
Intuitively, the bands are supposed to exist there. The problem is how intense they should be.

I'm not quite sure about the specific line-integral implementation, but ambient occlusion should be calculated using the projected solid angle (i.e. weighting each direction by cos(theta), a.k.a. n·l). Do the Toy Story slides account for the projected solid angle? It seems they use occluded volume / total volume for the AO value.

PixelSmasher    445
The line integrals method doesn't rely on the classical "ray-casting(-marching)" inside a sphere.
Here the volume integration is based on the contribution of the heightfield (the depth buffer) to the sphere volume.

Simplified, the formula looks like this:
[CODE]
ao = 0
for each Sample in Samples
{
    Sample.contribution = (Sample.depth - SphereCenter.depth) * Sample.volume
    ao += Sample.contribution
}
// + insert fancy code to offset/scale the ao into [0,1]
[/CODE]

As you can see, the notion of solid angle (or angle itself!) doesn't exist here.
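To make that loop concrete, here is a rough, self-contained Python transcription of the estimator (my own sketch, not code from the slides): each sample's weight is the length of the vertical chord through the sphere at that offset, and the occluded part of the chord is clamped against the heightfield. The sign convention (smaller depth = closer to the camera) and the final occluded/total normalisation are my assumptions:

```python
import math

def line_integral_ao(center_depth, samples, radius=1.0):
    """Line-integral SSAO estimator (sketch).
    samples: (offset_r, sample_depth) pairs, where offset_r is the
    sample's distance from the sphere centre in the screen plane."""
    occluded = 0.0
    total = 0.0
    for offset_r, sample_depth in samples:
        # Half-length of the vertical chord through the sphere at this offset.
        half_h = math.sqrt(max(radius * radius - offset_r * offset_r, 0.0))
        # Part of the chord buried under the heightfield, clamped to the chord.
        occ = min(max(center_depth - sample_depth + half_h, 0.0), 2.0 * half_h)
        occluded += occ
        total += 2.0 * half_h
    return occluded / total if total > 0.0 else 0.0
```

On a flat plane every sample sits at the centre's depth, each chord is exactly half buried, and the estimator returns 0.5 — the baseline that the offset/scale step then remaps into [0,1].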


I'm currently trying to detect surface continuity using cheap normal reconstruction, but I'm getting cheap results [img]http://public.gamedev.net//public/style_emoticons/default/unsure.png[/img]

jameszhao00    271
[quote name='PixelSmasher' timestamp='1339188105' post='4947472']
The line integrals method doesn't rely on the classical "ray-casting(-marching)" inside a sphere.
Here the volume integration is based on the contribution of the heightfield (the depth buffer) to the sphere volume.

Simplified, the formula looks like this
[CODE]
ao = 0
for each Samples
{
Sample.contribution = (Sample.depth - SphereCenter.depth) * Sample.volume
ao += Sample.contribution
}
// + insert fancy code to offset/scale the ao into [0,1]
[/CODE]

As you see, the notion of solid angle (or angle itself!) doesn't exist here.


I'm currently trying to detect surface continuity using cheap normal reconstruction but I'm getting cheap results [img]http://public.gamedev.net//public/style_emoticons/default/unsure.png[/img]
[/quote]
Using the projected solid angle makes obstruction near the horizon matter much less. In your case, you're having problems with slightly concave areas, which is essentially obstruction near the horizon. I'm not saying that the projected solid angle will fix the issue, but ignoring it at least exaggerates it.

Anyway, using the projected solid angle amounts to weighting slight obstructions (angle(n, l) approaching pi/2) less than big obstructions (angle(n, l) approaching 0).

PixelSmasher    445
Weighting slight obstructions is achieved by the value attributed to Sample.volume (used earlier in the algorithm).

Sample.volume is the volume of the sphere above and below the area that the sample covers (search for the pretty Voronoi pictures in the slides).
The further the sample is from the center, the smaller the sphere height, and so the smaller the sample volume.

Edit: I may have misused these weights. I'm going to explore that direction.
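For what it's worth, the falloff of that weight is easy to sketch in Python (the flat-patch approximation and the names are mine, not from the slides): the volume of the column above and below a small patch of the sampling disc is just the sphere's chord length there times the patch area.

```python
import math

def column_volume(offset_r, area, radius=1.0):
    """Approximate volume of the sphere column over a small patch of the
    sampling disc at distance offset_r from the centre (flat-patch
    approximation; 'area' is the patch's screen-space area)."""
    chord = 2.0 * math.sqrt(max(radius * radius - offset_r * offset_r, 0.0))
    return chord * area
```

For a unit sphere, column_volume(0.0, 1.0) is 2.0 while column_volume(0.8, 1.0) is only 1.2, so rim samples (the ones a slight fold moves first) do get the smallest weights.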

jameszhao00    271
Imagine the shading point is on the geometric boundary. As we decrease the angle between the two sides (i.e. fold it like a book), sphere obstruction (analytical) increases linearly. The ambient occlusion doesn't.
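The claim is easy to check numerically. The Monte Carlo sketch below (entirely my own construction, names illustrative) puts the shading point on the spine of a book folded to dihedral angle theta, normal along the wedge bisector: the plain fraction of visible sphere directions scales linearly with theta, while the cosine-weighted (projected-solid-angle) visibility does not.

```python
import math
import random

def fold_visibility(theta, n=200_000, seed=1):
    """Visibility at a point on the spine of a 'book' with dihedral
    angle theta (pi = fully open), normal along the wedge bisector (+z).
    Returns (plain sphere visibility, cosine-weighted visibility), both
    normalised so the fully open book gives 1."""
    rng = random.Random(seed)
    vis = cos_vis = cos_norm = 0.0
    for _ in range(n):
        # Uniform direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(1.0 - z * z, 0.0))
        x = r * math.cos(phi)
        inside = math.atan2(abs(x), z) < theta / 2.0  # direction inside the wedge
        w = max(z, 0.0)                               # projected-solid-angle weight
        vis += inside
        cos_vis += w * inside
        cos_norm += w
    return 2.0 * vis / n, cos_vis / cos_norm
```

At theta = pi/2 the plain visibility is 0.5 (linear in theta), but the cosine-weighted visibility stays around 0.7, because the weight concentrates on directions near the normal, which the fold has not yet occluded.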

PixelSmasher    445
I see your point but I have to disagree.

Let's imagine this book completely open (the angle between the pages is Pi).
If we fold it slightly, the depth difference would mainly benefit the volume of the sample furthest from the shading point... multiplied by the lowest weight.

If we reach a smaller angle (say Pi/2), the contribution of the furthest sample will have reached its maximum value, and a sample closer to the shading point will start contributing for real, weighted with a higher value.

This behaviour is not linear and I feel it mimics the contribution of the projected solid angle.

----------------------------

I tried some low-value detection on the occlusion coefficients and obtained this image.
It is not a satisfactory result, as a lot of subtle detail is lost.
[img]http://i.imgur.com/wagcc.png[/img]

jameszhao00    271
Is the 'weight' the vertical volume in that slice?
If so, you're still approximating AO with spherical obstruction (albeit sample based)...

------

I think you have to take into account normals to compensate for those bands.

Pretend you're looking at only the depth buffer. You have no way to differentiate between a hard edge and a smooth edge, as the smooth edge appears hard in the depth buffer. The interpolated shading normal is where you get the 'oh it's a smooth edge' information.
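To make that concrete, here is a small NumPy sketch (my own, not from the thread) that reconstructs normals from a depth image with finite differences. The result is purely geometric: a hard crease and a smooth-shaded crease with the same depth values yield the same faceted normals, so the "it's actually smooth" information only lives in the interpolated shading normals.

```python
import numpy as np

def normals_from_depth(depth, texel=1.0):
    """Per-pixel normals reconstructed from a depth image via finite
    differences.  Purely geometric: it cannot distinguish a hard edge
    from a smooth-shaded edge that has the same depth values."""
    dzdx = np.gradient(depth, texel, axis=1)
    dzdy = np.gradient(depth, texel, axis=0)
    # Tangents (1, 0, dz/dx) and (0, 1, dz/dy); their cross product is
    # (-dz/dx, -dz/dy, 1), normalised below.
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

Comparing such a reconstructed normal against the G-buffer shading normal is one way to tag "smooth" edges and fade the AO there, at the bandwidth cost discussed above.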

Also, if you discard samples, don't forget to rescale the result (the total volume is no longer the volume of a sphere).

PixelSmasher    445
[quote name='jameszhao00' timestamp='1339202969' post='4947538']
Is the 'weight' the vertical volume in that slice?
If so, you're still approximating AO with spherical obstruction (albeit sample based)...
[/quote]
Yes it is, and I have to agree with you [img]http://public.gamedev.net//public/style_emoticons/default/wink.png[/img].

I eventually tried some smooth-edge detection based on normal differences. The results are not that great: GPU bandwidth has exploded, and the slightest idea of performance is dead on the battlefield
(yeah, I know it should be painless in a proper deferred pipeline, but I have to integrate this code into an odd, bloated and insulting pipeline where outputting normals in the G-buffer is prohibited).

I'm still working on it.

PixelSmasher    445
After another day of math tips and tricks, here is the latest version of my SSAO (after a bilateral blur pass).
With good per-scene adjustments, I think this one will do.

[img]http://i.imgur.com/has6Y.png[/img]

PixelSmasher    445
I get good results on PC (5 ms with an unoptimized version).

On console, which is the release target, I get disastrous performance (8 ms at 720p), mainly due to a small texture cache.
I'm currently working on aggressive optimizations and will update this post when I have a shippable implementation.

PixelSmasher    445
Hi again, and sorry for the late answer.
After several optimizations, loop unrolling and code vectorization, I get awesome results: the same image produced in 2.36 ms at 720p![list]
[*]AO computation pass: [b]1.5 ms[/b]
[*]Horizontal bilateral blur: [b]0.43 ms[/b]
[*]Vertical bilateral blur: [b]0.43 ms[/b]
[/list]
Hints:
- The GL_AMD_texture_texture4 extension (which quickly fetches 4 adjacent pixels) was available on the target console. Always use it (or fetch4, or textureGather, or whatever has the same behaviour) and smile while looking at your perf counter.
- Kill loops with fire! They are horridly slow! (Look at the code generated by your shader compiler to check whether they were unrolled at compile time.)
