
## SSAO using line integrals


16 replies to this topic

### #1 PixelSmasher  Members
Posted 08 June 2012 - 08:49 AM

Hello everyone !

I am currently implementing the SSAO technique used in Toy Story 3 in my rendering pipeline.
The technique is thoroughly described in the SIGGRAPH 2010 slides.

I had no problem implementing it, until I met an annoying banding issue: a high occlusion factor appears on certain edges of the meshes, mainly on concave shapes.
After several days of eye-bleeding and struggling with black lines, I've come to think this is an inherent flaw of the algorithm, but would the wisdom of gamedev have any idea how the Toy Story guys solved it?

PS: this capture shows the problem in action: the SSAO quality is quite good (without any filtering), but the arches suffer from banding.

### #2 ATEFred  Members

Posted 08 June 2012 - 11:30 AM

Apply a depth-aware (bilateral) blur over it and you're sorted. You have to do that with most SSAO techniques, due to the limited number of samples you can take during the occlusion computation.
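For reference, a minimal sketch of such a depth-aware blur on a 1-D row of AO values (the kernel radius and depth threshold here are illustrative values, not from any particular engine):

```python
# Depth-aware (bilateral) blur over a 1-D row of AO values: a neighbour only
# contributes if its depth is close to the centre pixel's depth, so occlusion
# never bleeds across depth discontinuities.
# radius and depth_threshold are illustrative, not from the thread.

def bilateral_blur_1d(ao, depth, radius=2, depth_threshold=0.1):
    out = []
    for i in range(len(ao)):
        total, weight = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(ao), i + radius + 1)):
            if abs(depth[j] - depth[i]) <= depth_threshold:  # depth-aware rejection
                total += ao[j]
                weight += 1.0
        out.append(total / weight)  # weight >= 1: the centre pixel always passes
    return out

# A depth edge between index 2 and 3: the blur does not bleed across it.
ao = [0.2, 0.2, 0.8, 0.1, 0.1, 0.1]
depth = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
print(bilateral_blur_1d(ao, depth))
```

A real implementation would use Gaussian spatial weights and run in two separable passes over the 2-D buffer; the rejection logic is the part that matters here.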

### #3 PixelSmasher  Members

Posted 08 June 2012 - 12:38 PM

I intentionally disabled the blurring passes to show the banding artifact.
Even when the filter is applied, the black lines on the curved surfaces remain (blurred, of course, but still present).

Edited by PixelSmasher, 08 June 2012 - 12:54 PM.

### #4 jameszhao00  Members

Posted 08 June 2012 - 12:57 PM

Edit: Incorrect post.

Edited by jameszhao00, 08 June 2012 - 01:19 PM.

### #5 PixelSmasher  Members

Posted 08 June 2012 - 01:16 PM

The bands indeed appear specifically on edges between non-coplanar surfaces.

I thought about using the normals to discard edges forming an angle bigger than a given threshold, but I don't want to access the normal buffer, to keep performance up (and the Toy Story guys didn't use it, after all!).

### #6 jameszhao00  Members

Posted 08 June 2012 - 02:15 PM

Intuitively the bands are supposed to exist there. The problem is how intense they should be.

I'm not quite sure about the specific line-integral implementation, but ambient occlusion should be calculated using the projected solid angle (i.e. weighted by cos(theta), the n·l term). Are the Toy Story slides accounting for the projected solid angle? It seems they use occluded volume / total volume for the AO value.

Edited by jameszhao00, 08 June 2012 - 02:16 PM.

### #7 PixelSmasher  Members

Posted 08 June 2012 - 02:41 PM

The line integrals method doesn't rely on the classical ray-casting (or ray-marching) inside a sphere.
Here the volume integration is based on the contribution of the heightfield (the depth buffer) to the sphere volume.

Simplified, the formula looks like this:

```
ao = 0
for each sample
{
    sample.contribution = (sample.depth - sphereCenter.depth) * sample.volume
    ao += sample.contribution
}
// + insert fancy code to offset/scale the ao into [0,1]
```


As you can see, the notion of solid angle (or angle itself!) doesn't exist here.
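A runnable toy version of that loop, on a 1-D depth row (the per-sample "volume" weights and the normalisation below are illustrative guesses, not the actual values from the Toy Story 3 slides):

```python
# Line-integral-style AO on a 1-D depth row: each sample contributes the part
# of its sphere "column" that is buried under the heightfield.
# Weights and normalisation are illustrative, not the slides' exact values.

def line_integral_ao(depth, center, radius_px=2):
    ao = 0.0
    total = 0.0
    for off in range(-radius_px, radius_px + 1):
        if off == 0:
            continue
        d = off / (radius_px + 0.5)           # normalised distance from the centre
        volume = 2.0 * (1.0 - d * d) ** 0.5   # height of the sphere column there
        # Portion of the column buried under the heightfield, clamped to it.
        buried = min(max(depth[center] - depth[center + off], 0.0), volume)
        ao += buried
        total += volume
    return ao / total                         # 0 = fully visible, 1 = fully buried

flat = [1.0] * 7
pit = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
print(line_integral_ao(flat, 3))   # 0.0 on a flat surface
print(line_integral_ao(pit, 3))    # 1.0 at the bottom of a deep pit
```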

I'm currently trying to detect surface continuity using cheap normal reconstruction, but I'm getting cheap results.

Edited by PixelSmasher, 08 June 2012 - 02:56 PM.

### #8 jameszhao00  Members

Posted 08 June 2012 - 02:58 PM

> The line integrals method doesn't rely on the classical ray-casting (or ray-marching) inside a sphere. Here the volume integration is based on the contribution of the heightfield (the depth buffer) to the sphere volume. [...] As you can see, the notion of solid angle (or angle itself!) doesn't exist here.

Using the projected solid angle makes obstruction near the horizon matter much less. In your case you're having problems with slightly concave areas, which is essentially obstruction near the horizon. I'm not saying that projected solid angles will fix the issue, but skipping them at least exaggerates it.

Anyway, using the projected solid angle amounts to weighting slight obstructions (angle(n, l) approaching pi/2) less than big obstructions (angle(n, l) approaching 0).

Edited by jameszhao00, 08 June 2012 - 02:59 PM.

### #9 PixelSmasher  Members

Posted 08 June 2012 - 03:34 PM

Weighting slight obstructions is achieved by the value assigned to Sample.volume (used earlier in the algorithm).

Sample.volume is the volume of the sphere above and below the area that the sample covers (search for the pretty Voronoi pictures in the slides).
The further the sample is from the center, the smaller the sphere height, and so the smaller the sample volume.
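To illustrate, a quick numeric check (my own sketch, using a regular grid instead of the slides' Voronoi regions) that these per-sample column volumes add up to the full sphere volume, while shrinking with distance from the centre:

```python
import math

# Each sample's weight is the chunk of the sphere directly above and below the
# patch of area it covers: column height 2*sqrt(R^2 - d^2) times the patch
# area. Using a regular grid (an assumption; the slides use Voronoi regions),
# the columns sum to the whole sphere volume.

R = 1.0
n = 200                        # grid resolution, arbitrary
cell = 2.0 * R / n
total = 0.0
for i in range(n):
    for j in range(n):
        x = -R + (i + 0.5) * cell
        y = -R + (j + 0.5) * cell
        d2 = x * x + y * y
        if d2 < R * R:
            total += 2.0 * math.sqrt(R * R - d2) * cell * cell

print(total)                   # close to 4/3 * pi * R^3 ≈ 4.19
```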

Edit : I may have misused these weights. I'm going to explore in that direction.

Edited by PixelSmasher, 08 June 2012 - 04:16 PM.

### #10 jameszhao00  Members

Posted 08 June 2012 - 04:51 PM

Imagine the shading point sitting on the geometric boundary. As we decrease the angle between the two sides (i.e. fold it like a book), the sphere obstruction (computed analytically) increases linearly. The ambient occlusion doesn't.
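A Monte Carlo sanity check of this point, with a toy geometry of my own (not the slides' formulation): a shading point on the spine of a "book" folded to dihedral angle theta, spine along x, open wedge symmetric about the bisector normal +z. The uniformly counted blocked fraction of the sphere grows linearly as the book closes, but the cosine-weighted occlusion does not; notably, a fully open book (theta = pi, i.e. a flat plane) still blocks half the sphere while its cosine-weighted occlusion is exactly zero.

```python
import math
import random

# Compare two measures of occlusion at a point on the spine of a folded book:
#  - fraction of the whole sphere that is blocked (sphere-volume obstruction),
#  - cosine-weighted occlusion about the bisector normal +z (classic AO).
# Toy geometry and weighting of my own, for illustration only.

def occlusion(theta, n=100000, seed=1):
    random.seed(seed)
    blocked = 0
    blocked_cos = 0.0
    total_cos = 0.0
    for _ in range(n):
        # Uniform random direction on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        y = math.sqrt(1.0 - z * z) * math.sin(phi)
        # Direction is unoccluded if it lies inside the open wedge.
        is_open = abs(math.atan2(y, z)) < theta / 2.0
        if not is_open:
            blocked += 1
        w = max(z, 0.0)              # cosine weight about the normal +z
        total_cos += w
        if not is_open:
            blocked_cos += w
    return blocked / n, blocked_cos / total_cos

# Flat plane: half the sphere blocked, zero cosine-weighted occlusion.
print(occlusion(math.pi))
print(occlusion(math.pi / 2))
```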

Edited by jameszhao00, 08 June 2012 - 05:33 PM.

### #11 PixelSmasher  Members

Posted 08 June 2012 - 06:17 PM

I see your point but I have to disagree.

Let's imagine this book completely open (the angle between the pages is pi).
If we fold it slightly, the depth difference mainly benefits the sample furthest from the shading point... which is multiplied by the lowest weight.

Once we reach a smaller angle (say pi/2), the contribution of the furthest sample has hit its maximum value, and a sample closer to the shading point starts contributing in earnest, weighted with a higher value.

This behaviour is not linear, and I feel it mimics the contribution of the projected solid angle.

----------------------------

I tried some low-value detection on the occlusion coefficients and obtained this image.
It is not a satisfactory result, as a lot of subtle detail is lost.

Edited by PixelSmasher, 08 June 2012 - 06:35 PM.

### #12 jameszhao00  Members

Posted 08 June 2012 - 06:49 PM

Is the 'weight' the vertical volume in that slice?
If so, you're still approximating AO with spherical obstruction (albeit sample-based)...

------

I think you have to take normals into account to compensate for those bands.

Pretend you're looking only at the depth buffer. You have no way to differentiate between a hard edge and a smooth edge, because a smooth edge looks hard in the depth buffer. The interpolated shading normal is where the 'oh, it's a smooth edge' information comes from.

Also, if you discard samples, don't forget to rescale the result (the total volume is no longer the volume of a sphere).
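For what it's worth, a minimal sketch of reconstructing a normal from the depth buffer alone via finite differences (an orthographic toy setup with unit texel spacing, assumptions of mine):

```python
import math

# Reconstruct a face normal from depth alone: take horizontal and vertical
# position differences between neighbouring texels and cross them. This is
# the kind of cheap depth-only normal reconstruction discussed above.

def reconstruct_normal(depth, x, y):
    # Position deltas to the right and lower neighbours (orthographic, unit spacing).
    dx = (1.0, 0.0, depth[y][x + 1] - depth[y][x])
    dy = (0.0, 1.0, depth[y + 1][x] - depth[y][x])
    # Cross product dx × dy is perpendicular to the reconstructed surface.
    n = (dx[1] * dy[2] - dx[2] * dy[1],
         dx[2] * dy[0] - dx[0] * dy[2],
         dx[0] * dy[1] - dx[1] * dy[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# On a flat depth plane the reconstructed normal points straight at the camera.
flat = [[1.0] * 4 for _ in range(4)]
print(reconstruct_normal(flat, 1, 1))   # (0.0, 0.0, 1.0)
```

Finite differences give a face normal, so a smooth interpolated edge still reads as hard; that is exactly why the depth buffer alone cannot separate the two cases.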

Edited by jameszhao00, 08 June 2012 - 06:52 PM.

### #13 PixelSmasher  Members

Posted 11 June 2012 - 02:07 AM

> Is the 'weight' the vertical volume in that slice?
> If so, you're still approximating AO with spherical obstruction (albeit sample-based)...

Yes it is, and I have to agree with you.

I eventually tried some smooth-edge detection based on normal differences. The results are not that great, GPU bandwidth has exploded, and the slightest idea of performance is dead on the battlefield.
(Yeah, I know it should be painless in a proper deferred pipeline, but I have to integrate this code into an odd, bloated and insulting pipeline where outputting normals to the G-buffer is prohibited.)

I'm still working on it.

### #14 PixelSmasher  Members

Posted 12 June 2012 - 02:40 PM

After another day of math tips and tricks, here is the latest version of my SSAO (after a bilateral blur pass).
With good per-scene adjustments, I think this one will do.

Edited by PixelSmasher, 10 July 2012 - 03:02 AM.

### #15 johnchapman  Members

Posted 14 June 2012 - 05:55 AM

Looks excellent. What's the performance of this method, just roughly?

### #16 PixelSmasher  Members

Posted 21 June 2012 - 12:08 PM

I get good results on PC (5 ms with an unoptimized version).

On console, which is the release target, I get disastrous performance (8 ms at 720p), mainly due to a small texture cache.
I'm currently working on aggressive optimizations and will update this post when I have a shippable implementation.

### #17 PixelSmasher  Members

Posted 10 July 2012 - 02:46 AM

Hi again, and sorry for the late reply.
After several optimizations, loop unrolling and code vectorization, I get awesome results: the same image produced in 2.36 ms at 720p!
• AO computation pass: 1.5 ms
• Horizontal bilateral blur: 0.43 ms
• Vertical bilateral blur: 0.43 ms
Hints:
- The GL_AMD_texture_texture4 extension (which fetches 4 adjacent pixels in one go) was available on the target console. Always use it (or fetch4, or textureGather, or anything with the same behaviour) and smile while looking at your perf counter.
- Kill loops with fire! They are horridly slow! (Look at the code generated by your shader compiler to check that they were actually unrolled at compile time.)

Edited by PixelSmasher, 11 July 2012 - 07:12 AM.
