Talking about SSAO

3 comments, last by NiGoea 14 years, 5 months ago
Hi all! I've been studying SSAO for the last week, and I'd like to share some of my thoughts.

1 ---- A common problem ----

First of all, let me say a presumptuous thing: I consider most of the implementations I've seen to be wrong. I'm even talking about methods proposed here, in this forum. Take a look at this
It seems to be working... but only apparently. Why are all the surfaces whose normal is perpendicular to the viewer's line of sight (that is, surfaces that are NOT facing the viewer) darkened?? I noticed this in many implementations. The problem is less noticeable here, but still present:
This addresses the problem nicely:
Obviously, in Crysis the problem is totally absent (you can check it out on YouTube).

2 ---- Sampling details ----

As far as I noticed, a problem in some implementations is the one shown in this picture: F is the current fragment, S is the point to sample. The problem is that zbuffer(F) = zbuffer(S) + D => z(F) > z(S). If one evaluates the occlusion contribution as the difference between z(F) and z(S), the more the plane is inclined, the more self-occlusion one gets. To solve this, just use z(A) instead of z(F).

Another interesting thing is the normals' contribution. Why use it to modulate the occlusion value?

3 ---- Filtering ----

People tend to use standard Gaussian blurring, but it leads to wrong results. Take a look at this picture: with a Gaussian blur, the right side would become smoothed. The white area would become gray, and the gray area would become brighter => bleeding. The answer is to use a bilateral filter. Do you think a bilateral filter is too slow? How does your implementation handle it? Does anyone have suggestions?

4 ---- Boundaries ----

For fragments near the border of the frame buffer, some samples can go beyond the boundary. How do you address this?
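To illustrate the bleeding argument from the filtering section: a depth-aware (bilateral) blur keeps the white and gray regions separate where a plain Gaussian would mix them. This is a NumPy sketch, not shader code; the function name `bilateral_filter_ao` and the sigma values are my own illustrative choices.

```python
import numpy as np

def bilateral_filter_ao(ao, depth, radius=2, sigma_s=1.5, sigma_d=0.05):
    """Depth-aware blur: average neighbouring AO values, but down-weight
    neighbours whose depth differs from the centre pixel, so occlusion
    does not bleed across depth discontinuities (edges)."""
    h, w = ao.shape
    out = np.empty_like(ao)
    offs = np.arange(-radius, radius + 1)
    # spatial Gaussian weights, precomputed once
    spatial = np.exp(-(offs[:, None]**2 + offs[None, :]**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win_ao = ao[y0:y1, x0:x1]
            win_d = depth[y0:y1, x0:x1]
            sp = spatial[y0 - y + radius:y1 - y + radius,
                         x0 - x + radius:x1 - x + radius]
            # range weight: penalise depth differences (the "bilateral" part)
            rng = np.exp(-((win_d - depth[y, x])**2) / (2 * sigma_d**2))
            wgt = sp * rng
            out[y, x] = (win_ao * wgt).sum() / wgt.sum()
    return out

# two flat regions at different depths: white AO on the near plane,
# gray AO on the far plane -- the edge between them must stay sharp
ao = np.concatenate([np.ones((8, 4)), np.full((8, 4), 0.5)], axis=1)
depth = np.concatenate([np.full((8, 4), 0.2), np.full((8, 4), 0.8)], axis=1)
smoothed = bilateral_filter_ao(ao, depth)
```

With a plain Gaussian, `smoothed` would be gray on both sides of the edge; here the depth term zeroes out cross-edge weights, so each side keeps its own value.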
Hi again :D

Yeah, most implementations are wrong because SSAO is the approximation of an approximation :P. Ideally one would have GI and that's all.

However, most artifacts SSAO introduces can be eliminated, but it costs processing power. It's a tradeoff between speed and visual quality.


About the self-occlusion problem you are trying to solve: all three videos you posted suffer from it; in the third one it's just less noticeable. In fact it can't be solved using only depth as your starting point; you need some info about the normals as well. In your picture I can't see why using z(A) should solve the problem, since z(A) and z(S) are the same unless you have multiple depth buffers, which is prohibitively expensive.

About the filtering, you are right. A bilateral filter is the way to go, unless you want your image all blurry at the edges, which is ugly :D.

About the samples that go beyond the boundary, I use depth extrapolation just as implemented in the other thread you commented on. Many people just ignore the sample: if you're taking 32 samples but 2 fall outside, discard them and proceed as if you were taking 30 samples instead.
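The "pretend you took 30 samples" idea amounts to normalising by the count of valid samples only. A toy NumPy sketch of that bookkeeping (the function name, the depth-comparison `bias`, and the simple occluder test are my own illustrative assumptions, not code from this thread):

```python
import numpy as np

def ao_with_boundary_discard(depth, cx, cy, offsets, bias=0.02):
    """Toy AO estimate at pixel (cx, cy): count samples whose stored
    depth is closer to the camera than the centre pixel (occluders).
    Samples whose offset lands outside the depth buffer are discarded,
    and the result is normalised by the number of VALID samples, not
    the total number taken."""
    h, w = depth.shape
    occluded = 0
    valid = 0
    for dx, dy in offsets:
        x, y = cx + dx, cy + dy
        if not (0 <= x < w and 0 <= y < h):
            continue                      # sample fell off the frame buffer
        valid += 1
        if depth[y, x] < depth[cy, cx] - bias:
            occluded += 1                 # neighbour is in front: occluder
    return occluded / valid if valid else 0.0
```

Dividing by `valid` instead of `len(offsets)` keeps border pixels from being artificially brightened just because some of their samples were lost.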
Quote:Original post by ArKano22
In your picture I can't see why using z(A) should solve the problem, since z(A) and z(S) are the same unless you have multiple depth buffers, which is prohibitively expensive.


Hi :D

You're right! I was wrong in my picture; I meant a different problem, which occurs when you use z(F) instead of z(A) = z(S). The latter is what I'm doing now.

About the bilateral filter, I found out that it's only worth using with a large kernel, which (for now) I'm not using, since my SSAO buffer is 3-4 times smaller than the frame buffer.

Anyway, I'm going crazy with SSAO. It suffers from many problems.

The most annoying one is the flickering pattern you see when moving: the fragments of the SSAO buffer have a random distribution, but blurring doesn't solve the problem completely. Not in my case, at least, because even with 2-4 blurring passes, the shaded areas tend to change shape while you are moving.
I hope that's clear.

As of now, I'm using normal+depth-buffer SSAO, because I thought I could take advantage of the normal maps. And it actually works, as you can see in the picture:



It helps to convey the feeling of depth, but it leads to noisier sampling...

To obtain a similar result without the normal buffer, I thought I could use the z-buffer only, but perturb it with the materials' height maps, which I'm already using for parallax mapping.
I don't know how to do that yet.

Besides, 16 samples per pixel with half-resolution SSAO is damned slow.

thanks :D
Quote:Original post by NiGoea
About the bilateral filter, I found out that it's only worth using with a large kernel, which (for now) I'm not using, since my SSAO buffer is 3-4 times smaller than the frame buffer.

Anyway, I'm going crazy with SSAO. It suffers from many problems.


If your SSAO buffer is very small, then blurring it is cheap and the results might be good... anyway, it depends on how much time you want your app to spend on SSAO.

It's tricky to get a good implementation. I remember the first ones people wrote after seeing it in Crysis; they all looked like edge-detection filters :D. However, once you get it working reasonably well, it's worth the effort.

Quote:Original post by NiGoea
The most annoying one is the flickering pattern you see when moving: the fragments of the SSAO buffer have a random distribution, but blurring doesn't solve the problem completely. Not in my case, at least, because even with 2-4 blurring passes, the shaded areas tend to change shape while you are moving.
I hope that's clear.


That's the view-dependent shading problem. Since you are calculating everything in view space rather than world space, the shading changes when you move the camera around. :S

In my implementation it is not very noticeable because I use a very good random sampling texture, but that's just luck.
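One common way to build such a sampling texture: generate a small tile of random rotation vectors once, with a fixed seed, and tile it across the screen so every pixel always gets the same rotation for its kernel. A NumPy sketch under those assumptions (the function names, the 4x4 tile size, and the seed are illustrative, not ArKano22's actual texture):

```python
import numpy as np

def make_rotation_tile(size=4, seed=1234):
    """Small tile of random 2D rotation vectors (cos a, sin a).
    Tiling it across the screen gives every pixel a FIXED random
    rotation for its sample kernel, so the noise pattern itself does
    not change from frame to frame -- the remaining flicker then comes
    only from the view-space depth changing."""
    rng = np.random.default_rng(seed)        # fixed seed: repeatable
    angles = rng.uniform(0.0, 2 * np.pi, (size, size))
    return np.stack([np.cos(angles), np.sin(angles)], axis=-1)

def rotation_for_pixel(tile, x, y):
    """Look up the rotation for screen pixel (x, y) by wrapping the
    tile coordinates, like GL_REPEAT texture addressing."""
    return tile[y % tile.shape[0], x % tile.shape[1]]
```

In a shader this would just be a repeated noise texture lookup; the point is that the randomness is a pure function of screen position, never of time or camera.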

Anyway, may the power of ssao be with you! :)

Quote:Original post by ArKano22
That's the view-dependent shading problem. Since you are calculating everything in view space rather than world space, the shading changes when you move the camera around. :S


Yes, I have this problem.
Apparently there is no way to eliminate the flickering: I even tried 32 samples per pixel with full-screen SSAO (damned slow), and although the result was perfect, the problem was still present!

I would be happy to test some demos. Do you have an SSAO demo you could kindly send me?

This topic is closed to new replies.
