SSAO no halo artifacts.

ArKano22
Hi, I'm the SSAO guy again :P. I've come up with a method to reduce SSAO halo artifacts almost to the point of disappearing. I highly doubt it hasn't been done before, but it is so good that, if it has, I don't get why it isn't being used. I call it "depth extrapolation", and it's pretty simple.

Halos appear because of big discontinuities in the depth buffer. If your occlusion function decays as depth starts to increase, the halos will be white; if not, your halos will be black. These artifacts would disappear if we were able to use two different depth buffers: one for the first layer of depth, and a second one containing a second layer of depth. This can be achieved with depth peeling, but that is a very expensive technique, since you have to render your scene several times. So we can't get that second depth, but we can guess it: when we encounter a discontinuity, we reverse the sampling direction and swap the depths when comparing them, effectively performing a linear extrapolation. Some images will help:

How do we know when we have encountered a discontinuity? When the depth difference is "too big". My implementation considers "too big" to mean > 0, and to get the final occlusion at the discontinuity border, I interpolate the original value with the corrected value: the bigger the gap, the more correction is applied. This removes halos almost completely, no matter how hardcore they were in your implementation.

Here's an example using my "gauss SSAO" (see http://www.gamedev.net/community/forums/topic.asp?topic_id=550452), which now looks almost like regular AO while still using only the depth buffer:

The additional cost of this technique depends on how you take your samples. The worst case is that it multiplies the number of samples taken by roughly 1.5. The best case is when your sampling is not randomized and you can store samples the first time you take them; then, with some extra math for the extrapolation, the technique is almost free (no additional texture fetches). Halos in SSAO are gone forever! Wohooo!
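To make the extrapolation concrete, here is a minimal sketch in GLSL-style pseudocode (d0, threshold and compare() are illustrative names only; the real shader is in my next post below, and it blends the corrected value with the original instead of replacing it outright):

float d0 = readDepth(uv);           //first-layer depth at the center pixel
float d1 = readDepth(uv + offset);  //depth at the sample position
float occ;
if (d0 - d1 > threshold) {
    //discontinuity: the sample lies on a foreground object, so the depth
    //of the surface behind it is unknown. Guess it instead:
    float d2   = readDepth(uv - offset); //reverse the sampling direction
    float dExt = 2.0*d0 - d2;            //linear extrapolation through d0
    occ = compare(d0, dExt);             //identical to compare(d2, d0),
                                         //since d0 - dExt == d2 - d0
} else {
    occ = compare(d0, d1);               //no discontinuity: the usual case
}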

ArKano22
The same image as before, but without halo removal:



Gauss SSAO code complete with halo removal:

uniform sampler2D som; // Depth texture
uniform sampler2D rand; // Random texture
uniform vec2 camerarange = vec2(1.0, 1024.0);

float pw = 1.0/800.0*0.5; //half a pixel, assuming an 800x600 target
float ph = 1.0/600.0*0.5;

float readDepth(in vec2 coord)
{
    if (coord.x < 0.0 || coord.y < 0.0) return 1.0;
    float nearZ = camerarange.x;
    float farZ = camerarange.y;
    float posZ = texture2D(som, coord).x;
    //linearize the hardware depth value:
    return (2.0 * nearZ) / (nearZ + farZ - posZ * (farZ - nearZ));
}

float compareDepths(in float depth1, in float depth2, inout int far)
{
    float diff = (depth1 - depth2)*100.0; //depth difference, scaled (0-100)
    float gdisplace = 0.2; //gauss bell center
    float garea = 2.0; //gauss bell width

    //reduce left bell width to avoid self-shadowing
    if (diff < gdisplace){
        garea = 0.1;
    }else{
        far = 1; //the sample is well in front of the center pixel
    }
    float gauss = pow(2.7182, -2.0*(diff-gdisplace)*(diff-gdisplace)/(garea*garea));

    return gauss;
}

float calAO(float depth, float dw, float dh)
{
    float temp = 0.0;
    float temp2 = 0.0;
    float coordw = gl_TexCoord[0].x + dw/depth;
    float coordh = gl_TexCoord[0].y + dh/depth;
    float coordw2 = gl_TexCoord[0].x - dw/depth;
    float coordh2 = gl_TexCoord[0].y - dh/depth;

    if (coordw < 1.0 && coordw > 0.0 && coordh < 1.0 && coordh > 0.0){
        vec2 coord = vec2(coordw, coordh);
        vec2 coord2 = vec2(coordw2, coordh2);
        int far = 0;
        temp = compareDepths(depth, readDepth(coord), far);

        //DEPTH EXTRAPOLATION: if the sample sits on a foreground object,
        //mirror the sample and swap the comparison operands, which is
        //equivalent to comparing against a linearly extrapolated depth.
        if (far > 0){
            temp2 = compareDepths(readDepth(coord2), depth, far);
            temp += (1.0-temp)*temp2; //blend original and corrected occlusion
        }
    }

    return temp;
}

void main(void)
{
    //randomization texture:
    vec2 fres = vec2(20.0, 20.0);
    vec3 random = texture2D(rand, gl_TexCoord[0].st*fres.xy).xyz;
    random = random*2.0 - vec3(1.0);

    //initialize stuff:
    float depth = readDepth(gl_TexCoord[0].st);
    float ao = 0.0;

    for(int i = 0; i < 4; ++i)
    {
        //calculate ao (8 samples per iteration):
        ao += calAO(depth, pw, ph);
        ao += calAO(depth, pw, -ph);
        ao += calAO(depth, -pw, ph);
        ao += calAO(depth, -pw, -ph);

        ao += calAO(depth, pw*1.2, 0.0);
        ao += calAO(depth, -pw*1.2, 0.0);
        ao += calAO(depth, 0.0, ph*1.2);
        ao += calAO(depth, 0.0, -ph*1.2);

        //sample jittering:
        pw += random.x*0.0007;
        ph += random.y*0.0007;

        //increase sampling area:
        pw *= 1.7;
        ph *= 1.7;
    }

    //final values, some adjusting:
    vec3 finalAO = vec3(1.0 - (ao/32.0)); //32 = 4 iterations * 8 samples

    gl_FragColor = vec4(0.3 + finalAO*0.7, 1.0);
}
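The fragment shader reads its texture coordinates from gl_TexCoord[0], so it assumes a fullscreen pass whose vertex shader forwards the quad's UVs. A minimal companion vertex shader (my sketch; any equivalent fullscreen pass will do) would be:

void main(void)
{
    gl_TexCoord[0] = gl_MultiTexCoord0; //pass the quad's UVs through
    gl_Position = ftransform();         //standard fixed-function transform
}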






EDIT: This technique makes some assumptions about the shape of the scene, but most of the time we can assume that a surface that is flat/round/whatever at some point will stay that way a few pixels away.


ddlox
Hi "SSAO guy again" :-)

Your post does look interesting...
I've not implemented ssao yet, so do you mind telling me:
- what article/tutorial to read to get started

I presume this post comes after a first tutorial... is that right?

ArKano22
Quote:
Original post by ddlox
I've not implemented SSAO yet, so do you mind telling me what article/tutorial to read to get started? I presume this post comes after a first tutorial... is that right?


Hi!

Yes, to make use of what I've posted you need to grasp the basics first. There are several ways of implementing SSAO; the way I see it, there are three methods:

-sampling the zbuffer directly (the one I use)
-reconstructing position from depth and sampling in 3D (Crytek's method, I think; see the sketch below)
-methods that use the zbuffer + a normals buffer (high quality, usually slow)
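Just to illustrate the second approach: position reconstruction boils down to something like this (my own sketch, assuming a symmetric perspective projection and a linear depth value; this is not Crytek's actual code):

//tanHalfFov = vec2(tan(fovX*0.5), tan(fovY*0.5)), taken from your projection
vec3 positionFromDepth(vec2 uv, float linearZ, vec2 tanHalfFov)
{
    vec2 ndc = uv*2.0 - 1.0;                       //[0,1] -> [-1,1]
    return vec3(ndc*tanHalfFov*linearZ, -linearZ); //view space, -Z forward
}

You then take 3D sample offsets around that position, reproject them, and compare depths there.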

There are lots of tutorials and posts about it, just to get you started:
http://www.gamedev.net/community/forums/topic.asp?topic_id=463075
http://www.iquilezles.org/www/articles/ssao/ssao.htm
http://developer.nvidia.com/object/siggraph-2008-HBAO.html (this one is very high-quality but slow)

Jason Z
I have recently done some testing with a similar but slightly different method, and decided to steer away from it. The problem is that you make an assumption about the shape of the object in the background - that it is flat. If you had an object with more curvature in the background, you would see some fairly incorrect results as the scene moves around, since the linear extrapolation incorrectly assumes there is no additional curvature. The technique presented works pretty well with the Sponza scene, since the background is almost always flat (with circular pillars in the foreground).

Instead, most implementations that I have seen test each sample for being in range or not, and then either toss it out or set the result of that sample to a default value. This also removes the majority of the halos, and works for a scene of any shape. Have you tried something along these lines?

EDIT: One other thing I forgot to mention - this technique will have trouble at the edges of very sharp view-space angles, like the edges of a cylinder or sphere, due to the normal direction being perpendicular to the view direction. This leads to periodic sparkles at the edges when moving around. I see some of this in the sample images; does it change with camera position?

nicmenz
Great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). With high-frequency displacement maps you will almost never have a flat surface. rating++, though.

ArKano22
Quote:
Original post by nicmenz
Great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor. [...]


I think that it will also work with normal maps, since you're taking samples from "behind" and extrapolating them: the surface you're 'inventing' to fill the halos has the same appearance as the surfaces near the sampling point.

Jason Z (hi! :P) said that discarding samples that fall outside the sampling area removes halos, but that's not completely true. If you're using normals, it reduces halos because planar surfaces get no self-occlusion, which *hides* the halo (white vs. white); but when using only the depth buffer, it tends to make the problem worse. The second picture I posted shows that: there I'm tossing out samples, but white halos still appear.
That's because at some borders you get no occlusion from the objects behind, and if you toss the samples taken "below" the gap, you end up with 0/samples = 0.0 occlusion, so the border is completely white. (Sounds confusing, but I can give examples.)

Usually I don't use normal maps when implementing SSAO because I feel it's not worth it. Most of the time you can't really spot where SSAO is affecting the image, because of textures and lighting.

By the way, I tried another scene with a lot of curved shapes and it looks OK. Still, I saw the small "flickering" spots Jason Z mentioned, and I think they can be alleviated by making the sampling pattern more random. I will post a video to show the thing in action.

ArKano22
Here's a video showing this stuff in motion:

http://www.youtube.com/watch?v=8xk5Hr1KQFs&feature=player_embedded

I still think it looks pretty good, so for the moment I'll stick with this method to remove halos. If at some point I decide to use a normal buffer too, I'll see how it performs there.

dgreen02
Quote:
Original post by ArKano22
Here's a video showing this stuff in motion:
http://www.youtube.com/watch?v=8xk5Hr1KQFs&feature=player_embedded


Great video, keep up the good work! I'm going to drop this implementation into my project and see how it looks :-D

I tried your SSGI the other day; it seems as though the output turns to white as objects get close to the camera. What is your max_Z value set to in your perspective matrix? Is it around ~100? My game has it set to 3800. I think that has something to do with why I lose all SSAO/SSGI on near objects (< 150 units from the camera): they fade to all white/no occlusion.

Let me know if you can think of a fix off the top of your head; I will try to take some time and investigate later.

- Dan

ArKano22
Quote:
Original post by dgreen02
I tried your SSGI the other day; it seems as though the output turns to white as objects get close to the camera. What is your max_Z value set to in your perspective matrix? Is it around ~100? My game has it set to 3800. I think that has something to do with why I lose all SSAO/SSGI on near objects (< 150 units from the camera): they fade to all white/no occlusion.

Hi Dan, my max_Z is set to 1000. If you're using the OpenGL internal z-buffer, it's probable that your problem has to do with it not being linear: depth values are tightly packed near the camera and spread farther apart as you move away from it.
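The usual fix is to linearize the depth value first, something like this (a sketch assuming standard OpenGL depth conventions; the readDepth() in the shader above does essentially the same thing, just normalized differently):

//convert a hardware (hyperbolic) depth value d in [0,1] to linear view-space Z,
//given the near/far planes n and f:
float linearizeDepth(float d, float n, float f)
{
    float zNdc = d*2.0 - 1.0;                  //[0,1] -> NDC [-1,1]
    return (2.0*n*f) / (f + n - zNdc*(f - n)); //linear distance
}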

dgreen02
Quote:
Original post by ArKano22
Hi Dan, my max_Z is set to 1000. If you're using the OpenGL internal z-buffer, it's probable that your problem has to do with it not being linear: depth values are tightly packed near the camera and spread farther apart as you move away from it.


Ah, well I'm using Direct3D/HLSL... but you're right, I'm not using a linear Z buffer. That will probably fix it, thanks!

- Dan

Jason Z
Quote:
Original post by ArKano22
Jason Z (hi! :P) said that discarding samples that fall outside the sampling area removes halos, but that's not completely true. [...] By the way, I tried another scene with a lot of curved shapes and it looks OK. Still, I saw the small "flickering" spots Jason Z mentioned, and I think they can be alleviated by making the sampling pattern more random.
Hello Arkano. There are ways to deal with a tossed-out sample: you can either use a default occlusion value for it, or simply subtract one from the number of samples used. Both work pretty well, but you could even do something more elaborate based on the normal vector, as you mention.

In fact, that is the similar, related technique I mentioned in my last post: I used the normal vector to determine an extrapolated position that would appear behind the foreground object if it weren't there any more. However, the same issues apply to that type of solution as to your technique: there are sparkles around object edges which are really noticeable in certain situations.

Don't get me wrong, it is an interesting technique and a very simple one as well. However, everyone should understand the pros and cons of using it, just like any algorithm. If you can completely eliminate the sparkles without spending too much performance, I think you will be on to something good!

ArKano22
Quote:
Original post by Jason Z
There are ways to deal with a tossed-out sample: you can either use a default occlusion value for it, or simply subtract one from the number of samples used. [...] If you can completely eliminate the sparkles without spending too much performance, I think you will be on to something good!


Hi Jason!

I understand what you say :). Like all algorithms it has its strong points and its weak ones, and those should be known too. I wasn't able to completely eliminate the sparkles, but I will try a bit harder.

Using a default value does reduce halos, but only when there is no occlusion behind the foreground. However, there's one thing I don't get: I wasn't able to reduce halos by tossing out samples. Here's what I think causes it:

EDIT: The 0/4 at the right should be 0/2, sorry.
As you can see, the two sampling points get incorrect occlusion even when discarding samples. The right one registers 0 occlusion, which results in a white halo (you can see this effect in my pic), and the left one results in a faint dark halo (not very noticeable). As I said, using normals alleviates this problem because the background, if it's flat, has 0 occlusion at every pixel, which hides the white halo. But near the edge the occlusion values are still incorrect. This is what I think; maybe I'm missing something... :S

Jason Z
Quote:
Original post by ArKano22
As you can see, the two sampling points get incorrect occlusion even when discarding samples. [...] This is what I think; maybe I'm missing something... :S

You are correct about tossing out samples: it makes the whole thing more susceptible to incorrect occlusion calculations if the whole sampling sphere is not sampled properly (i.e. a portion is thrown out). But regarding your technique - what would happen if the background object were a sphere? The assumed linearity would produce a completely incorrect occlusion calculation, which is actually where your sparkles come from.

I suppose you could do some type of min/max operation to reduce the sparkles, or perhaps in your blur pass you could check for discontinuities and attenuate the value if it is much larger than the surrounding area. Are you doing blurring right now, or no?

ArKano22
Quote:
Original post by Jason Z
I suppose you could do some type of min/max operation to reduce the sparkles, or perhaps in your blur pass you could check for discontinuities and attenuate the value if it is much larger than the surrounding area. Are you doing blurring right now, or no?


Well, if I were sampling inside a hemisphere oriented along the pixel normal, then discarding samples would work, I think. In "2D", however, that is not the case.

About having a sphere behind the foreground objects: the extrapolation would yield an incorrect value, but still one pretty close to the correct one, depending on the size of the sphere and the sampling radius. The worst case I noticed is when there is yet another depth discontinuity behind the foreground. Then the extrapolation is completely wrong, and I think that case has no solution at all (not without depth peeling).

Right now I'm not blurring the result; I just take 32 samples and use a good jittering texture. Could you elaborate a bit more on the min/max operation you mention?

Jason Z
It's just off the top of my head, but the blur could be something along the lines of a bilateral filter where the range weights are based on the depth discontinuities. For the samples whose range weights are above a certain threshold, you would compare the occlusion values calculated at those pixels, and then eliminate some of the outliers with a min/max allowed deviation. If there is a large deviation in occlusion values but not a large deviation in depth values, that can be used as a cue to remove the implausible occlusion.
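In rough GLSL, the range-weighting part might look something like this (completely untested, and the texture names and constants here are placeholders rather than anything from a real implementation):

uniform sampler2D aoTex;    //raw AO from the SSAO pass
uniform sampler2D depthTex; //the same linear depth the SSAO pass used
uniform vec2 texelSize;     //1.0 / screen resolution

float bilateralAO(vec2 uv)
{
    float centerDepth = texture2D(depthTex, uv).x;
    float sum = 0.0;
    float wsum = 0.0;
    for (int x = -2; x <= 2; ++x)
    for (int y = -2; y <= 2; ++y)
    {
        vec2 offs = vec2(float(x), float(y))*texelSize;
        float d = texture2D(depthTex, uv + offs).x;
        //range weight: samples across a depth discontinuity get ~zero
        //weight, so implausible occlusion doesn't bleed across edges
        float w = 1.0/(0.001 + abs(centerDepth - d)*100.0);
        sum  += texture2D(aoTex, uv + offs).x*w;
        wsum += w;
    }
    return sum/wsum; //wsum always includes the center weight, never zero
}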

That's my theory, but I haven't done any testing with it yet. In approximately two weeks I'll have a bunch more time to spend on it, so I'll check back in later on...

NiGoea
Hi ArKano22!

I hope you read this :D

First of all, I've read all your topics about SSAO and SSGI, and I love them!

I just want to say I don't totally agree with your implementation on this point: why should a big flat surface be half-shadowed, and edges be brighter?

I don't think this happens in reality, when it comes to boxes.

Do you think this implementation increases the final image quality?

Another question: why is the surface in the background dark?

ArKano22
Quote:
Original post by NiGoea
Why should a big flat surface be half-shadowed, and edges be brighter? I don't think this happens in reality, when it comes to boxes. Do you think this implementation increases the final image quality? Another question: why is the surface in the background dark?


Hi NiGoea! The SSAO I show here removes halos around objects, so yes, I think it improves image quality quite a bit. The half-shadowed flat surfaces are not an artifact of my implementation; all z-buffer-only SSAO implementations suffer from it. On an inclined plane, half the pixels around each sample are occluders and the other half are occludees, so the occlusion is occluders/(occluders+occludees) = 1/2 (approx.). This occlusion value changes depending on the view angle.

That is also what causes the bright edges: they have 0 occlusion. You can remove all these artifacts by taking per-pixel normals into account, but that makes the technique more expensive. Nvidia has a really good implementation of this using horizon occlusion. I used normals too, in another implementation I posted here a long time ago.

Another way to reduce those artifacts while using only the z-buffer is to tweak the contrast of the SSAO image.
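For example, a trivial power-curve remap at the end of the shader (the exponent here is just an arbitrary value to tune; above 1.0 darkens midtones, below 1.0 brightens them):

float bright = clamp(1.0 - ao/32.0, 0.0, 1.0); //AO term from the shader above
bright = pow(bright, 2.0);                     //steepen the response curve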

About the dark surface in the background: I'm not really sure which surface you are referring to. Could you point it out more clearly?

NiGoea
Hi!!

I completely understood your explanation. I figured it out by myself a few days ago, when I started using SSAO. I just thought you were using a normal buffer and still getting that effect.
Got it.

Personally, I don't like that trade-off: it doesn't make sense that the shadowing of a wall depends on the angle I'm looking at it from.
What do you think?

bye!

ArKano22
Quote:
Original post by NiGoea
Personally, I don't like that trade-off: it doesn't make sense that the shadowing of a wall depends on the angle I'm looking at it from. What do you think?


Yep, I don't like to use the normal buffer because I don't want to spend much time calculating SSAO in my games.

Again, the view-dependent shading is not something I chose to implement; it's an issue almost all SSAO implementations have, even some with normal buffers. In the z-buffer-only case, think about what happens when you look straight at a flat surface: all samples have 0 occlusion. But the more you rotate the view, the more inclined the flat surface becomes, and the more self-occlusion you get. Normals prevent this on flat surfaces, but the occlusion you get in correct areas is still view-dependent.

The Nvidia implementation I mentioned should be view-independent though, because of the way they use the normals to calculate occlusion. Here it is:
http://developer.nvidia.com/object/siggraph-2008-HBAO.html

Good luck with SSAO! It's a cool effect to have :)

NiGoea
Quote:
Original post by ArKano22
Yep, I don't like to use the normal buffer because I don't want to spend much time calculating SSAO in my games.


OK, but that way you will end up with walls that change their shadowing as the player rotates...

Quote:
Original post by ArKano22
The Nvidia implementation I mentioned should be view-independent though, because of the way they use the normals to calculate occlusion. Here it is:
http://developer.nvidia.com/object/siggraph-2008-HBAO.html


I don't know what they mean by "horizon mapping". Do you have any code?

---

Anyway, SSAO turned out to be SLOW.
The only way to get decent speed is to use a buffer 3-4 times smaller than the frame buffer. But that gives acceptable results only if you don't use (material) normal maps to perturb your normal buffer; otherwise the resulting high-frequency normal buffer produces high-frequency SSAO values, which need a full-resolution SSAO buffer => SLOW.

Which video card do you have, and how slow is your implementation?

THANKS!
