ArKano22

SSAO: no halo artifacts.



Hi, I'm the SSAO guy again :P. I've come up with a method that removes SSAO halo artifacts almost to the point of disappearing. I highly doubt it hasn't been done before, but it works so well that if it has, I don't understand why it isn't being used. I call it "depth extrapolation" and it's pretty simple.

Halos appear because of big discontinuities in the depth buffer. If your occlusion function decays when depth starts to increase, the halos will be white; if not, they will be black. These artifacts would disappear if we could use two depth buffers: one for the 'first layer' of depth, and a second one containing a second layer. That can be achieved with depth peeling, but it is a very expensive technique since you have to render the scene several times. So we can't get that second depth, but we can guess it: when we encounter a discontinuity, we reverse the sampling 'direction' and swap the depths in the comparison, effectively performing a linear extrapolation. Some images will help.

How do we know when we have encountered a discontinuity? When the depth difference is "too big". In my implementation, "too big" means > 0, and to get the final occlusion at the discontinuity border I interpolate the original value with the corrected value. The bigger the gap, the more "correction" we apply. This removes halos almost completely, no matter how hardcore they were in your implementation.

Here's an example using my "gauss SSAO" (see http://www.gamedev.net/community/forums/topic.asp?topic_id=550452), which now looks almost like regular AO while using only the depth buffer. The additional cost of this technique depends on how you take your samples. The worst case multiplies the number of samples taken by roughly 1.5; the best case is when your sampling is not randomized and you can store samples the first time you take them. Then, with some extra math for the extrapolation, the technique is almost free (no additional texture fetches). Halos in SSAO are gone forever! Wohooo!

[Edited by - ArKano22 on October 18, 2009 4:32:03 PM]
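The swap-and-reverse step can be sketched outside the shader. Here is a minimal Python illustration (all names are mine, and the falloff constants are just toy values in the spirit of the shader): when the forward sample at +d is unusable, the hidden 'second layer' depth is guessed by a linear extrapolation through the center pixel, depth(+d) ≈ 2*center - depth(-d), and comparing against that guess turns out to be exactly the same as swapping the arguments of the comparison.

```python
import math

def compare(d1, d2):
    """Toy version of the Gaussian falloff on the depth difference."""
    diff = (d1 - d2) * 100.0
    return math.exp(-2.0 * (diff - 0.2) ** 2 / 2.0 ** 2)

def extrapolated_occlusion(center, back):
    """When the forward sample at +d hits a discontinuity, guess the hidden
    'second layer' depth by extrapolating through the center pixel:
    depth(+d) ~ 2*center - depth(-d), then compare against that guess."""
    guessed_fwd = 2.0 * center - back
    return compare(center, guessed_fwd)

# Swapping the arguments gives the same value, because
# center - (2*center - back) == back - center:
assert abs(extrapolated_occlusion(0.5, 0.495) - compare(0.495, 0.5)) < 1e-12
```

That identity is why the extrapolation can be almost free: the shader never needs to compute the guessed depth explicitly, it just reverses the sampling direction and swaps the compared depths.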

The same image as before, but without halo removal:



Gauss SSAO code complete with halo removal:

uniform sampler2D som;  // depth texture
uniform sampler2D rand; // random texture
uniform vec2 camerarange = vec2(1.0, 1024.0);

float pw = 1.0/800.0*0.5;
float ph = 1.0/600.0*0.5;

float readDepth(in vec2 coord)
{
    if (coord.x < 0.0 || coord.y < 0.0) return 1.0;
    float nearZ = camerarange.x;
    float farZ  = camerarange.y;
    float posZ  = texture2D(som, coord).x;
    return (2.0 * nearZ) / (nearZ + farZ - posZ * (farZ - nearZ));
}

float compareDepths(in float depth1, in float depth2, inout int far)
{
    float diff = (depth1 - depth2) * 100.0; // depth difference, scaled up
    float gdisplace = 0.2;                  // gauss bell center
    float garea = 2.0;                      // gauss bell width

    // reduce left bell width to avoid self-shadowing
    if (diff < gdisplace) {
        garea = 0.1;
    } else {
        far = 1; // big gap: flag the discontinuity
    }

    // approximately exp(-2*(diff-gdisplace)^2 / garea^2)
    float gauss = pow(2.7182, -2.0 * (diff - gdisplace) * (diff - gdisplace) / (garea * garea));

    return gauss;
}

float calAO(float depth, float dw, float dh)
{
    float temp  = 0.0;
    float temp2 = 0.0;
    float coordw  = gl_TexCoord[0].x + dw / depth;
    float coordh  = gl_TexCoord[0].y + dh / depth;
    float coordw2 = gl_TexCoord[0].x - dw / depth;
    float coordh2 = gl_TexCoord[0].y - dh / depth;

    if (coordw < 1.0 && coordw > 0.0 && coordh < 1.0 && coordh > 0.0) {
        vec2 coord  = vec2(coordw,  coordh);
        vec2 coord2 = vec2(coordw2, coordh2);
        int far = 0;
        temp = compareDepths(depth, readDepth(coord), far);

        // DEPTH EXTRAPOLATION: reverse the sampling direction and swap
        // the compared depths to guess the occluded 'second layer' depth.
        if (far > 0) {
            temp2 = compareDepths(readDepth(coord2), depth, far);
            temp += (1.0 - temp) * temp2;
        }
    }

    return temp;
}

void main(void)
{
    // randomization texture:
    vec2 fres = vec2(20.0, 20.0);
    vec3 random = texture2D(rand, gl_TexCoord[0].st * fres.xy).xyz;
    random = random * 2.0 - vec3(1.0);

    // initialize stuff:
    float depth = readDepth(gl_TexCoord[0].st);
    float ao = 0.0;

    for (int i = 0; i < 4; ++i)
    {
        // accumulate occlusion from 8 samples around the pixel:
        ao += calAO(depth,  pw,  ph);
        ao += calAO(depth,  pw, -ph);
        ao += calAO(depth, -pw,  ph);
        ao += calAO(depth, -pw, -ph);

        ao += calAO(depth,  pw * 1.2, 0.0);
        ao += calAO(depth, -pw * 1.2, 0.0);
        ao += calAO(depth, 0.0,  ph * 1.2);
        ao += calAO(depth, 0.0, -ph * 1.2);

        // sample jittering:
        pw += random.x * 0.0007;
        ph += random.y * 0.0007;

        // increase sampling area:
        pw *= 1.7;
        ph *= 1.7;
    }

    // final values, some adjusting:
    vec3 finalAO = vec3(1.0 - (ao / 32.0));

    gl_FragColor = vec4(0.3 + finalAO * 0.7, 1.0);
}
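For reference, the readDepth formula above is the standard inversion of a perspective depth value. A small Python check (values chosen by me, not from the post) confirms it recovers eye-space distance as a fraction of the far plane. One caveat: this inversion is exact for NDC depth in [-1, 1], while a depth texture returns values in [0, 1], so strictly posZ would need a posZ*2.0-1.0 remapping first; feeding [0, 1] directly still yields a monotonic depth that works for AO purposes.

```python
def ndc_depth(ze, n, f):
    """Forward map: positive eye-space distance to NDC depth in [-1, 1]."""
    return (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * ze)

def read_depth(z_ndc, n, f):
    """The shader's readDepth formula applied to an NDC depth value."""
    return (2.0 * n) / (n + f - z_ndc * (f - n))

n, f = 1.0, 1024.0   # the shader's camerarange
ze = 100.0           # an arbitrary eye-space distance
# The round trip recovers ze as a fraction of the far plane:
assert abs(read_depth(ndc_depth(ze, n, f), n, f) - ze / f) < 1e-9
```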






EDIT: This technique makes some assumptions about the shape of the scene, but most of the time we can assume that a surface that is flat/round/whatever at some point will stay that way a few pixels away.

[Edited by - ArKano22 on October 18, 2009 2:07:58 PM]

Hi "SSAO guy again" :-)

Your post does look interesting...
I've not implemented SSAO yet, so do you mind telling me what article/tutorial I should read to get started?

I presume this post comes after a first tutorial... is that right?

Quote:
Original post by ddlox
Hi "SSAO guy again" :-)

Your post does look interesting...
I've not implemented SSAO yet, so do you mind telling me what article/tutorial I should read to get started?

I presume this post comes after a first tutorial... is that right?


Hi!

Yes, to make use of what I've posted you need to grasp the basics first. There are several ways of implementing SSAO; to me there are three methods:

- sampling the z-buffer directly (the one I use)
- reconstructing position from depth and sampling in 3D (Crytek's method, I think)
- methods that use a z-buffer + normals buffer (high quality, usually slow)

There are lots of tutorials and posts about it. Just to get you started:
http://www.gamedev.net/community/forums/topic.asp?topic_id=463075
http://www.iquilezles.org/www/articles/ssao/ssao.htm
http://developer.nvidia.com/object/siggraph-2008-HBAO.html (this one is very high-quality but slow)

I have recently done some testing with a similar but slightly different method, but decided to steer away from it. The problem is that you make an assumption about the shape of the object in the background: that it is flat. If you had an object with more curvature in the background, you would see some fairly incorrect results as the scene moves around, since the linear extrapolation incorrectly assumes there is no additional curvature. The technique presented works pretty well with the Sponza scene, since the background is almost always flat (with circular pillars in the foreground).

Instead, most implementations that I have seen test each sample for being in range or not, and then either toss it out or set the result of that sample to a default value. This also removes the majority of the halos, and works for any scene shape. Have you tried something along these lines?
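The range-check approach can be sketched in Python (a toy model; occlusion_term, max_range, and default_occlusion are my own names and values, not from any particular implementation):

```python
def occlusion_term(center, sample):
    """Toy falloff: some occlusion when the sample sits slightly in front."""
    return max(0.0, min(1.0, (center - sample) * 100.0))

def ao_with_range_check(depth_center, sample_depths, max_range=0.01,
                        default_occlusion=0.0):
    """Average occlusion over samples, replacing out-of-range samples
    (likely a different surface) with a neutral default value."""
    total = 0.0
    for d in sample_depths:
        if abs(depth_center - d) > max_range:
            total += default_occlusion  # toss / neutralize the sample
        else:
            total += occlusion_term(depth_center, d)
    return total / len(sample_depths)
```

Setting default_occlusion to zero treats out-of-range samples as unoccluded, which is the behavior ArKano22 argues can itself whiten borders when only a depth buffer is available.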

EDIT: One other thing I forgot to mention: this technique will have trouble at the edges of very sharp view-space angles, like the silhouette of a cylinder or sphere, due to the normal direction being perpendicular to the view direction. This leads to periodic sparkles at the edges when moving around. I see some of this in the sample images; does it change with camera position?

great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). with high frequency displacement maps, you will almost never have a flat surface. rating++, though.

Quote:
Original post by nicmenz
great post, but assuming that the surface is flat seems to be a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). with high frequency displacement maps, you will almost never have a flat surface. rating++, though.


I think it will also work with normal maps, since you're taking samples from 'behind' and extrapolating them; that means the surface you're 'inventing' to fill the halos has the same appearance as the surfaces near the sampling point.

Jason Z (hi! :P) said that discarding samples outside the sampling area removes halos, but that's not completely true. If you're using normals it reduces halos, because planar surfaces get no self-occlusion and that *hides* the halo (white vs. white), but when using only the depth buffer it tends to make the problem worse. The second picture I posted shows that: there I'm tossing out samples, but white halos still appear. That's because at some borders you get no occlusion from the objects behind, and if you toss the samples taken 'below' the gap, you end up with (0 / samples = 0.0) occlusion and the border is completely white. (Sounds confusing, but I can give examples.)

Usually I don't use normal maps when implementing SSAO because I feel it's not worth it. Most of the time you can't really spot where SSAO is affecting the image because of textures and lighting.

By the way, I tried another scene with a lot of curved shapes and it looks OK. Still, I saw those small 'flickering' spots Jason Z mentioned, and I think they can be alleviated by making the sampling pattern more random. I will post a video to show the thing in action.

Here's a video showing this stuff in motion:



I still think it looks pretty good, so for the moment I'll stick with this method to remove halos. If at some point I decide to use a normal buffer too, I'll see how it performs there.

Quote:
Original post by ArKano22
Here's a video showing this stuff in motion:



I still think it looks pretty good, so for the moment I'll stick with this method to remove halos. If at some point I decide to use a normal buffer too, I'll see how it performs there.


Great video, keep up the good work! I'm going to drop this implementation into my project and see how it looks :-D

I tried your SSGI the other day; it seems the output turns to white as objects get close to the camera. What is your max_Z value set to in your perspective matrix? Is it around ~100? My game has it set to 3800. I think that has something to do with why I lose all SSAO/SSGI on near objects (< 150 units from the camera): they fade to all white / no occlusion.
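One possible explanation, sketched numerically (the numbers are mine, not Dan's actual setup): the shader linearizes depth to roughly ze/far and then divides the sample offsets by that value, so a much larger far plane shrinks the linearized depth of near objects and inflates the sampling radius, pushing samples off-surface or off-screen and washing the occlusion out to white.

```python
def linear_depth(ze, f):
    """Eye-space distance as a fraction of the far plane,
    approximately what the shader's readDepth returns."""
    return ze / f

def sample_radius(base_offset, ze, f):
    """The shader scales offsets as dw / depth, so the effective
    sampling radius grows in proportion to the far plane."""
    return base_offset / linear_depth(ze, f)

pw = 1.0 / 800.0 * 0.5
# Same object 150 units from the camera, two different far planes:
r_small_far = sample_radius(pw, 150.0, 1024.0)
r_big_far   = sample_radius(pw, 150.0, 3800.0)
assert r_big_far / r_small_far > 3.5   # ~3.7x wider sampling radius
```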

Let me know if you can think of a fix off the top of your head; I will try to take some time and investigate later.

- Dan
