SSAO no halo artifacts.

20 comments, last by NiGoea 14 years, 6 months ago
Quote:Original post by dgreen02
Quote:Original post by ArKano22
Here's a video showing this stuff in motion:

[video]

I still think it looks pretty good, so for the moment I'll stick with this method to remove halos. If at some point I decide to use a normal buffer too, then I'll see how it performs there.


Great video, keep up the good work! I'm going to drop this implementation into my project and see how it looks :-D

I tried your SSGI the other day; it seems as though the output turns white as objects get close to the camera. What is your max_Z value set to in your perspective matrix? Is it around ~100? My game has it set to 3800. I think that has something to do with why I lose all SSAO/SSGI on near objects (< 150 units from the camera); they fade to all white/no occlusion.

Let me know if you can think of a fix off the top of your head; I will try to take some time and investigate later.

- Dan


Hi Dan, my max_Z is set to 1000. If you're using the OpenGL internal z-buffer, then your problem probably has to do with it not being linear: depth values are tightly packed together near the camera and spread apart as you move away from it.
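For reference, a minimal GLSL sketch of that linearization, assuming a standard OpenGL perspective projection (depthTex, zNear, and zFar are hypothetical names; the planes must match the ones used to build the projection matrix):

uniform sampler2D depthTex; // hardware depth buffer
uniform float zNear;        // near plane, e.g. 1.0
uniform float zFar;         // far plane, e.g. 1000.0

// Convert a non-linear depth-buffer sample to linear view-space depth.
float linearDepth(vec2 uv)
{
    float d   = texture2D(depthTex, uv).r; // depth in [0,1]
    float ndc = d * 2.0 - 1.0;             // back to NDC in [-1,1]
    return (2.0 * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear));
}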
Quote:Original post by ArKano22
Quote:Original post by dgreen02
Quote:Original post by ArKano22
Here's a video showing this stuff in motion:

[video]

I still think it looks pretty good, so for the moment I'll stick with this method to remove halos. If at some point I decide to use a normal buffer too, then I'll see how it performs there.


Great video, keep up the good work! I'm going to drop this implementation into my project and see how it looks :-D

I tried your SSGI the other day; it seems as though the output turns white as objects get close to the camera. What is your max_Z value set to in your perspective matrix? Is it around ~100? My game has it set to 3800. I think that has something to do with why I lose all SSAO/SSGI on near objects (< 150 units from the camera); they fade to all white/no occlusion.

Let me know if you can think of a fix off the top of your head; I will try to take some time and investigate later.

- Dan


Hi Dan, my max_Z is set to 1000. If you're using the OpenGL internal z-buffer, then your problem probably has to do with it not being linear: depth values are tightly packed together near the camera and spread apart as you move away from it.


Ah, well I'm using Direct3D/HLSL... but you're right, I'm not using a linear Z buffer. That will probably fix it, thanks!

- Dan
Quote:Original post by ArKano22
Quote:Original post by nicmenz
Great post, but assuming that the surface is flat seems like a major disadvantage, especially when you include displacement maps and normal maps in the calculation of the occlusion factor (which greatly improves the appearance and detail). With high-frequency displacement maps, you will almost never have a flat surface. rating++, though.


I think that if you have normal maps it will also work, since you're taking samples from "behind" and extrapolating them. That means the surface you're "inventing" to fill the halos has the same appearance as the surfaces near the sampling point.
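Roughly, the "inventing a surface" step could look like this (a hypothetical sketch, not the actual shader; gapThreshold is a made-up tolerance and linearDepth is the helper sketched earlier):

// While shading a background pixel, a sample that lands on a foreground
// occluder (large depth gap) is replaced by a flat-surface guess:
// extrapolate through the center from the opposite sample of the kernel.
float sampleDepth = linearDepth(uv + offset);
if (centerDepth - sampleDepth > gapThreshold)  // sample hit the foreground
{
    float behind = linearDepth(uv - offset);   // opposite side, likely background
    sampleDepth = 2.0 * centerDepth - behind;  // linear extrapolation through the center
}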

Jason Z (hi! :P) said that not taking samples into account when they are outside the sampling area removes halos, but that's not completely true. If you're using normals, it reduces halos because planar surfaces get no self-occlusion, and that *hides* the halo (white vs. white); but when using only the depth buffer it tends to make the problem worse. The second picture I posted shows that: there I'm tossing out samples, but white halos still appear.
That's because at some borders you get no occlusion from the objects behind, and if you toss the samples taken "below" the gap, you end up with 0/samples = 0.0 occlusion, so the border is completely white (sounds confusing, so here's an example below).
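Schematically, the tossing-out version does something like this (made-up names throughout: offsets, gapThreshold, epsilon; linearDepth as in the earlier snippet; this is the inner part of the AO shader):

uniform vec2 offsets[32];   // jittered sampling kernel

float occluders = 0.0;
float valid = 0.0;
for (int i = 0; i < 32; ++i)
{
    float sampleDepth = linearDepth(uv + offsets[i]);
    float diff = centerDepth - sampleDepth;   // > 0: sample is in front of this pixel
    if (abs(diff) > gapThreshold) continue;   // outside the sampling area: toss it
    occluders += step(epsilon, diff);         // count occluding samples
    valid += 1.0;
}
// At a border, the foreground samples all get tossed and the flat background
// contributes no occluders, so occlusion = 0/valid = 0.0: the white halo described above.
float occlusion = (valid > 0.0) ? occluders / valid : 0.0;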

Usually I don't use normal maps when implementing SSAO because I feel it's not worth it. Most of the time you can't really spot where SSAO is affecting the image because of textures and lighting.

Btw, I tried another scene with a lot of curved shapes and it looks OK. Still, I saw the small "flickering" spots Jason Z mentioned, and I think they can be alleviated by making the sampling pattern more random. I will post a video to show the thing in action.
Hello Arkano. There are ways to deal with tossing out a sample: you can either use a default occlusion value for that sample, or simply subtract one from the number of samples used. Both cases work pretty well, but you could even do something more elaborate based on the normal vector, as you mention.
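In terms of the loop sketched a few posts up, the two remedies would be, roughly (defaultOcclusion is a made-up knob, e.g. 0.5):

// Remedy A: keep the sample, but give it a neutral default occlusion value.
occluders += (abs(diff) > gapThreshold) ? defaultOcclusion : step(epsilon, diff);
valid += 1.0;

// Remedy B: drop it from numerator and denominator alike
// ("subtract one from the number of samples used").
if (abs(diff) <= gapThreshold)
{
    occluders += step(epsilon, diff);
    valid += 1.0;
}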

In fact, that is similar to the related technique I mentioned in my last post: I used the normal vector to determine an extrapolated position that would appear behind the foreground object if it weren't there any more. However, the same issues I mentioned about your technique apply to this type of solution: there are sparkles around the object edges which are really noticeable in certain situations.

Don't get me wrong, it is an interesting technique and a very simple one as well. However, everyone should understand the pros and cons of using it, just like with all algorithms. If you can completely eliminate the sparkles without spending too much performance, then I think you will be on to something good!

Quote:Original post by Jason Z
Hello Arkano. There are ways to deal with tossing out a sample: you can either use a default occlusion value for that sample, or simply subtract one from the number of samples used. Both cases work pretty well, but you could even do something more elaborate based on the normal vector, as you mention.

In fact, that is similar to the related technique I mentioned in my last post: I used the normal vector to determine an extrapolated position that would appear behind the foreground object if it weren't there any more. However, the same issues I mentioned about your technique apply to this type of solution: there are sparkles around the object edges which are really noticeable in certain situations.

Don't get me wrong, it is an interesting technique and a very simple one as well. However, everyone should understand the pros and cons of using it, just like with all algorithms. If you can completely eliminate the sparkles without spending too much performance, then I think you will be on to something good!


Hi Jason!

I understand what you say :). Like all algorithms it has its strong points and its weak ones, and those should be known too. I wasn't able to completely eliminate the sparkles, but I will try a bit harder.

Using a default value does reduce halos, but only when there is no occlusion behind the foreground. However, there's one thing I don't get: I wasn't able to reduce halos by tossing out samples. Here's the reason I think it happens:

[diagram]

EDIT: The 0/4 at the right should be 0/2, sorry.
As you can see, the two sampling points get incorrect occlusion, even when discarding samples. The right one registers 0 occlusion, which results in a white halo (you can see this effect in my pic), and the left one results in a faint dark halo (not very noticeable). As I said, using normals alleviates this problem because the background, if it's flat, has 0 occlusion at every pixel, which hides the white halo. But the occlusion values near the edge are still incorrect. This is what I think; maybe I'm missing something... :S
Quote:Original post by ArKano22
Hi Jason!

I understand what you say :). Like all algorithms it has its strong points and its weak ones, and those should be known too. I wasn't able to completely eliminate the sparkles, but I will try a bit harder.

Using a default value does reduce halos, but only when there is no occlusion behind the foreground. However, there's one thing I don't get: I wasn't able to reduce halos by tossing out samples. Here's the reason I think it happens:

[diagram]

EDIT: The 0/4 at the right should be 0/2, sorry.
As you can see, the two sampling points get incorrect occlusion, even when discarding samples. The right one registers 0 occlusion, which results in a white halo (you can see this effect in my pic), and the left one results in a faint dark halo (not very noticeable). As I said, using normals alleviates this problem because the background, if it's flat, has 0 occlusion at every pixel, which hides the white halo. But the occlusion values near the edge are still incorrect. This is what I think; maybe I'm missing something... :S

You are correct about tossing out samples: it makes the whole thing more susceptible to incorrect occlusion calculations if the whole sampling sphere is not sampled properly (i.e. a portion is thrown out). But regarding your technique: what would happen if the background object were a sphere? The assumed linearity would cause a completely incorrect occlusion calculation, which is actually where your sparkles are coming from.

I suppose you could do some type of min/max operation to reduce the sparkles, or perhaps in your blur pass you could check for discontinuities and attenuate the value if it is larger than the surrounding area by a big amount. Are you doing any blurring right now?
Quote:Original post by Jason Z
You are correct about tossing out samples: it makes the whole thing more susceptible to incorrect occlusion calculations if the whole sampling sphere is not sampled properly (i.e. a portion is thrown out). But regarding your technique: what would happen if the background object were a sphere? The assumed linearity would cause a completely incorrect occlusion calculation, which is actually where your sparkles are coming from.

I suppose you could do some type of min/max operation to reduce the sparkles, or perhaps in your blur pass you could check for discontinuities and attenuate the value if it is larger than the surrounding area by a big amount. Are you doing any blurring right now?


Well, if I were sampling inside a hemisphere oriented using the pixel normal, then discarding samples would work, I think. However, in "2D" that is not the case.

As for having a sphere behind the foreground object: the extrapolation would yield an incorrect value, but still one pretty close to the correct value, depending on the size of the sphere and the sampling radius. However, the worst case I noticed is when there is yet another depth discontinuity behind the foreground. Then the extrapolation is completely wrong, and I think that case has no solution at all (not without depth peeling).

Right now I'm not blurring the result; I just take 32 samples and use a good jittering texture. Could you elaborate a bit more on the min/max operation you mention?
It's just off the top of my head, but the blur could be something along the lines of a bilateral filter where the range weights are based on the depth discontinuities. Among the samples whose range weights are above a certain threshold, you would compare the occlusion values calculated at those pixels; then you could eliminate some of the outliers with a min/max deviation allowed. If there is a large deviation in occlusion values but not a large deviation in the depth values, that can be used as a cue to remove the implausible occlusion.
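As a rough sketch of that combination, written as a 1D pass (aoTex, depthTex, texelStep, and depthSharpness are made-up names, not tested code):

uniform sampler2D aoTex;      // raw per-pixel occlusion
uniform sampler2D depthTex;   // linear depth
uniform vec2  texelStep;      // one texel along the blur axis
uniform float depthSharpness; // weight falloff per unit of depth difference

float filteredAO(vec2 uv)
{
    float centerDepth = texture2D(depthTex, uv).r;
    float sum = 0.0, wsum = 0.0;
    float aoMin = 1.0, aoMax = 0.0;

    for (int i = -3; i <= 3; ++i)
    {
        vec2  suv = uv + float(i) * texelStep;
        float d   = texture2D(depthTex, suv).r;
        float ao  = texture2D(aoTex, suv).r;

        // range weight from the depth discontinuity
        float w = exp(-abs(d - centerDepth) * depthSharpness);

        // track min/max occlusion among depth-consistent neighbors only
        if (w > 0.5) { aoMin = min(aoMin, ao); aoMax = max(aoMax, ao); }

        sum  += ao * w;
        wsum += w;
    }

    // A large AO deviation without a matching depth deviation is implausible:
    // clamp the blurred value to the depth-consistent neighborhood's range.
    return clamp(sum / wsum, aoMin, aoMax);
}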

That's my theory, but I haven't done any testing with it at the moment. In approximately 2 weeks I'll have a bunch more time to spend on it, so I'll check back in later on...
Quote:Original post by ArKano22
[image]

Hi ArKano22!

I hope you will read this :D

First of all, I've read all your topics about SSAO and SSGI, and I love them!

I just want to say that I do not totally agree with your implementation on one point: why should a big flat surface be half-shadowed, and edges be brighter?

I don't think this happens in reality, when it comes to boxes.

Do you think this implementation increases the final image quality?

Another question: why is the surface in the background dark?

Quote:Original post by NiGoea
Quote:Original post by ArKano22
[image]

Hi ArKano22!

I hope you will read this :D

First of all, I've read all your topics about SSAO and SSGI, and I love them!

I just want to say that I do not totally agree with your implementation on one point: why should a big flat surface be half-shadowed, and edges be brighter?

I don't think this happens in reality, when it comes to boxes.

Do you think this implementation increases the final image quality?

Another question: why is the surface in the background dark?


Hi NiGoea! The SSAO I show here removes halos around objects, so yes, I think it improves image quality quite a bit. The half-shadowed flat surfaces are not an artifact of my implementation; all z-buffer-only SSAO implementations suffer from it. It happens because on an inclined plane, half the pixels around each sample are occluders and the other half are occludees, so the occlusion is occluders/(occluders+occludees) = 1/2 (approx.). This occlusion value changes depending on the view angle.

That also causes the bright edges, because they have 0 occlusion. You can remove all these artifacts by taking per-pixel normals into account, but that makes the technique more expensive. Nvidia had a really good implementation of this using horizon occlusion. I used normals too, for another implementation I posted here a long time ago.

Another way of removing those artifacts, if you want to use only the z-buffer, is tweaking the contrast of the SSAO image, for example with a remap like the one below.
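One possible remap (aoTex is a made-up name for the raw SSAO target; the 0.5 baseline is the flat-surface occlusion derived above):

float occ = texture2D(aoTex, uv).r;        // raw z-only occlusion; flat surfaces ~0.5
occ = clamp((occ - 0.5) * 2.0, 0.0, 1.0);  // 0.5 -> no occlusion, 1.0 stays fully occluded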

About the dark surface in the background, I'm not really sure which surface you are referring to. Could you point it out more clearly?
Hi!!

I completely understood your explanation. I figured it out by myself a few days ago, when I started using SSAO. I just thought you were using a normal buffer and still getting that effect.
Got it.

Personally, I don't like that choice: it doesn't make sense that the shadowing of a wall depends on the angle I'm looking at it from.
What do you think?

bye!
