
# SSAO problem with self-occlusion


## Recommended Posts

Hey guys,

I am struggling with SSAO's self-occlusion and can't get rid of it. Just look at the code:
```hlsl
float occlusion = 0.0f;
float3 pixelNormal = tex2D(normalsAndDepthsSampler, input.texCoord).xyz;
float pixelDepth = tex2D(normalsAndDepthsSampler, input.texCoord).w;

for (int i = 0; i < 2; i++)
{
    float3 randomVector = tex2D(randomVectorsSampler, input.texCoord * (7.0f + (float)i)).xyz;
    randomVector = randomVector * 2.0f - 1.0f;

    for (int j = 0; j < 8; j++)
    {
        float3 offsetVector = reflect(kernelVectors[j], randomVector) * (radius / pixelDepth);
        float3 samplePosition = float3(input.texCoord, pixelDepth) - offsetVector;
        float sampleDepth = tex2D(normalsAndDepthsSampler, samplePosition.xy).w;

        occlusion += occlusionFunction1(samplePosition.z - sampleDepth);
    }
}
```
This code does a typical SSAO with self-occlusion (ssao1.png). Depths are stored in view space, and so are the normals (note that I do not use the normals here yet). The idea is pretty simple. We have a point at depth pixelDepth, we generate a random offsetVector, and apply that offset to the position at pixelDepth (stored in samplePosition). Next, we sample the depth at the offset location. So the offset vector is a sort of "arrow" pointing from samplePosition to some random position in space. Finally, occlusionFunction checks whether this "arrow" ends up above the surface (samplePosition.z < sampleDepth; the smaller the value, the closer to the camera we are) or below it (samplePosition.z > sampleDepth).
The occlusion function works like this:

```hlsl
float occlusionFunction1(float x)
{
    if (x < 0.0f || x > 1.0f)
        return 0.0f;
    return 1.0f - x;
}
```
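To see the self-occlusion mechanically, here is a toy Python model (a hedged sketch with made-up numbers, not the shader itself): on a perfectly flat, camera-facing plane, roughly half of the random offsets point "into" the surface, and every one of those contributes occlusion even though nothing occludes the plane.

```python
# Toy model of occlusionFunction1 on a flat plane: the stored depth is
# the same everywhere, so any offset with a component toward the surface
# yields a positive depth difference and registers as occlusion.
import random

def occlusion_function1(x):
    # 1 when the sample sits just behind the stored depth, fading to 0
    if x < 0.0 or x > 1.0:
        return 0.0
    return 1.0 - x

random.seed(0)
pixel_depth = 0.5      # depth of the shaded pixel (arbitrary)
plane_depth = 0.5      # flat plane: the depth buffer holds this everywhere

occlusion = 0.0
n = 1000
for _ in range(n):
    offset_z = random.uniform(-0.1, 0.1)   # random depth component of the offset
    sample_z = pixel_depth - offset_z      # the shader subtracts the offset
    occlusion += occlusion_function1(sample_z - plane_depth)

print(occlusion / n)   # close to 0.5: a flat plane reads as ~half occluded
```

This is exactly the "around half of the offset vectors point under the surface" effect described below.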

To help solve the self-occlusion (which occurs because around half of the offset vectors point under the surface), I decided to simply flip the offset vectors based on the normal vector at the point at pixelDepth:
```hlsl
float occlusion = 0.0f;
float3 pixelNormal = tex2D(normalsAndDepthsSampler, input.texCoord).xyz;
float pixelDepth = tex2D(normalsAndDepthsSampler, input.texCoord).w;

for (int i = 0; i < 2; i++)
{
    float3 randomVector = tex2D(randomVectorsSampler, input.texCoord * (7.0f + (float)i)).xyz;
    randomVector = randomVector * 2.0f - 1.0f;

    for (int j = 0; j < 8; j++)
    {
        float3 offsetVector = reflect(kernelVectors[j], randomVector) * (radius / pixelDepth);
        if (dot(pixelNormal, offsetVector) < 0)
            offsetVector = -offsetVector;

        float3 samplePosition = float3(input.texCoord, pixelDepth) - offsetVector;
        float sampleDepth = tex2D(normalsAndDepthsSampler, samplePosition.xy).w;

        occlusion += occlusionFunction1(samplePosition.z - sampleDepth);
    }
}
```
So it is just a matter of flipping the sign when the dot product is < 0. However, this does not work (ssao2.png).

I have run out of ideas. Everything is in view space so there should be no problem. Why are these side faces of the "room" so dark?

##### Share on other sites
It seems like there is a problem with your normals. Are you sure you're not rendering normal maps into the normal buffer you use for the screen-space ambient occlusion? I can't say for sure that your code is all right, though, as I haven't run through it.

##### Share on other sites
Have you tried using a dot product between the offset vector and a mirrored normal vector instead?

Like this:

```hlsl
float occlusion = 0.0f;
float3 pixelNormal = tex2D(normalsAndDepthsSampler, input.texCoord).xyz;
float pixelDepth = tex2D(normalsAndDepthsSampler, input.texCoord).w;

for (int i = 0; i < 2; i++)
{
    float3 randomVector = tex2D(randomVectorsSampler, input.texCoord * (7.0f + (float)i)).xyz;
    randomVector = randomVector * 2.0f - 1.0f;

    float3 mirroredNormal = reflect(pixelNormal, randomVector) * (radius / pixelDepth);

    for (int j = 0; j < 8; j++)
    {
        float3 offsetVector = reflect(kernelVectors[j], randomVector) * (radius / pixelDepth);
        if (dot(mirroredNormal, offsetVector) < 0)
            offsetVector = -offsetVector;

        float3 samplePosition = float3(input.texCoord, pixelDepth) - offsetVector;
        float sampleDepth = tex2D(normalsAndDepthsSampler, samplePosition.xy).w;

        occlusion += occlusionFunction1(samplePosition.z - sampleDepth);
    }
}
```

I'm doing something like this in my SSAO shader and it works just fine. You may have to play with it, though; your shader layout is quite different from mine. Also, you may or may not need to scale the mirrored normal by the radius. On that note, you might want to do the "radius / pixelDepth" division only once and reuse the result; I'd think you could save a bit of time that way.

Anywho, let me know if it works.

##### Share on other sites
Well, using the mirrored normal actually worsens the whole thing. But I think I know what is wrong, more or less. I use a normal vector that is in view space, so the offset vectors are flipped in view space. Then I subtract these view-space offset vectors:
```hlsl
float3 samplePosition = float3(input.texCoord, pixelDepth) - offsetVector;
```
This works only for surfaces that directly face the camera. For all others it screws up. For example, when there is a wall whose normal is (1, 0, 0), offsetVector will certainly be directed into the +X half-space, but it can point anywhere in the Y and Z directions. So if it has a positive Z value, samplePosition.z can end up in front of or behind the depth stored at samplePosition.xy, so the self-occlusion is still there.
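A toy numeric example of this mismatch (Python, with made-up numbers, not the actual scene): model the wall as a steep depth ramp in screen x. The shader subtracts offsetVector.xy in texture space but offsetVector.z in view-space depth, two different spaces, so even an offset flipped "away" from the wall can land behind the stored depth.

```python
# Hypothetical side wall with view-space normal (1, 0, 0), seen at a
# grazing angle, so its stored depth changes steeply with screen x.
def wall_depth(u):
    # depth buffer contents along the wall: steep linear ramp in screen x
    return 0.5 + 2.0 * (u - 0.5)

u0 = 0.5
pixel_depth = wall_depth(u0)        # 0.5

# Offset flipped toward the normal: +X in view space, Z small and "free"
offset = (0.05, 0.0, 0.01)

sample_u = u0 - offset[0]           # xy step applied in TEXTURE space
sample_z = pixel_depth - offset[2]  # z step applied in VIEW-SPACE depth
stored   = wall_depth(sample_u)     # what the depth buffer actually holds there

# The texture-space step changed the stored depth by 0.1, dwarfing the
# view-space depth step of 0.01, so the sample reads as behind the wall:
print(sample_z - stored)            # positive => counted as occluded
```

In other words, moving one texel sideways along a grazing wall changes the stored depth far more than the offset's own z component accounts for, so the comparison is meaningless unless both steps happen in the same space.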

To do this more robustly, I decided to work in actual view space. I have positions, normals, and depths in view space (of course, position.z == depth in this case). The code looks like this:
```hlsl
float occlusion = 0.0f;
float3 pixelPosition = tex2D(positionsSampler, input.texCoord).xyz;
float3 pixelNormal = tex2D(normalsAndDepthsSampler, input.texCoord).xyz;
float pixelDepth = -tex2D(normalsAndDepthsSampler, input.texCoord).w;

for (int i = 0; i < 2; i++)
{
    float3 randomVector = tex2D(randomVectorsSampler, input.texCoord * (7.0f + (float)i)).xyz;
    randomVector = randomVector * 2.0f - 1.0f;

    for (int j = 0; j < 8; j++)
    {
        float3 offsetVector = reflect(kernelVectors[j], randomVector);
        if (dot(pixelNormal, offsetVector) < 0.0f)
            offsetVector *= -1.0f;

        float3 samplePosition = pixelPosition + offsetVector * radius * pixelDepth / 10.0f;
        samplePosition.z *= -1.0f;
        samplePosition = projectToScreen(samplePosition); // .z remains unchanged

        float sampleDepth = -tex2D(normalsAndDepthsSampler, samplePosition.xy).w;

        occlusion += occlusionFunction0(samplePosition.z - sampleDepth);
    }
}
```
(All these minus signs are there because my view-space depth buffer now holds negative values, as I work in a right-handed coordinate space.)
So I add the offset vector to the actual position in view space, which should be fine. All offset vectors now point into the half-space of pixelNormal. Then I project samplePosition (which is in view space) into screen space and simply compare samplePosition.z with what is in the depth buffer at samplePosition.xy. Unfortunately, this also does not work as expected and I have no idea why. When we have a plane and all vectors point into the half-space of pixelNormal, none of these offset points (samplePosition) should be able to penetrate the plane. But they actually do... Does anyone have an idea what I could be doing wrong? See the attached image. Some planes look totally self-occluded!

##### Share on other sites
I think I have found a good solution to the problem. The inner loop could look like this:
```hlsl
float3 offsetVector = reflect(kernelVectors1[j], randomVector);
if (dot(pixelNormal, offsetVector) < 0.0f)
    offsetVector *= -1.0f;
offsetVector += 0.25f * pixelNormal; // CHANGE

float3 samplePosition = pixelPosition + offsetVector * radius * pixelDepth / 10.0f;
samplePosition.z *= -1.0f;
samplePosition = projectToScreen(samplePosition);

float sampleDepth = -tex2D(normalsAndDepthsSampler, samplePosition.xy).w;

occlusion += occlusionFunction0(samplePosition.z - sampleDepth);
```
Note that I add a scaled copy of pixelNormal to the offsetVector. This way all offset vectors are farther from the surface and the self-penetration is gone.

So okay, that is the solution. But I still do not understand why the self-occlusion occurs without this extra addition. Is it due to floating-point problems? I noticed that short offset vectors cause much more trouble. I will not be able to focus on anything else until I fully understand the problem, so please, help me sleep without this in the back of my mind.
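One plausible explanation, sketched in Python (hedged: toy numbers, not a proof): offsets flipped into the normal's half-space can still be almost tangent to the surface, so their samples land essentially *on* the plane. There the true depth difference is 0, and any interpolation or precision error nudges it slightly positive, which the occlusion function (1 - x for small positive x) counts as nearly full occlusion. The normal bias pushes every sample off the plane by a guaranteed margin, so the comparison stays safely negative. This would also explain why short offset vectors are worse: the error is fixed-size, but the bias from the offset itself shrinks.

```python
# Toy demonstration: a tangent sample sits on the plane, where the depth
# difference is ~0 plus noise; the occlusion function treats tiny positive
# noise as almost full occlusion.  A normal bias absorbs the noise.
def occlusion_function(x):
    if x < 0.0 or x > 1.0:
        return 0.0
    return 1.0 - x

eps = 1e-4               # stand-in for interpolation/precision error (made up)

# Tangent sample: true difference is 0, jittered by +eps
print(occlusion_function(0.0 + eps))          # close to 1.0: occluded by noise

# Same sample with a 0.25 * pixelNormal bias pulling it off the surface
bias = 0.25 * 0.1        # 0.25 times an assumed radius scale (made up)
print(occlusion_function(0.0 + eps - bias))   # 0.0: bias absorbs the noise
```

If this is the cause, it is not a bug in the projection at all: the math is exact for tangent samples, and exactness is precisely the problem, since the sign of a ~0 difference is decided by rounding.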