
Implementing SAO


I'm trying to implement SAO according to this paper: http://graphics.cs.williams.edu/papers/SAOHPG12/

 

The issue is that the SAO texture doesn't look as I would expect it to. Here are some screenshots, before blurring:

 

[attachment=24029:sao_noblur1.png]

 

[attachment=24030:sao_noblur2.png]

 

After horizontal + vertical blurring:

 

[attachment=24032:sao_blur1.png]

 

[attachment=24033:sao_blur2.png]

 

Look at the second screenshot, for example; it just doesn't look right. Note the thick, black lines in particular, and some areas are completely white where I expect them to be dark.

 

The SAO shader code is virtually the same as in the linked paper, with a few alterations:

#ifndef SSAO_PIXEL_HLSL
#define SSAO_PIXEL_HLSL

#include "Constants.h"
#include "Common.hlsl"

static const float gNumSamples = 11.0;
static const float gRadius = 1.0;
static const float gRadius2 = gRadius * gRadius;
static const float gProjScale = 500.0;
static const float gNumSpiralTurns = 7;
static const float gBias = 0.012;
static const float gIntensity = 1.0;


cbuffer SSAOCBuffer : register(CBUFFER_REGISTER_PIXEL)
{
    float4x4 gViewProjMatrix;
    float4x4 gProjMatrix;
    float4x4 gViewMatrix;
    float2 gScreenSize;
};

Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION);
SamplerState gPointSampler : register(SAMPLER_REGISTER_POINT);


float3 reconstructNormal(float3 positionWorldSpace)
{
    return normalize(cross(ddx(positionWorldSpace), ddy(positionWorldSpace)));
}

float3 getOffsetPosition(int2 ssC, float2 unitOffset, float ssR) {
    // Derivation:
    //  mipLevel = floor(log(ssR / MAX_OFFSET));

    // TODO: mip levels
    int mipLevel = 0; //TODO: clamp((int)floor(log2(ssR)) - LOG_MAX_OFFSET, 0, MAX_MIP_LEVEL);

    int2 ssP = int2(ssR*unitOffset) + ssC;

    float3 P;

    // Divide coordinate by 2^mipLevel
    P = gPositionTexture.Load(int3(ssP >> mipLevel, mipLevel)).xyz;
    P = mul(gViewMatrix, float4(P, 1.0)).xyz;

    return P;
}

float2 tapLocation(int sampleNumber, float spinAngle, out float ssR)
{
    // Radius relative to ssR
    float alpha = float(sampleNumber + 0.5) * (1.0 / gNumSamples);
    float angle = alpha * (gNumSpiralTurns * 6.28) + spinAngle;

    ssR = alpha;
    return float2(cos(angle), sin(angle));
}

float sampleAO(uint2 screenSpacePos, float3 originPos, float3 normal, float ssDiskRadius, int tapIndex, float randomPatternRotationAngle)
{
    float ssR;
    float2 unitOffset = tapLocation(tapIndex, randomPatternRotationAngle, ssR);
    ssR *= ssDiskRadius;

    // The occluding point in camera space
    float3 Q = getOffsetPosition(screenSpacePos, unitOffset, ssR);

    float3 v = Q - originPos;

    float vv = dot(v, v);
    float vn = dot(v, normal);

    const float epsilon = 0.01;
    float f = max(gRadius2 - vv, 0.0); 
    
    return f * f * f * max((vn - gBias) / (epsilon + vv), 0.0);
}

float4 ps_main(float4 position : SV_Position) : SV_Target0
{
    uint2 screenSpacePos = position.xy;

    float3 originPos = gPositionTexture[screenSpacePos].xyz;
    originPos = mul(gViewMatrix, float4(originPos, 1.0)).xyz;
    float3 normal = reconstructNormal(originPos);

    // Hash function used in the HPG12 AlchemyAO paper
    float randomPatternRotationAngle = (3 * screenSpacePos.x ^ screenSpacePos.y + screenSpacePos.x * screenSpacePos.y) * 10;
    float ssDiskRadius = -gProjScale * gRadius / originPos.z;

    float ao = 0.0;
    for (int i = 0; i < gNumSamples; i++)
    {
        ao += sampleAO(screenSpacePos, originPos, normal, ssDiskRadius, i, randomPatternRotationAngle);
    }

    float temp = gRadius2 * gRadius;
    ao /= temp * temp;

    float A = max(0.0, 1.0 - ao * gIntensity * (5.0 / gNumSamples));

    return A;
}

#endif
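As a sanity check, the spiral tap pattern above is pure math and can be verified outside the shader. A quick Python port of tapLocation (a sketch; constants copied from the shader, spin angle arbitrary) confirms the offsets are unit length and the radii sweep from 0 toward 1:

```python
import math

# Constants copied from the shader above.
NUM_SAMPLES = 11
NUM_SPIRAL_TURNS = 7

def tap_location(sample_number, spin_angle):
    """Python port of the shader's tapLocation: returns (unit_offset, ss_r)."""
    alpha = (sample_number + 0.5) / NUM_SAMPLES
    angle = alpha * (NUM_SPIRAL_TURNS * 6.28) + spin_angle
    return (math.cos(angle), math.sin(angle)), alpha

# Every offset should be unit length, and the radii should grow linearly.
for i in range(NUM_SAMPLES):
    (ox, oy), ss_r = tap_location(i, 0.0)
    assert abs(math.hypot(ox, oy) - 1.0) < 1e-9
    assert 0.0 < ss_r <= 1.0
```

If the pattern checks out on the CPU, attention can shift to the texture fetches and the space transforms.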

Any ideas what could cause it?

Edited by KaiserJohan


Try lowering the intensity and tuning the falloff function to be a bit less harsh. Try raising the radius a bit too. I have implemented SAO with great results, so the algorithm itself should be sound.
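To see what that tuning does, the per-sample falloff term from the shader can be probed offline. A minimal Python sketch (parameter defaults mirror the posted shader; the sample values are hypothetical):

```python
# The per-sample term from the shader:
#   f = max(radius^2 - vv, 0)
#   contribution = f^3 * max((vn - bias) / (epsilon + vv), 0)
# vv is |v|^2 (squared distance to the occluder sample), vn is dot(v, n).
# Raising the radius widens the region where f > 0; intensity scales the sum.
def sample_falloff(vv, vn, radius=1.0, bias=0.012, epsilon=0.01):
    f = max(radius * radius - vv, 0.0)
    return f ** 3 * max((vn - bias) / (epsilon + vv), 0.0)

# A sample just outside the radius contributes nothing...
assert sample_falloff(vv=1.1, vn=0.5, radius=1.0) == 0.0
# ...but the same sample occludes once the radius is raised.
assert sample_falloff(vv=1.1, vn=0.5, radius=2.0) > 0.0
```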


Check your normals. Shadowed edges on a convex shape are usually caused by a bad dot product, i.e. bad normals.


Check your normals. Shadowed edges on a convex shape are usually caused by a bad dot product, i.e. bad normals.

 

It seems like reconstructing the normals gets screwed up. For example, this cube has no normal map and thus looks bogus:

 

[attachment=24056:cube.png]

 

I tried to use the normals from the normal map when one was available, so the lion head in this screenshot looks correct, for example, while the pillars marked in red look very wrong:

 

[attachment=24057:ssao2.png]

 

Something must be wrong when I reconstruct the normals for the models without a normal map; but as far as I know, the approach should be valid?


The important question is why you are trying to do everything in world space. It is costly to transform every sample from world space to view space (it could be done faster with a prepass), and world space can have floating-point accuracy problems. The original algorithm works in view space, so a subtle bug might be introduced when moving between view and world space.
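The floating-point point is easy to demonstrate: float32 carries roughly 7 significant decimal digits, so fine geometric detail far from the world origin can be lost outright. A small Python illustration (the magnitudes are hypothetical, chosen only to show the effect):

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE-754 float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 100 km from the world origin, the float32 spacing is ~0.0078, so a
# millimetre-scale offset between two stored positions vanishes entirely:
world_pos = f32(f32(100000.0) + 0.001)
assert world_pos == 100000.0

# The same offset survives at view-space-sized magnitudes:
view_pos = f32(f32(1.0) + 0.001)
assert view_pos != 1.0
```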


I'll eventually move to view space; for the moment I'll keep sampling the position texture in world space.

 

Could this really be a precision issue?


You use the ddx and ddy functions to build the normal from the position buffer, right?

Try negating one component or the other. Also double-check that x and y are properly reconstructed from gl_FragCoord, and that you don't use linear sampling on the depth buffer if it is the same size as your framebuffer.
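The sign suggestion is easy to check offline: reconstructing a normal as cross(ddx(P), ddy(P)) flips direction if the argument order (or one input) is negated. A small Python sketch with hand-picked, purely illustrative derivative values:

```python
def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Hand-picked stand-ins for ddx(position) and ddy(position) on a flat patch:
ddx_p = (1.0, 0.0, 0.0)   # position change per pixel to the right
ddy_p = (0.0, 1.0, 0.0)   # position change per pixel downward

# cross(ddx, ddy) gives one normal; swapping the arguments (or negating an
# input) flips it - exactly the sign error that breaks the vn dot product.
assert cross(ddx_p, ddy_p) == (0.0, 0.0, 1.0)
assert cross(ddy_p, ddx_p) == (0.0, 0.0, -1.0)
```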


So I've been pulling my hair out debugging some more, specifically looking into the normals.

 

Here's one shot: first the resulting SAO texture, then the normal texture (world space), then the positions (world space):

 

[attachment=24137:sao.jpg]

 

[attachment=24138:normal.jpg]

 

[attachment=24142:pos.jpg]

 

Second shot, same procedure:

 

[attachment=24140:sao2.jpg]

 

[attachment=24141:normal2.jpg]

 

[attachment=24143:pos2.jpg]

 

Can anyone spot any obvious errors in the normals? They look OK to me, so I can't see how they would be a problem. It is confounding to me that the SAO on the cube is pitch-black except for a circle shape in the middle of the cube face - what would cause this?

 

Some minor alterations to the code, mainly loading normals from the texture:

#ifndef SSAO_PIXEL_HLSL
#define SSAO_PIXEL_HLSL

#include "Constants.h"
#include "Common.hlsl"

static const float gNumSamples = 11.0;
static const float gRadius = 0.2;
static const float gRadius2 = gRadius * gRadius;
static const float gProjScale = 500.0;
static const float gNumSpiralTurns = 7;
static const float gBias = 0.01;
static const float gIntensity = 1.0;


cbuffer SSAOCBuffer : register(CBUFFER_REGISTER_PIXEL)
{
    float4x4 gViewProjMatrix;
    float4x4 gProjMatrix;
    float4x4 gViewMatrix;
    float2 gScreenSize;
};

Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION);
Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL);
SamplerState gPointSampler : register(SAMPLER_REGISTER_POINT);


float3 reconstructNormal(float3 positionWorldSpace)
{
    return normalize(cross(ddx(positionWorldSpace), ddy(positionWorldSpace)));
}

/** Read the camera-space position of the point at screen-space pixel ssP + unitOffset * ssR. Assumes length(unitOffset) == 1 */
float3 getOffsetPosition(int2 ssC, float2 unitOffset, float ssR) {
    // Derivation:
    //  mipLevel = floor(log(ssR / MAX_OFFSET));

    // TODO: mip levels
    int mipLevel = 0; //TODO: clamp((int)floor(log2(ssR)) - LOG_MAX_OFFSET, 0, MAX_MIP_LEVEL);

    int2 ssP = int2(ssR*unitOffset) + ssC;

    float3 P = gPositionTexture[ssP].xyz;

    // Divide coordinate by 2^mipLevel
    //P = gPositionTexture.Load(int3(ssP >> mipLevel, mipLevel)).xyz;
    P = mul(gViewMatrix, float4(P, 1.0)).xyz;

    return P;
}

float2 tapLocation(int sampleNumber, float spinAngle, out float ssR)
{
    // Radius relative to ssR
    float alpha = float(sampleNumber + 0.5) * (1.0 / gNumSamples);
    float angle = alpha * (gNumSpiralTurns * 6.28) + spinAngle;

    ssR = alpha;
    return float2(cos(angle), sin(angle));
}

float sampleAO(uint2 screenSpacePos, float3 originPos, float3 normal, float ssDiskRadius, int tapIndex, float randomPatternRotationAngle)
{
    float ssR;
    float2 unitOffset = tapLocation(tapIndex, randomPatternRotationAngle, ssR);
    ssR *= ssDiskRadius;

    // The occluding point in camera space
    float3 Q = getOffsetPosition(screenSpacePos, unitOffset, ssR);

    float3 v = Q - originPos;

    float vv = dot(v, v);
    float vn = dot(v, normal);

    const float epsilon = 0.01;
    float f = max(gRadius2 - vv, 0.0); 
    
    return f * f * f * max((vn - gBias) / (epsilon + vv), 0.0);
}

float4 ps_main(float4 position : SV_Position) : SV_Target0
{
    uint2 screenSpacePos = (uint2)position.xy;

    float3 originPos = gPositionTexture[screenSpacePos].xyz;
    originPos = mul(gViewMatrix, float4(originPos, 1.0)).xyz;
    float3 normal = gNormalTexture[screenSpacePos].xyz;//reconstructNormal(originPos);
    normal = mul(gViewMatrix, float4(normal, 0.0)).xyz;

    // Hash function used in the HPG12 AlchemyAO paper
    float randomPatternRotationAngle = (3 * screenSpacePos.x ^ screenSpacePos.y + screenSpacePos.x * screenSpacePos.y) * 10;
    float ssDiskRadius = -gProjScale * gRadius / originPos.z;

    float ao = 0.0;
    for (int i = 0; i < gNumSamples; i++)
    {
        ao += sampleAO(screenSpacePos, originPos, normal, ssDiskRadius, i, randomPatternRotationAngle);
    }

    float temp = gRadius2 * gRadius;
    ao /= temp * temp;

    float A = max(0.0, 1.0 - ao * gIntensity * (5.0 / gNumSamples));
    //float A = 1.0 - ao / (4.0 * float(gNumSamples));
    //A = clamp(pow(ao, 1.0 + gIntensity), 0.0, 1.0);

    // Bilateral box-filter over a quad for free, respecting depth edges
    // (the difference that this makes is subtle)
    if (abs(ddx(originPos.z)) < 0.02) {
        A -= ddx(A) * ((screenSpacePos.x & 1) - 0.5);
    }
    if (abs(ddy(originPos.z)) < 0.02) {
        A -= ddy(A) * ((screenSpacePos.y & 1) - 0.5);
    }


    return A;
}

#endif

The face of your cube should have a uniform color in the normal visualisation (because cube faces are flat, all normals point in the same direction, so their pixel colors should be identical).
Here it looks like the normals are smoothed.
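That would also explain the false occlusion: in the estimator above, the contribution is gated by vn = dot(v, normal), so on a truly flat face coplanar neighbours contribute nothing, while a smoothed (tilted) normal makes them look like occluders. A tiny Python illustration with hypothetical numbers:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# v points from the shaded pixel to a sample on the same flat face.
v = (0.5, 0.0, 0.0)
true_normal = (0.0, 0.0, 1.0)   # the face's actual, uniform normal
smoothed = (0.19, 0.0, 0.98)    # a vertex-interpolated, tilted normal

# Flat face + correct normal: coplanar samples never self-occlude...
assert dot(v, true_normal) == 0.0
# ...but a tilted normal makes the same sample read as an occluder.
assert dot(v, smoothed) > 0.0
```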
