#Include Graphics

SSAO banding problems with FOV variation


Hello, it's me again!

 

I have been racking my brain over a problem I am having with the SSAO filter I am using.

 

The problem appears when I reduce my FOV, and I haven't been able to reproduce it at FOVs above 45° (~0.78 radians). Varying the near and far planes has no effect either, so I ruled out a malformed linear z.

 

Here is the code I used:

float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 position, in float3 cnorm)
{
   float3 pos = getViewPosition( tcoord + uv );

   float result = 0;
   if( (pos.z > 0) && (position.z < fFarClipPlane) ) // General case - inside frustum
   {
      float3 diff = position - pos;
      if( (diff.z > fMinDiff) && (diff.z < fMaxDiff) ) // Particular case
      {
         const float3 v = normalize(-diff);
         // Disabled variants: a distance falloff 1/(1+d) with d = length(diff)*g_scale,
         // and an angle bias (dot(cnorm,v) - g_bias).
         result = max(0.0, dot(cnorm, v)) * g_intensity;
      }
   }
   return result;
}

float4 ps_main(PS_INPUT i) : COLOR
{
   float4 o = (float4)0;
   
   o.rgba = 1.0f;
   const float2 vec[16] = {
      float2(  4,   0), float2( -4,   0), float2(  0,   4), float2(  0,  -4),
      float2(  8,   0), float2( -8,   0), float2(  0,   8), float2(  0,  -8),
      float2( 12,   0), float2(-12,   0), float2(  0,  12), float2(  0, -12),
      float2( 16,   0), float2(-16,   0), float2(  0,  16), float2(  0, -16),
   };
            
   float4 sampledTex = tex2D( NormalSampler, i.uv);
   
   float3 normal = sampledTex.xyz;
   
   if( encodeNormal )
   {
      normal = decode( float3(normal.xy,0) );
   }
   else
   {
      normal = (normal*2.0f)-1.0f;
   }

   float3 position = getViewPosition(i.position, sampledTex.w);
   
   float2 rand = getRandom(i.uv);
   //position.z = normal.z;

   //**SSAO Calculation**//
   int iterations = 4;

   float ao = 0.0f;
   float rad = 0;
   float2 sampleDistanceFactor = float2(0.707f, 0.487f);

   // Here I manage the FOV with the radius
   rad = g_sample_radDR * (1.0f - (fFOV * 0.0174532f)) / position.z;
   
   for (int j = 0; j < iterations; ++j)
   {
      float2 coord1 = reflect(vec[j],rand)*rad;
      float2 coord2 = float2(coord1.x - coord1.y, coord1.x + coord1.y)*sampleDistanceFactor;
       
      ao += doAmbientOcclusion(i.uv,coord1*0.25, position, normal);
      ao += doAmbientOcclusion(i.uv,coord2*0.5, position, normal);
      ao += doAmbientOcclusion(i.uv,coord1*0.75, position, normal);
      ao += doAmbientOcclusion(i.uv,coord2, position, normal);
   }

   ao/=(float)iterations*4.0f;
   
   //**END**//
   
   o = float4( o.rgb*(1-ao), 1.0f );
   
   return o;
}

I thought the problem might be related to the aperture difference (FOV) and the sample radius, but it must be something more hidden...

 

Any ideas? I am overwhelmed and frustrated with this, but I will keep going until I find what is tricking me ;(


To me it looks like depth artifacts; that is, by changing the FOV you move your objects into a depth region with bad resolution. Therefore, check the following:

1. What is the resolution of the depth buffer?

2. Are you using linear depth?

3. What does the getViewPosition function look like?

4. What happens if you only use the z-difference as the base (use step/smoothstep)?


Hello Ashaman, thank you for answering!

 

I thought it was what you are describing, but I am fairly sure it is not.

 

 

 


1. What is the resolution of the depth buffer?

2. Are you using linear depth?

3. What does the getViewPosition function look like?

4. What happens if you only use the z-difference as the base (use step/smoothstep)?

 

1. 16-bit, actually, but I tested 32-bit without any noticeable difference.

2. Yes.

3. I have different methods with the same results, but the simplest just takes the screen-space position as position.xy and the linear depth as z.

4. I am not really sure how step/smoothstep would differ from what I used... (lack of knowledge, maybe, although I looked them up in the HLSL API).

 

The thing is that yes, when I change the camera FOV I have to reposition my camera to be in similar circumstances, but as long as my depth data is linear and the near and far planes are the same, I subconsciously discard a z problem...

 

1. Can I significantly improve precision with step/smoothstep?

2. If so, how would I do it with those functions?

3. If depth is linear, shouldn't the z resolution remain the same in both experiments?



16 bit actually

16 bits for storing the depth is quite low (I don't mean the z-buffer!). Best to test the depth buffer first: just output the z-value and compare the images. Do you see a Mach-band effect or other artifacts when using a low FOV value?


Okay, you are definitely right. I was doing something wrong when I tested the 32-bit texture...

 

Cheers!

 

I switched to 16-bit because I couldn't GPU-profile filters on a D3DFMT_A32B32G32R32F-format texture with the Intel GPA application.

 

In case any user knows why that happens :P


You can encode the depth in two 16f channels; this is sufficient in my engine. In combination with compressed normals, you need only one 16f 4-channel texture to store both depth and normal.


One last question: in my case encoding the normals and depth does not have a negligible cost, and when I decode them I am, surprisingly, getting a very high GPU cost.

 

I am using a GTX 660 and at 60 frames I managed to max out its capabilities, haha.

 

On the other hand, a 32-bit texture gives me the same results with no added encode/decode cost, and the code is cleaner too.

 

My graphics card still has a lot of processing time to spare for other things with the 32-bit texture.

 

Is memory consumption the only difference between the two? Am I missing something? How can I optimize the 16-bit texture algorithm?

 

My main concern is encoding/decoding the depth. I am using your integer+float method, Ashaman; it is normal that I need a very optimized one, because I sample the depth at least 17 times per pixel.


It is all about bandwidth. A single 4x16f texture for normal and depth costs 8 bytes per access, the same as using e.g. 32f for depth plus 2x16f for normal (though that is two texture accesses). So if you want to optimize, look at the bandwidth of your whole deferred renderer. 32f vs 16f is mostly a matter of compatibility: older hardware has more issues with 32f (blending/multi-channel support etc., though only one channel and no blending is needed for depth). Decide which hardware you want to support, then look at the bandwidth.

 

The cost of encoding/decoding is negligible in my opinion. Here is my encoding/decoding code (though in GLSL) for 2x16f float channels:

void packDepth(in float _depth, inout vec4 _target)
{
	float tmp = _depth * 2048.0;
	_target.z = floor(tmp) / 2048.0;  // coarse part
	_target.w = fract(tmp);           // fine part
}

float unpackDepth(in vec4 _source)
{
	return dot(abs(_source), vec4(0.0, 0.0, 1.0, 1.0 / 2048.0));
}
