Deferred SSDO

Started by
4 comments, last by kalle_h 10 years, 10 months ago

Hello! I've been reading this page recently,
http://kayru.org/articles/dssdo/

The idea is quite straightforward and it's quite easy to replace an existing SSAO implementation with it. However, one thing that really confused me was the author's claim that "the local occlusion function can be reconstructed by the dot product of occlusion coefficients and light vector".

AFAIK, when calculating diffuse lighting with spherical harmonics, we also need to project the incoming light onto the sphere, then take a dot product between both sets of coefficients.

I'm just wondering why a simple dot product between the occlusion coefficients and the light vector is sufficient in this case. Can anyone help explain the math behind it? Or is there a paper describing the details of this trick?

Thanks.


int g_num_lights;
float g_time;
float g_occlusion_radius;
float g_occlusion_max_distance;

float4x4 g_mat_view_inv;
float2 g_resolution;

sampler2D smp_position;
sampler2D smp_normal;
sampler2D smp_occlusion;

float4 ps_main(float2 tex : TEXCOORD0) : COLOR0
{      
   float4 res = 1;

   float4 albedo    = 1.0;
   float4 position  = tex2D(smp_position, tex);
   float4 normal    = tex2D(smp_normal, tex);
   float4 occlusion = tex2D(smp_occlusion, tex);
   float4 lighting = 0.0;

   float3 point_light_positions[3];

   float time = g_time*0.75;

   float light_rad = 2;
   
   point_light_positions[0] = float3(light_rad*sin(time),   light_rad*cos(time),    1);
   point_light_positions[1] = float3(light_rad*sin(time+1), light_rad*cos(-time+2), 1);
   point_light_positions[2] = float3(light_rad*sin(time+2), light_rad*cos(-time+1), 1);

   float3 point_light_colors[3];
   
   point_light_colors[0] = 2 * float3(1,0.2,0.2);
   point_light_colors[1] = 2 * float3(0.2,1,0.2);
   point_light_colors[2] = 2 * float3(0.2,0.2,1);

   for(int i=0; i<g_num_lights; i++ )  // assumes g_num_lights <= 3
   {
      float3 light_pos = point_light_positions[i];
      float3 to_light = position.xyz - light_pos;
      float to_light_dist = length(to_light);
      float3 to_light_norm = to_light / to_light_dist;
      // Evaluate the SH-encoded occlusion toward the light: occlusion.xyz
      // holds the linear (band 1) coefficients and occlusion.w the constant
      // (band 0) term, so reconstruction is a single 4-component dot product.
      float light_occlusion = 1-saturate(dot(float4(-to_light_norm,1), occlusion));

      float dist_attenuation = 1 / (1+to_light_dist*to_light_dist);
      float ndl = max(0, dot(normal.xyz, -to_light_norm));
      lighting.rgb += point_light_colors[i]*light_occlusion*ndl* dist_attenuation;
   }

   //lighting += 0.25f * pow(1-saturate(occlusion.w),2); // ambient

   res.rgb = lighting.rgb * albedo.rgb;
      
   return res;
}

You can find the shader they are using here: https://github.com/kayru/dssdo/blob/master/dssdo.rfx

The simple dot product term they refer to is this:


 float light_occlusion = 1-saturate(dot(float4(-to_light_norm,1), occlusion));

So it's just a dot product between the normalized light direction and the first three spherical harmonic coefficients (the directional part), plus the fourth coefficient (the constant part) added to the result.
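To see why one dot product is enough, here's a minimal numerical sketch (Python, not part of the shader; exact scaling and sign conventions in kayru's repo may differ). A function projected onto SH bands 0 and 1 is reconstructed at a direction d as the sum of coefficient times basis function, and because the band-1 basis functions are just scaled components of d, the whole reconstruction collapses into one 4-component dot product once the coefficients are pre-scaled:

```python
import math
import random

# Real SH basis constants for bands 0 and 1
Y00 = 0.5 * math.sqrt(1.0 / math.pi)   # constant term
Y1  = 0.5 * math.sqrt(3.0 / math.pi)   # linear terms, times (y, z, x)

def sh_basis(d):
    """Evaluate the 4 first-order SH basis functions at unit direction d."""
    x, y, z = d
    return [Y00, Y1 * y, Y1 * z, Y1 * x]

def project(f, n=200000, seed=1):
    """Monte Carlo projection of f(direction) onto the 4 SH coefficients."""
    rng = random.Random(seed)
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n):
        # uniform direction on the sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        fd = f(d)
        for i, b in enumerate(sh_basis(d)):
            coeffs[i] += fd * b
    # estimator of the sphere integral: (4*pi/n) * sum f(d) Y_i(d)
    return [c * 4.0 * math.pi / n for c in coeffs]

# Toy occlusion function: everything below the horizon (z < 0) is blocked.
occ = lambda d: 1.0 if d[2] < 0 else 0.0
c = project(occ)

# Direct reconstruction: sum of coefficient * basis at the light direction.
L = (0.0, 0.0, -1.0)                    # unit vector toward the occluder
direct = sum(ci * bi for ci, bi in zip(c, sh_basis(L)))

# Shader-style reconstruction: pre-scale the coefficients once, then a
# single dot(float4(L, 1), occlusion) per light gives the same value.
pre = (Y1 * c[3], Y1 * c[1], Y1 * c[2], Y00 * c[0])   # (x, y, z, const)
dotted = pre[0] * L[0] + pre[1] * L[1] + pre[2] * L[2] + pre[3] * 1.0

assert abs(direct - dotted) < 1e-12
```

The projection of the incoming light collapses the same way: a single directional light projected onto linear SH is just a scaled copy of its direction vector, so the coefficient-times-coefficient product the OP expected reduces to this dot product against the light direction.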

Thanks for the info. So SH is just used to encode the occlusion function, and the dot product between the occlusion coefficients and the light vector reconstructs the directional occlusion toward that light. Makes sense now.

I had the same idea some time ago as well, but actually there's no need to encode the screen-space approximated visibility function into an SH basis. They do it because they need to pass the function from one pass to the other (DSSDO -> lighting).

It would be way better if you just sampled the screen space along the direction of the light and lerped the light's color with the occluder's color, based on how sure the algorithm is that the occluder is actually occluding the light. This way there's no need for any spherical harmonics: you have the full screen-space approximated visibility function (technically it's more than just a visibility function, since you can also get chromatic information) and you can put that directly into your rendering equation.
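As a rough illustration of that per-light march (a toy 1D height-field sketch in Python with made-up names, not engine code): step from the shaded point toward the light in screen space, and count a sample as an occluder when the stored depth sits just in front of the interpolated ray depth, within some thickness.

```python
def screen_space_occlusion(depth, x0, z0, dx, dz, steps=6, thickness=1.0):
    """March from screen position x0 (view depth z0) along the per-step
    offset (dx, dz) toward the light across a 1D depth buffer; return an
    occlusion confidence in [0, 1].  Toy sketch: a real version marches
    in 2D screen space and weights each sample by its confidence."""
    occl = 0.0
    x, z = float(x0), float(z0)
    for _ in range(steps):
        x += dx
        z += dz
        ix = int(round(x))
        if not (0 <= ix < len(depth)):
            break                         # ray left the screen: no information
        scene_z = depth[ix]
        # A stored surface blocks the light only if it sits just in front
        # of the ray (smaller depth) at this sample, within `thickness`.
        if z - thickness <= scene_z <= z:
            occl += 1.0 / steps
    return min(occl, 1.0)

# Flat floor at depth 10 with a thin wall at x = 5..7 at depth 2.
depth_buffer = [10, 10, 10, 10, 10, 2, 2, 2, 10, 10]

# Screen-space light step: one pixel right, 4 units closer per step.
# The pixel at x=3 marches into the wall; the ray from the pixel at x=0
# passes in front of the wall by the time it reaches it, so it stays lit.
shadowed = screen_space_occlusion(depth_buffer, x0=3, z0=10, dx=1, dz=-4)
lit = screen_space_occlusion(depth_buffer, x0=0, z0=10, dx=1, dz=-4)
```

The thickness test is what makes the decision "how sure" rather than binary in practice: thin or distant occluders fall outside the window and contribute nothing.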


I've done that for one light, and AFAIK Crysis 2 does it as well, just for the sun (maybe only on characters, for finely detailed self-shadowing?) -- but just with occlusion, not colouring based on the occluder's colour.

It gets pretty expensive if you do it on every light though, because you want at least 4, and preferably many more, depth samples to make an accurate occlusion decision (I got decent results with just 6 samples).

With the SH method, you get more approximate results, but it scales to many lights very well.


float3 diff = centerPosition - samplePosition;
float3 dir = diff * rsqrt(dot(diff, diff));   // normalized direction to the AO sample
float ssdo = dot(dir, u_lightDirection);      // alignment with the light direction
ssdo = ambientOcclusionValueForThisSample * step(ssdoThreshold, ssdo);

My SSDO method for sunlight tries to reuse the AO samples. The sample position and direction are already computed for each AO sample, so the only thing I have to do is dot against the light direction and weight it. SSAO is bandwidth limited, so this is almost free, and it hides VSM shadow leaking almost completely.
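A sketch of that reuse in Python (hypothetical names; the real version lives inside an SSAO shader loop): each AO tap already yields a direction from the shaded point, so directional occlusion toward the sun is just the same taps re-weighted by their alignment with the light direction, mirroring the `dot`/`step` in the snippet above.

```python
import math

def ssao_and_ssdo(center, samples, occlusions, light_dir, threshold=0.5):
    """Reuse SSAO taps for sun SSDO.  `samples` are positions of the AO
    taps, `occlusions` their per-tap AO values in [0, 1], `light_dir` a
    unit vector toward the sun.  Returns (ao, ssdo), both in [0, 1]."""
    ao = ssdo = 0.0
    for p, occ in zip(samples, occlusions):
        diff = [c - s for c, s in zip(center, p)]
        inv_len = 1.0 / math.sqrt(sum(d * d for d in diff))
        dir_to_center = [d * inv_len for d in diff]   # `dir` in the snippet
        ao += occ                                     # plain SSAO term
        # step(threshold, dot(dir, light_dir)): the tap only darkens the
        # sun term when it is roughly aligned with the light direction.
        align = sum(d * l for d, l in zip(dir_to_center, light_dir))
        ssdo += occ * (1.0 if align >= threshold else 0.0)
    n = len(samples)
    return ao / n, ssdo / n

center = (0.0, 0.0, 0.0)
samples = [(0.0, 0.0, -1.0), (1.0, 0.0, 0.0)]   # one tap below, one beside
occl = [1.0, 1.0]
sun = (0.0, 0.0, 1.0)                           # sun straight overhead
ao, ssdo = ssao_and_ssdo(center, samples, occl, sun)
```

Here the tap below the point passes the alignment test against the overhead sun while the side tap does not, so the sun term only picks up half of the AO darkening, which is the "almost free" extra dot and weight per sample.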

