First we calculate on the CPU the expected depth value of the light source, and its screen-space position:
N.B. all of the following is untested pseudo-code, given as an example.
```cpp
Vector4 lightData = Vector4( light_position, 1 ); // the light's world position; make sure that 'w' is 1.0
lightData = ViewProjectionMatrix * lightData;     // multiply with the view-projection matrix
lightData = lightData / lightData.w;              // perform the perspective division
lightData.x = lightData.x * 0.5 + 0.5;            // the x/y screen coordinates come out between -1 and 1,
lightData.y = lightData.y * 0.5 + 0.5;            // so we need to rescale them to 0-1 tex-coords
```

Next we render a quad that completely covers a small 16x16 pixel render-target. The quad should have UVs from [0,0] to [1,1], like a regular full-screen quad would. We also bind the depth buffer as a texture, and put the above lightData in a shader constant.
In the vertex shader, assuming the input UVs are from [0,0] to [1,1], we can calculate the output UVs like:
```hlsl
float2 pixel = float2( 1/1280.0, 1/720.0 ); // adjust depending on the resolution of your depth buffer!
Out.UV = lightData.xy + (In.UV - 0.5) * pixel * 16; // the quad will take depth samples in a 16x16px
                                                    // region of the depth buffer, centred on the light
```

In the pixel shader, we can then test each depth-buffer value against our predicted light depth, and output the result of the depth test as white or black.
```hlsl
float depth = tex2D( depthBuffer, In.UV ).r;
float test = step( lightData.z, depth ); // test = lightData.z <= depth ? 1.0 : 0.0
return test.xxxx;
```

Now the 16x16 render-target contains 256 boolean values, each indicating whether the light is in front of or behind the depth-buffer pixel near it.
If we now generate mip-maps for this render-target, then mip-level #4 will be a 1x1 texture containing the average of those 256 values, i.e. the fraction of pixels where the light is visible.
In your actual lens flare shader, you can sample this texture to find out how much to scale/fade the flare effect, e.g.
```hlsl
// tex2Dlod takes the mip level in the coordinate's w component, so LOD 4 goes last:
float visibility = tex2Dlod( occlusionBuffer, float4(0.5, 0.5, 0, 4) ).r;
```