Yeah that looks like the exact same problem that they address in the inferred lighting paper.
In my implementation of their DSF filter, I:
1) sample the nearest 4 texels in the LBuffer/DepthBuffer (no filtering) and compute the appropriate weights as if you were implementing linear filtering yourself.
2) for each of those 4 samples, compare their depth against the fragment's depth (interpolated from the vertex shader). If the difference is beyond a certain threshold, then reduce this sample's weighting to zero.
3) re-normalise the weights so they add up to 1.
Alternatively, instead of (or as well as) using the depth threshold, in your initial "eye depth" pass, you can write object IDs to one channel. Then in step 2, reject any LBuffer samples that don't match that object's ID.
There's one caveat with this filter - you've got to account for the case where all 4 samples fail the depth-threshold/ID test. If that happens, then you just give up on DSF and use regular linear filtering (i.e. just use the weights computed in step 1).
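To make the steps above concrete, here's a minimal CPU-side Python/NumPy sketch of the weighting logic (the real thing would live in a pixel shader; the function name, threshold value, and buffer layouts here are purely illustrative):

```python
import numpy as np

def dsf_sample(lbuffer, depth_buffer, uv, frag_depth, threshold=0.1):
    """Depth-sensitive upsampling sketch (illustrative, not production code).

    lbuffer      : (H, W, C) low-res lighting buffer
    depth_buffer : (H, W) depths stored alongside the LBuffer
    uv           : (u, v) in low-res texel coordinates
    frag_depth   : the full-res fragment's depth (interpolated from the VS)
    """
    h, w = depth_buffer.shape

    # Step 1: nearest 4 texels + bilinear weights, as if filtering manually.
    x0 = int(np.floor(uv[0] - 0.5))
    y0 = int(np.floor(uv[1] - 0.5))
    fx = (uv[0] - 0.5) - x0
    fy = (uv[1] - 0.5) - y0
    texels = [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]
    weights = np.array([(1-fx)*(1-fy), fx*(1-fy), (1-fx)*fy, fx*fy])

    # Step 2: zero the weight of any sample past the depth threshold.
    # (An ID-buffer variant would instead reject samples whose object ID
    # doesn't match the fragment's.)
    dsf_weights = weights.copy()
    for i, (x, y) in enumerate(texels):
        xc, yc = min(max(x, 0), w - 1), min(max(y, 0), h - 1)
        if abs(depth_buffer[yc, xc] - frag_depth) > threshold:
            dsf_weights[i] = 0.0

    # Caveat: if all 4 samples were rejected, give up on DSF and fall
    # back to plain bilinear weights from step 1.
    if dsf_weights.sum() == 0.0:
        dsf_weights = weights

    # Step 3: re-normalise so the weights sum to 1, then blend.
    dsf_weights /= dsf_weights.sum()
    result = np.zeros(lbuffer.shape[-1])
    for wgt, (x, y) in zip(dsf_weights, texels):
        xc, yc = min(max(x, 0), w - 1), min(max(y, 0), h - 1)
        result += wgt * lbuffer[yc, xc]
    return result
```

The key property: sampling exactly between a lit foreground texel (depth 0.5) and a background texel (depth 0.9) returns the pure foreground lighting instead of a 50/50 halo, because the background samples get rejected and the remaining weights are renormalised.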
[EDIT] BTW, this SSSHL/SHLPP/whatever-you-want-to-call-it is a really interesting idea, as it solves the low-res normal-detail problem of inferred =D
[EDIT #2]
Solid Angle actually mentions depth-threshold DSF:
Quote: I found that when you upsample the lighting buffer during the apply lighting stage naively, you would get halos around the edges of objects. I fixed this using a bilateral filter aware of depth discontinuities.
and Dead Voxels mentions ID-based DSF:
Quote: In fact, since the lighting is independent of things like normal discontinuities, you might even be able to get away with ignoring edge discontinuities. Or you could probably work around this using an ID buffer constructed later
So it's likely they're using something similar to the above ;)
[Edited by - Hodgman on May 19, 2010 8:20:50 PM]