Screen-Space SH Lighting upsampling

Started by n00body
26 comments, last by David Neubelt 11 years, 3 months ago
n00body
Author
347
May 19, 2010 04:55 PM
Background: I have been experimenting with a new type of deferred lighting renderer that uses spherical harmonics, rather than raw RGB data. The big advantage of this approach is that I can do my lighting pass at a lower resolution, then do my material pass at a higher resolution. Thanks to the SH data, normal mapping in the material pass will produce accurately lit high-resolution surface details.

For my implementation, I am using a low-resolution pre-pass that stores normalized linear eye depth in an fp16 target. Then I do a low-resolution lighting pass that accumulates 1st-order SH coefficients in 3x fp16 buffers. Finally, I do a material pass where I sample the SH coefficients from each buffer and apply them per-pixel. For more information on the topic, please refer to the provided links:

Links:
[1] Solid Angle: Screen Space Spherical Harmonic Lighting
[2] Dead Voxels: Has someone tried this before?

Problem: Because I am sampling the lighting information from lower-resolution buffers, I am getting haloing artifacts around the edges of my models. I know that I need a smarter way of sampling the buffers to compensate, but I have concerns about needing lots of samples from multiple floating-point buffers.

Questions:
  1. Is Bilateral Upsampling the only way to deal with this problem?
  2. Is there a simpler way that wouldn't require so many samples and so much interpolation?
Examples:
[Figure 1. SSSHL without upsampling vs. with upsampling: 1, no filtering; 2, linear filtering.]

Thanks for any help you can provide.

[Hardware:] Falcon Northwest Tiki, Windows 7, Nvidia Geforce GTX 970

[Websites:] Development Blog | LinkedIn
[Unity3D :] Alloy Physical Shader Framework

ArKano22
May 19, 2010 07:06 PM
http://graphics.cs.uiuc.edu/~kircher/inferred/inferred_lighting_paper.pdf

The DSF filter described there could help you.
Hodgman
52,704
May 19, 2010 07:20 PM
Yeah that looks like the exact same problem that they address in the inferred lighting paper.

In my implementation of their DSF filter, I:
1) sample the nearest 4 texels in the LBuffer/DepthBuffer (no filtering) and compute the appropriate weights as if you were implementing linear filtering yourself.
2) for each of those 4 samples, compare their depth against the fragment's depth (interpolated from the vertex shader). If the difference is beyond a certain threshold, then reduce this sample's weighting to zero.
3) re-normalise the weights so they add up to 1.

Alternatively, instead of (or as well as) using the depth threshold, in your initial "eye depth" pass, you can write object IDs to one channel. Then in step 2, reject any LBuffer samples that don't match that object's ID.

There's one caveat with this filter - you've got to account for the case where all 4 samples fail the depth-threshold/ID test. If that happens, then you just give up on DSF and use regular linear filtering (i.e. just use the weights computed in step 1).
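The three steps above (plus the all-rejected fallback) can be sketched as follows. This is a minimal NumPy illustration, not shader code: the texture fetches are replaced by values passed in, and the depth threshold of 0.01 is a made-up placeholder you would tune for your depth range.

```python
import numpy as np

def dsf_weights(frac_x, frac_y, sample_depths, frag_depth, threshold=0.01):
    """Depth-sensitive bilinear weights for the 4 nearest low-res texels.

    frac_x/frac_y: fragment position within the low-res texel quad, in [0, 1).
    sample_depths: depths of the 4 taps in (TL, TR, BL, BR) order.
    Returns 4 weights summing to 1."""
    # Step 1: standard bilinear weights, as if doing the filtering manually.
    w = np.array([
        (1 - frac_x) * (1 - frac_y),  # top-left
        frac_x       * (1 - frac_y),  # top-right
        (1 - frac_x) * frac_y,        # bottom-left
        frac_x       * frac_y,        # bottom-right
    ])
    # Step 2: zero the weight of any tap across a depth discontinuity.
    keep = np.abs(np.asarray(sample_depths, dtype=float) - frag_depth) <= threshold
    dsf = w * keep
    total = dsf.sum()
    # Caveat: if every tap was rejected, fall back to plain bilinear.
    if total == 0.0:
        return w
    # Step 3: renormalise so the surviving weights sum to 1.
    return dsf / total
```

The same structure works with an ID test instead of (or in addition to) the depth threshold: replace the `keep` mask with an equality check against the fragment's object ID.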

[EDIT] BTW, this SSSHL/SHLPP/whatever-you-want-to-call-it is a really interesting idea, as it solves the low-res normal-detail problem of inferred =D

[EDIT #2]
Solid Angle actually mentions depth-threshold DSF:
Quote:I found that when you upsample the lighting buffer during the apply lighting stage naively, you would get halos around the edges of objects. I fixed this using a bilateral filter aware of depth discontinuities.
and Dead Voxels mentions ID-based DSF:
Quote:In fact, since the lighting is independent of things like normal discontinuities, you might even be able to get away with ignoring edge discontinuities. Or you could probably work around this using an ID buffer constructed later
So it's likely they're using something similar to the above ;)

[Edited by - Hodgman on May 19, 2010 8:20:50 PM]
n00body
Author
347
May 19, 2010 08:41 PM
Yeah, I got that part. What concerns me is that it means I would have to take 12 samples per pixel to perform my own interpolation. That seems like an awful lot to do for every material, so I was looking for any possible alternatives.


Hodgman
52,704
May 19, 2010 08:54 PM
Ah, yeah, the fat LBuffer... Actually you'd have to sample the ID/depth buffer too, so it'd be 16 samples for the filtering :/

If your depth buffer is single-component, then depending on the API / hardware you're targeting, you can perform the 4 depth samples at once using Gather (DX10.1/11) or Fetch4 (DX9 on ATI), which would make 13 samples instead of 16.

It might not be too bad though, because the 4 samples within each texture are right next to each other, so while the first one might cause a cache-miss, the following 3 are sure to be extremely quick.
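The fetch counts in this exchange can be tallied explicitly. A trivial sketch, with constants mirroring the buffer layout described in the thread:

```python
# Back-of-envelope fetch counts for the DSF upsample discussed above.
NUM_TAPS = 4        # nearest low-res texels sampled per high-res fragment
SH_BUFFERS = 3      # 3x fp16 targets holding the 1st-order SH coefficients

sh_fetches = NUM_TAPS * SH_BUFFERS       # 12 fetches for the SH data alone
naive_total = sh_fetches + NUM_TAPS      # plus 4 depth/ID taps -> 16
gather_total = sh_fetches + 1            # Gather/Fetch4 packs the 4 depths -> 13
print(sh_fetches, naive_total, gather_total)
```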
n00body
Author
347
May 20, 2010 01:02 AM
I'm starting to get the impression that the performance benefits gained from using low-resolution lighting buffers will ultimately be lost on the upsampling step. :(


Viik
252
May 20, 2010 05:04 AM
Hmm, why do you need to take 12 samples for upsampling? I think you only need 4. Your full-resolution fragment will be somewhere inside one of the low-res texels. Check which quarter of that texel it is positioned in, read the four adjacent texels, then check them against the DSF ID and interpolate proportionally to the position of the high-res fragment within the low-res texel.

[EDIT] I'm using DSF right now, but I've dropped the low-res lighting buffer; it was ruining the lighting calculated for high-frequency normal maps. On the other hand, if performance is the main concern, then it's definitely a win even with upsampling. Upsampling cost doesn't depend on the number of lights you use, so if you have a really large number of lights, a low-res buffer might give quite a good performance boost.

[EDIT] I forgot that you are interpolating SH coefficients, so 4 samples is not an option. How about doing a pass that calculates all the lighting from the SH after the coefficients are accumulated, and then upsampling that? You will lose some detail in the normal maps, but if performance is the main concern it might work.

[Edited by - Viik on May 20, 2010 5:04:53 AM]
n00body
Author
347
May 20, 2010 10:40 AM
The reason I wanted to try SSSHL was so that I could get high-resolution normal mapping detail and the ability to control how the lighting is applied in the material stage. If I calculate & upsample the normal mapped lighting before the material stage then how would this be any different from regular Inferred Lighting?


Viik
252
May 20, 2010 02:08 PM
It won't be much different, but what's stopping you from using different materials in an inferred rendering setup? Without DSF it's similar to light pre-pass: use one of the G-buffer channels for a material ID; a single value can be used for the material type as well as material parameters.
n00body
Author
347
May 20, 2010 02:42 PM
You're missing the point. The idea was to accumulate lighting in the SH buffers, and then let the objects decide how to apply it in their materials. By storing lighting in a basis that accounts for multiple directions, I can do more interesting tricks with it than I could with a Deferred/Inferred Lighting renderer.

For example, it would be much easier to do things like parallax effects, or materials with multiple layers that have their own independent lighting. I could also extract the dominant light direction & color to apply custom BRDFs per object.

Basically, I could avoid the nastiness of material IDs and splitting material properties between two passes. The materials themselves could determine how they are lit. Now can you see why calculating the normal-mapped lighting beforehand defeats the purpose of this system?
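One way the dominant-light extraction mentioned above might look, as a hedged sketch: assuming the material pass reads four linear SH coefficients [L0, L1x, L1y, L1z] per colour channel, the luminance-weighted band-1 vector gives an approximate dominant direction. SH basis normalisation constants are omitted for clarity, and the function name and layout are illustrative, not from the thread.

```python
import numpy as np

def dominant_light(sh_r, sh_g, sh_b):
    """Approximate dominant light direction and colour from 1st-order SH.

    Each argument is [L0, L1x, L1y, L1z] for one colour channel."""
    sh = [np.asarray(c, dtype=float) for c in (sh_r, sh_g, sh_b)]
    lum = np.array([0.2126, 0.7152, 0.0722])       # Rec. 709 luminance weights
    l1 = np.stack([c[1:] for c in sh])             # 3x3: per-channel linear band
    direction = lum @ l1                           # luminance-weighted L1 vector
    n = np.linalg.norm(direction)
    if n == 0.0:
        # Purely ambient lighting: no meaningful direction, band-0 colour only.
        return np.zeros(3), np.array([c[0] for c in sh])
    direction = direction / n
    # Colour: each channel's ambient term plus its linear band projected
    # onto the dominant direction.
    colour = np.array([c[0] + c[1:] @ direction for c in sh])
    return direction, colour
```

A custom per-object BRDF could then treat the result as a single directional light, which is the kind of trick a plain RGB lighting buffer can't support.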


This topic is closed to new replies.