Epaenetus

Questions on Baked GI Spherical Harmonics


In the next week or so, I'm going to start implementing Baked GI for my custom engine. I'm not too familiar with Spherical Harmonics (SH), but I'm planning on doing some more research to understand the math behind applying SH to GI.

 

For Baked GI my plan is:

  1. Offline, for each light probe in the scene, I render a cubemap and project it onto SH coefficients.
  2. At run-time, I linearly interpolate the SH coefficients of the surrounding probes for each object on the CPU.
  3. Finally, I feed the interpolated SH coefficients to the GPU for each object. (This is done when I am creating the GBuffer.)
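
Since SH is linear, steps 2 and 3 boil down to a weighted sum of probe coefficients that then gets uploaded as per-object shader constants. A minimal sketch of what I have in mind (the SH9Color type and BlendProbes helper are placeholders, not from any particular engine):

// 9 SH coefficients (an order-2 projection) per color channel.
struct SH9Color
{
    float coefficients[9][3]; // [basis function][R, G, B]
};

// SH is linear, so interpolating probes is just a weighted sum of their
// coefficients; the weights are assumed to sum to 1.
SH9Color BlendProbes(const SH9Color* probes, const float* weights, int count)
{
    SH9Color result = {};
    for (int p = 0; p < count; ++p)
        for (int i = 0; i < 9; ++i)
            for (int c = 0; c < 3; ++c)
                result.coefficients[i][c] += weights[p] * probes[p].coefficients[i][c];
    return result;
}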

Questions:

  • Is this a viable method to bake GI and use it at run-time? I've looked through some old posts on this forum and this is what I could piecemeal together, but I may be completely missing the mark.
  • How do I interpolate between the SH coefficients? (This is kind of a multi-part question.)
    • In Unity, it appears that they find the smallest fitting simplex of light probes that contains the center of the object. [Image: Unity light probes placed around a character]
    • How do I find that smallest fitting simplex?
    • Once I find the probes to interpolate between, can I just use bilinear/trilinear interpolation on each of the coefficients?
  • Is there a way to defer applying the SH coefficients?

 

I'm currently only looking at Baked GI, as most of the Dynamic GI techniques I've researched are out of my depth at the moment.

 

Thanks for reading all that!


Just a quick semi-answer whilst glancing over your question...

Here's the GDC presentation by Robert Cupisz:

http://gdcvault.com/play/1015312/Light-Probe-Interpolation-Using-Tetrahedral

And a link to his website, where he's reposted the slides and a video with some more info:
http://robert.cupisz.eu/post/75393800428/light-probe-interpolation
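
At its core, the tetrahedral method blends the four probes at the corners of the containing tetrahedron using the point's barycentric coordinates. A rough sketch of just the weight computation, assuming a consistently wound tetrahedron (the Vec3 type is a stand-in, and the tetrahedron search described in the talk is omitted):

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Signed volume of tetrahedron (a, b, c, d), up to a constant factor.
static float SignedVolume(Vec3 a, Vec3 b, Vec3 c, Vec3 d)
{
    return Dot(Cross(Sub(b, a), Sub(c, a)), Sub(d, a));
}

// Barycentric weights of p with respect to tetrahedron (a, b, c, d).
// The weights sum to 1, and all lie in [0, 1] when p is inside.
void BarycentricWeights(Vec3 p, Vec3 a, Vec3 b, Vec3 c, Vec3 d, float w[4])
{
    const float total = SignedVolume(a, b, c, d);
    w[0] = SignedVolume(p, b, c, d) / total;
    w[1] = SignedVolume(a, p, c, d) / total;
    w[2] = SignedVolume(a, b, p, d) / total;
    w[3] = 1.0f - w[0] - w[1] - w[2];
}

The resulting weights can be fed straight into a coefficient blend like the BlendProbes sketch above.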

Thanks for the references!


Another possible approach is to just store samples in a 3D grid, or multiple 3D grids. Grids are super simple to look up and interpolate, since they're uniform.

Thanks for the suggestion. I actually may try going this route to start off with. How do you handle blending between two 3D grids?
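
For a single grid, the lookup would be plain trilinear interpolation of the probe coefficients. A sketch of what I'm picturing, reusing the Vec3/SH9Color/BlendProbes helpers sketched earlier (the flat [x + y*nx + z*nx*ny] probe layout is just an assumption):

#include <algorithm>

SH9Color SampleGrid(const SH9Color* probes, int nx, int ny, int nz,
                    Vec3 gridMin, float cellSize, Vec3 p)
{
    // Continuous grid coordinates of the query point.
    float gx = (p.x - gridMin.x) / cellSize;
    float gy = (p.y - gridMin.y) / cellSize;
    float gz = (p.z - gridMin.z) / cellSize;

    // Lower corner of the containing cell, clamped so the +1 neighbor exists.
    int x0 = std::clamp((int)gx, 0, nx - 2);
    int y0 = std::clamp((int)gy, 0, ny - 2);
    int z0 = std::clamp((int)gz, 0, nz - 2);
    float fx = std::clamp(gx - x0, 0.0f, 1.0f);
    float fy = std::clamp(gy - y0, 0.0f, 1.0f);
    float fz = std::clamp(gz - z0, 0.0f, 1.0f);

    // Gather the 8 cell corners and their trilinear weights, then blend.
    SH9Color corners[8];
    float weights[8];
    for (int i = 0; i < 8; ++i)
    {
        int dx = i & 1, dy = (i >> 1) & 1, dz = (i >> 2) & 1;
        corners[i] = probes[(x0 + dx) + (y0 + dy) * nx + (z0 + dz) * nx * ny];
        weights[i] = (dx ? fx : 1.0f - fx) *
                     (dy ? fy : 1.0f - fy) *
                     (dz ? fz : 1.0f - fz);
    }
    return BlendProbes(corners, weights, 8);
}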


The approach that you described is definitely workable, and has been successfully used in many shipping games. For the conversion from cubemap to SH coefficients, there's some pseudocode in this paper that you might find useful. You can also take a look at the SH class from my sample framework for a working example of projecting a cubemap onto SH, as well as the code from the DirectXSH library (which you can also just use directly in your project, if you'd like).
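
The gist of that cubemap-to-SH projection: for every texel, compute its direction and the solid angle it subtends, evaluate the 9 basis functions in that direction, and accumulate radiance * basis * solid angle. A rough sketch (FaceDirection and FetchFace are hypothetical helpers whose details depend on your cubemap's face layout):

#include <cmath>

// The 9 real SH basis functions (bands 0-2), evaluated for a unit direction.
void EvalSHBasis9(float x, float y, float z, float sh[9])
{
    sh[0] = 0.282095f;
    sh[1] = 0.488603f * y;
    sh[2] = 0.488603f * z;
    sh[3] = 0.488603f * x;
    sh[4] = 1.092548f * x * y;
    sh[5] = 1.092548f * y * z;
    sh[6] = 0.315392f * (3.0f * z * z - 1.0f);
    sh[7] = 1.092548f * x * z;
    sh[8] = 0.546274f * (x * x - y * y);
}

void ProjectCubemapToSH(int faceSize, SH9Color& out)
{
    out = {};
    for (int face = 0; face < 6; ++face)
    for (int ty = 0; ty < faceSize; ++ty)
    for (int tx = 0; tx < faceSize; ++tx)
    {
        // Texel center mapped to [-1, 1] on the face plane.
        float u = 2.0f * (tx + 0.5f) / faceSize - 1.0f;
        float v = 2.0f * (ty + 0.5f) / faceSize - 1.0f;

        // Approximate solid angle subtended by this texel.
        float texelArea = (2.0f / faceSize) * (2.0f / faceSize);
        float dOmega = texelArea / powf(1.0f + u * u + v * v, 1.5f);

        Vec3 dir = FaceDirection(face, u, v);    // hypothetical helper
        Vec3 radiance = FetchFace(face, tx, ty); // hypothetical helper

        float sh[9];
        EvalSHBasis9(dir.x, dir.y, dir.z, sh);
        for (int i = 0; i < 9; ++i)
        {
            out.coefficients[i][0] += radiance.x * sh[i] * dOmega;
            out.coefficients[i][1] += radiance.y * sh[i] * dOmega;
            out.coefficients[i][2] += radiance.z * sh[i] * dOmega;
        }
    }
}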

These will help me out a ton, thanks!


Remedy (Quantum Break) dealt with this nicely by using essentially a sparse multi-scale grid: http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf

Thanks for the alternate approach. I'll probably look more into this after I get a basic version of Baked GI using Uniform Grids working.


Thanks for the suggestion. I actually may try going this route to start off with. How do you handle blending between two 3D grids?


We didn't. Every object sampled from only one grid at a time, which was chosen by figuring out the closest grid with the highest sample density. There were never any popping issues as long as there was some overlap between the two grids, so that an object could be located inside of both simultaneously.
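
In other words, the selection might look something like this (GridDesc is a made-up descriptor, with density measured in probes per world unit):

#include <vector>

struct GridDesc
{
    Vec3  boundsMin, boundsMax; // world-space bounds of the grid
    float probesPerUnit;        // sample density
};

// Returns the index of the densest grid containing the object, or -1.
int PickGrid(const std::vector<GridDesc>& grids, Vec3 pos)
{
    int   best = -1;
    float bestDensity = -1.0f;
    for (int i = 0; i < (int)grids.size(); ++i)
    {
        const GridDesc& g = grids[i];
        bool inside =
            pos.x >= g.boundsMin.x && pos.x <= g.boundsMax.x &&
            pos.y >= g.boundsMin.y && pos.y <= g.boundsMax.y &&
            pos.z >= g.boundsMin.z && pos.z <= g.boundsMax.z;
        if (inside && g.probesPerUnit > bestDensity)
        {
            best = i;
            bestDensity = g.probesPerUnit;
        }
    }
    return best;
}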



Each grid would then be stored in a set of 3D textures, which would be bound to each mesh based on whichever OBB the mesh intersected with

I'm attempting to go this route in terms of using the SH coefficients, but I am still unclear on some details. Would this approach mean that I would need to store all 27 SH coefficients (9 for each color channel) in 3D textures? If so, then I can probably store them all in 7 to 9 3D textures. Am I on the right track?

Yes, you'll either need to use multiple textures, or atlas them inside of one large 3D texture (separate textures is easier). It would be a lot easier if GPUs supported 3D texture arrays, but unfortunately they don't.
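
To make the counting concrete: 9 coefficients x 3 color channels = 27 floats per probe, which fits into 7 RGBA texels with one channel to spare. One possible CPU-side packing, purely illustrative (any layout works as long as the shader unpacks it the same way):

// Packs one probe's 27 coefficients into the corresponding texel of each
// of 7 RGBA 3D textures.
void PackProbe(const SH9Color& probe, float texel[7][4])
{
    int channel = 0;
    for (int i = 0; i < 9; ++i)
        for (int c = 0; c < 3; ++c)
        {
            texel[channel / 4][channel % 4] = probe.coefficients[i][c];
            ++channel;
        }
    texel[6][3] = 0.0f; // the 28th channel is unused padding
}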


So it took a while because of the holiday, but I finally got around to implementing everything. Now I am just trying to make sure everything is correct.

 

Here is a picture of what I have so far:

[Screenshot of my current baked GI results]

 

Robin Green's paper was super helpful. A lot of it still went over my head, but I've been able to put together a few things.

 

I have a 3D grid of light probes, as MJP suggested. I'm rendering a cubemap for every light probe and processing the cubemap to construct 9 SH coefficients from it for each color channel. When rendering the cubemap, I apply some ambient lighting to every object in order to account for objects in shadow. (I wasn't too sure about this one.)
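
For reference, here is how I'm consuming the coefficients at shading time: reconstruct irradiance for the surface normal by scaling each SH band with the cosine-lobe convolution constants from Ramamoorthi & Hanrahan (pi, 2*pi/3, pi/4). A sketch reusing the earlier helpers, assuming the coefficients store projected radiance:

// Irradiance for normal n from 9 radiance coefficients.
Vec3 EvalIrradiance(const SH9Color& sh, Vec3 n)
{
    static const float A[9] =
    {
        3.141593f,                                             // band 0: pi
        2.094395f, 2.094395f, 2.094395f,                       // band 1: 2*pi/3
        0.785398f, 0.785398f, 0.785398f, 0.785398f, 0.785398f  // band 2: pi/4
    };

    float basis[9];
    EvalSHBasis9(n.x, n.y, n.z, basis);

    Vec3 e = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 9; ++i)
    {
        float w = A[i] * basis[i];
        e.x += w * sh.coefficients[i][0];
        e.y += w * sh.coefficients[i][1];
        e.z += w * sh.coefficients[i][2];
    }
    return e; // divide by pi for a Lambertian diffuse response
}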

 

I'd like to try to get the nice occlusion that Ready At Dawn show in their SIGGRAPH presentation, pg. 18 (http://blog.selfshadow.com/publications/s2015-shading-course/rad/s2015_pbs_rad_slides.pdf). How do I get something like this?

 

I'm also wondering if anything looks really wrong with my current implementation. 


 


 

Generally an offline raytracer is used for baking indirect illumination, rather than just an ambient term: shoot rays with bounces all around and gather. Also, what exactly do you mean by "occlusion" like RaD? What occlusion specifically?
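
Something along these lines, as a sketch: sample directions over the sphere at each probe position, evaluate bounced radiance along each ray with the offline tracer, and project onto SH as a Monte Carlo estimate (TraceRadiance is a stand-in for whatever path tracer you have; the Vec3/SH9Color/EvalSHBasis9 helpers are reused from above):

#include <cmath>
#include <cstdlib>

// Uniform direction on the unit sphere from two random numbers in [0, 1).
static Vec3 UniformSphereSample(float u1, float u2)
{
    float z   = 1.0f - 2.0f * u1;
    float r   = sqrtf(fmaxf(0.0f, 1.0f - z * z));
    float phi = 2.0f * 3.141593f * u2;
    return { r * cosf(phi), r * sinf(phi), z };
}

void BakeProbeMonteCarlo(Vec3 probePos, int numRays, SH9Color& out)
{
    out = {};
    const float scale = 4.0f * 3.141593f / numRays; // sphere measure / samples
    for (int j = 0; j < numRays; ++j)
    {
        float u1 = rand() / (float)RAND_MAX;
        float u2 = rand() / (float)RAND_MAX;
        Vec3 dir = UniformSphereSample(u1, u2);

        // Hypothetical: path-traced radiance arriving at the probe from dir,
        // including bounces, instead of a flat ambient term.
        Vec3 L = TraceRadiance(probePos, dir);

        float sh[9];
        EvalSHBasis9(dir.x, dir.y, dir.z, sh);
        for (int i = 0; i < 9; ++i)
        {
            out.coefficients[i][0] += L.x * sh[i] * scale;
            out.coefficients[i][1] += L.y * sh[i] * scale;
            out.coefficients[i][2] += L.z * sh[i] * scale;
        }
    }
}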
