Spherical Harmonics and Lightmaps


Sorry for the late reply!

On 1/22/2018 at 5:22 AM, MJP said:

I'm sure plenty of games still bake the direct contribution for at least some of their lights. We certainly did this for The Order, and did it again for Lone Echo. Each of our lights has flags that control whether or not the diffuse and indirect lighting is baked, so the lighting artists could choose to fully bake the light if it was unimportant and/or would only ever need to affect static geometry.

Fair enough. I'll try to break down the granularity of choice to that level.

On 1/22/2018 at 5:22 AM, MJP said:

We also always baked area lights, since we didn't have run-time support for area lights on either of those games.

I'll look into it eventually. I'm sorry to say I haven't played The Order or Lone Echo (I don't have a PlayStation for the former, and I'm pretty broke either way :P), but I'm pretty sure The Order doesn't have much need for area lights given its setting. I'm hoping to generalize my engine a bit, so I'll probably want to handle it eventually, but it's not a priority right now.

On 1/22/2018 at 5:22 AM, MJP said:

For the sun shadows we also bake the shadow term to a separate lightmap. We'll typically use this for surfaces that are past the last cascade of dynamic runtime shadows, so that they still have something to fall back on.

I'm not sure I agree with this method, though I haven't tried it out, so I can't say that much about it.

On 1/22/2018 at 5:22 AM, MJP said:

if you want to compute a specular term then you need to store a radiance distribution in your lightmap

I don't get what you mean by this. Specular irradiance is very rarely handled in lightmaps as far as I'm aware; I don't really see the point.

On 1/22/2018 at 5:22 AM, MJP said:

If you want SH but only on a hemisphere, then you can check out H-basis. It's basically SH reformulated to only exist on the hemisphere surrounding the Z axis, and there's a simple conversion from SH -> H-basis. You can also project directly into H-basis if you want to. I have some shader code here for projecting and converting. You can also do a least-squares fit on SH to give you coefficients that are optimized for the upper hemisphere. That said, I'm sure you would be fine with the Last of Us approach of ambient + dominant direction (I believe they kept using that on Uncharted 4), but it's nice to know all of your options before making a decision.

Sounds quite interesting! I recently found a Call of Duty paper breaking down their use of hemispheres in lightmaps (among other things), so I'll check out all these sources a bit later, when I've actually got a preliminary lightmap working.
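
For reference, the ambient + dominant-direction approach MJP mentions boils down to something like this minimal C++ sketch (the float3 type and all names here are illustrative, not taken from any shipped engine):

```cpp
#include <algorithm>

struct float3 { float x, y, z; };

static float dot(const float3& a, const float3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// "Ambient + dominant direction" lightmap sample: a constant ambient
// term plus the colour and direction of one fitted directional light,
// all baked per texel.
struct BakedSample {
    float3 ambient;        // constant ambient irradiance
    float3 dominantColor;  // colour of the fitted directional light
    float3 dominantDir;    // unit direction toward that light
};

// Diffuse irradiance for a (possibly normal-mapped) surface normal n.
float3 evaluateDiffuse(const BakedSample& s, const float3& n) {
    float nDotL = std::max(dot(n, s.dominantDir), 0.0f);  // Lambert term
    return { s.ambient.x + s.dominantColor.x * nDotL,
             s.ambient.y + s.dominantColor.y * nDotL,
             s.ambient.z + s.dominantColor.z * nDotL };
}
```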

On 1/22/2018 at 5:22 AM, MJP said:

You don't necessarily have to store directions for a set of SG's in a lightmap. We assume a fixed set of directions in tangent space, which saves on storage and makes the solve easier. But that's really the nicest part of SG's: you have a lot of flexibility in how you can use them, as opposed to SH which has a fixed set of orthogonal basis functions. For instance you could store a direction for one SG, and use implicit directions for the rest. 

Very good point that I hadn't considered.

-------

On a separate note, there's one thing I can't figure out. How can I use multiple irradiance probes (SH) on a single mesh? Do I have to pass a static-sized array of SH primitives? I'm worried that'll take up far too much memory.

11 minutes ago, KarimIOAH said:

I don't get what you mean by this. Specular irradiance is very rarely handled in lightmaps as far as I'm aware; I don't really see the point.

If you want glossy surfaces to look right, you need to be able to reproduce both diffuse and specular lighting for lightmapped surfaces. Sure, using probes for reflections is common, but not suitable everywhere. IIRC, The Order used probes for extremely glossy surfaces and lightmaps for everything else, as they have better spatial density than probes. COD (black ops?) "normalises" their probes by dividing by the average color, and then recolours them using lightmap data to make it seem like their probes are higher density than they really are. Many games retrieve the dominant lighting direction / colour and use it to do a specular highlight from a fake directional light. Unity has a light baking mode that stores the dominant lighting direction for the same purpose. With the HL2 basis you can use a weighted average of the three basis vectors/lightmap values to get a fake specular light direction. On an early PS3 game we stored the direction to the closest baked point light in the lightmap for doing specular. At the least, you would probably want some kind of specular occlusion value in your lightmaps these days to modulate your probes with. 
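
To make the HL2-basis trick concrete, here's a rough C++ sketch of the weighted average (the basis constants are the standard Half-Life 2 ones; weighting by luminance is one reasonable choice, not necessarily what any shipped game used):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

// The three Half-Life 2 basis directions (tangent space, z = up).
static const float3 kHL2Basis[3] = {
    {  0.8164966f,  0.0f,       0.5773503f },  // ( sqrt(2/3),        0, 1/sqrt(3))
    { -0.4082483f,  0.7071068f, 0.5773503f },  // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    { -0.4082483f, -0.7071068f, 0.5773503f },  // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
};

static float luminance(const float3& c) {
    return 0.2126f * c.x + 0.7152f * c.y + 0.0722f * c.z;
}

// Weighted average of the basis vectors by the brightness of each
// lightmap value -> an approximate dominant light direction that can
// drive a fake specular highlight. Returns a tangent-space direction.
float3 fakeSpecularDirection(const float3 lightmap[3]) {
    float3 d = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i) {
        float w = luminance(lightmap[i]);
        d.x += w * kHL2Basis[i].x;
        d.y += w * kHL2Basis[i].y;
        d.z += w * kHL2Basis[i].z;
    }
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len > 1e-6f) { d.x /= len; d.y /= len; d.z /= len; }
    else             { d = { 0.0f, 0.0f, 1.0f }; }  // degenerate: fall back to the normal
    return d;
}
```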

20 minutes ago, KarimIOAH said:

On a separate note, there's one thing I can't figure out. How can I use multiple irradiance probes (SH) on a single mesh? Do I have to pass a static-sized array of SH primitives? I'm worried that'll take up far too much memory.

That's a similar problem of how to use many lights on one mesh. You can break the screen into tiles and store a list per tile. You can put them all in a big buffer and store indices per vertex, or just pre-bake/blend them into the vertices. You can store a list of them per mesh in a cbuffer. You can put them in a 3D volume texture and do linear filtered samples. 
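
As a rough sketch of the second option, "big buffer plus per-vertex indices" (the L1 probe layout and the fixed four probes per vertex are illustrative assumptions, not a specific engine's format):

```cpp
#include <cstdint>
#include <vector>

// One L1 SH probe: 4 coefficients for each of R, G, B.
struct SHProbeL1 {
    float r[4], g[4], b[4];
};

// All probes in the level live in one array (a structured buffer on
// the GPU); each vertex stores which probes affect it, plus weights.
struct ProbeVertexData {
    uint16_t probeIndex[4];  // indices into the global probe buffer
    float    weight[4];      // blend weights, should sum to 1
};

SHProbeL1 blendProbes(const std::vector<SHProbeL1>& probes,
                      const ProbeVertexData& v) {
    SHProbeL1 out = {};
    for (int i = 0; i < 4; ++i) {
        const SHProbeL1& p = probes[v.probeIndex[i]];
        for (int c = 0; c < 4; ++c) {
            out.r[c] += v.weight[i] * p.r[c];
            out.g[c] += v.weight[i] * p.g[c];
            out.b[c] += v.weight[i] * p.b[c];
        }
    }
    return out;  // interpolating SH coefficients is linear, so blend
                 // first, then evaluate per pixel
}
```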

12 minutes ago, Hodgman said:

If you want glossy surfaces to look right, you need to be able to reproduce both diffuse and specular lighting for lightmapped surfaces. Sure, using probes for reflections is common, but not suitable everywhere. IIRC, The Order used probes for extremely glossy surfaces and lightmaps for everything else, as they have better spatial density than probes. COD (black ops?) "normalises" their probes by dividing by the average color, and then recolours them using lightmap data to make it seem like their probes are higher density than they really are. Many games retrieve the dominant lighting direction / colour and use it to do a specular highlight from a fake directional light. Unity has a light baking mode that stores the dominant lighting direction for the same purpose. With the HL2 basis you can use a weighted average of the three basis vectors/lightmap values to get a fake specular light direction. On an early PS3 game we stored the direction to the closest baked point light in the lightmap for doing specular. At the least, you would probably want some kind of specular occlusion value in your lightmaps these days to modulate your probes with. 

Oh, alright, I get what you're talking about, but it took me until halfway through the paragraph :P It's also required for normal mapping, though, right? What I'm confused about is what it has to do with specular specifically, and with glossy surfaces at all.

19 minutes ago, Hodgman said:

That's a similar problem of how to use many lights on one mesh. You can break the screen into tiles and store a list per tile. You can put them all in a big buffer and store indices per vertex, or just pre-bake/blend them into the vertices. You can store a list of them per mesh in a cbuffer. You can put them in a 3D volume texture and do linear filtered samples. 

Dammit, I was looking forward to programming the SH solution until now :P So far I'm not using a tile- or cluster-based deferred solution, just basic deferred rendering, so the first suggestion would be further down the line. I can't see the second suggestion working on anything with little tessellation. The third suggestion sounds very restrictive; that kind of restriction is one of the main reasons we abandoned forward lighting in general. The last suggestion is also quite restrictive, as I don't want to be reliant on uniform probe grids, but it's probably the one I'll try until I can get the first suggestion working.

1 hour ago, KarimIOAH said:

Oh, alright, I get what you're talking about, but it took me until halfway through the paragraph :P It's also required for normal mapping, though, right? What I'm confused about is what it has to do with specular specifically, and with glossy surfaces at all.

Cubemaps are only sampled from one spatial point, maybe two or so if you're blending between them. An H-basis lightmap, by contrast, samples light at every texel. You just contribute whatever specular response you can from your spherical harmonics to compensate for the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise.

Doing all this helps correlate your diffuse color to your specular response, which will become uncorrelated the more incorrect your cubemaps become.

BTW if you're curious I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf

The idea is to voxelize your level into a sparse voxel grid, then place SH (or SG, or whatever) probes at each relevant grid point. The overall spatial resolution is less than a lightmap's, but it's much easier to change the lighting in realtime, and it uses exactly the same lighting terms for static and dynamic objects. It might not seem intuitive, but having a uniform lighting response across all objects gives a nicer look than the kind of disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.
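
As a rough illustration, sampling such a grid might look like the C++ sketch below. This uses a dense grid for simplicity, whereas the Remedy notes describe a sparse structure; all names and the L1 probe layout are assumptions.

```cpp
#include <algorithm>
#include <vector>

// One L1 SH probe: 4 coefficients for each of R, G, B.
struct SHProbeL1 { float c[12]; };

// Dense stand-in for the sparse grid: probes at regular world-space
// intervals, trilinearly interpolated at runtime so static and dynamic
// objects share the same lighting terms. Assumes >= 2 probes per axis.
struct ProbeGrid {
    int nx, ny, nz;
    float cellSize;                 // world units between probes
    float origin[3];                // world position of probe (0, 0, 0)
    std::vector<SHProbeL1> probes;  // nx * ny * nz entries, x fastest

    const SHProbeL1& at(int x, int y, int z) const {
        return probes[(z * ny + y) * nx + x];
    }
};

SHProbeL1 sampleTrilinear(const ProbeGrid& g, const float p[3]) {
    // Convert to grid space and clamp to the valid interpolation range.
    float gx = std::clamp((p[0] - g.origin[0]) / g.cellSize, 0.0f, g.nx - 1.0f);
    float gy = std::clamp((p[1] - g.origin[1]) / g.cellSize, 0.0f, g.ny - 1.0f);
    float gz = std::clamp((p[2] - g.origin[2]) / g.cellSize, 0.0f, g.nz - 1.0f);

    int x0 = std::min((int)gx, g.nx - 2);
    int y0 = std::min((int)gy, g.ny - 2);
    int z0 = std::min((int)gz, g.nz - 2);
    float fx = gx - x0, fy = gy - y0, fz = gz - z0;

    // Blend the 8 surrounding probes; interpolating SH coefficients is
    // linear, so it's safe to do before evaluation.
    SHProbeL1 out = {};
    for (int dz = 0; dz <= 1; ++dz)
    for (int dy = 0; dy <= 1; ++dy)
    for (int dx = 0; dx <= 1; ++dx) {
        float w = (dx ? fx : 1.0f - fx) * (dy ? fy : 1.0f - fy)
                * (dz ? fz : 1.0f - fz);
        const SHProbeL1& s = g.at(x0 + dx, y0 + dy, z0 + dz);
        for (int i = 0; i < 12; ++i) out.c[i] += w * s.c[i];
    }
    return out;
}
```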

2 hours ago, KarimIOAH said:

just basic deferred rendering,

You can treat your probes like typical deferred point lights in that case. To solve the issue where multiple of these "ambient point lights" overlap, you can have them add 1.0f into the alpha channel and then divide the lighting buffer by alpha after drawing all the ambient lights but before any normal lights. 
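
In CPU-pseudocode terms the per-pixel math is something like this (in a real renderer the accumulate step is just additive blending on the light geometry draws, and the resolve is a full-screen pass; names are illustrative):

```cpp
struct float4 { float r, g, b, a; };

// During the "ambient point light" pass, each probe covering the pixel
// adds its lighting to RGB and 1.0 to alpha (additive blending).
void accumulateAmbientLight(float4& lightBuffer,
                            float r, float g, float b) {
    lightBuffer.r += r;
    lightBuffer.g += g;
    lightBuffer.b += b;
    lightBuffer.a += 1.0f;  // count of overlapping ambient lights
}

// Before drawing the normal lights: divide by alpha so overlapping
// probes average instead of summing. Pixels no probe touched keep
// alpha 0 and are left as-is.
void resolveAmbient(float4& lightBuffer) {
    if (lightBuffer.a > 0.0f) {
        lightBuffer.r /= lightBuffer.a;
        lightBuffer.g /= lightBuffer.a;
        lightBuffer.b /= lightBuffer.a;
        lightBuffer.a = 1.0f;
    }
}
```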

2 hours ago, KarimIOAH said:

It's also required for normal mapping, though, right? What I'm confused about is what it has to do with specular specifically, and with glossy surfaces at all.

Yeah, even just the addition of normal maps means that you suddenly need some form of advanced lightmapping, unless your normal maps are lower resolution than your lightmaps...

So, at one extreme, you can imagine storing a full light probe at each texel of the lightmap - obviously not feasible, but it does let you do correct lighting. At the other extreme, you reduce all of those probes to a single RGB value - this lets you have correct diffuse, as you can do the cosine weighting during baking (assuming no normal maps!), but it does not let you do specular at all. All of the above techniques are middle grounds that try to allow for normal maps (the normal isn't known at baking time) and specular (you need to be able to integrate radiance over different sized/oriented cones at runtime).
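
For the "full probe, normal only known at runtime" end of that spectrum, evaluating irradiance from 9 SH coefficients at an arbitrary normal is the standard Ramamoorthi-Hanrahan polynomial. A sketch, per colour channel, with an assumed coefficient layout:

```cpp
struct float3 { float x, y, z; };

// 9 order-2 SH coefficients for one colour channel, laid out as:
// L[0]=L00, L[1]=L1-1, L[2]=L10, L[3]=L11,
// L[4]=L2-2, L[5]=L2-1, L[6]=L20, L[7]=L21, L[8]=L22.
// Ramamoorthi & Hanrahan's irradiance polynomial: the cosine lobe is
// folded into the constants, so this returns irradiance for normal n.
float shIrradiance(const float L[9], const float3& n) {
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f,
                c4 = 0.886227f, c5 = 0.247708f;
    return c1 * L[8] * (n.x * n.x - n.y * n.y)
         + c3 * L[6] * n.z * n.z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * n.x * n.y + L[7] * n.x * n.z + L[5] * n.y * n.z)
         + 2.0f * c2 * (L[3] * n.x + L[1] * n.y + L[2] * n.z);
}
```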

15 hours ago, FreneticPonE said:

Cubemaps are only sampled from one spatial point, maybe two or so if you're blending between them. An H-basis lightmap, by contrast, samples light at every texel. You just contribute whatever specular response you can from your spherical harmonics to compensate for the fact that the cubemap is almost certainly going to be incorrect to some degree. For rough surfaces the entire specular response can come from the lightmap, and thus (except for dynamic stuff) be entirely correct position-wise.

But those issues can be mitigated by using features like parallax cubemapping and SSAO. Not fully, to be sure, but when it's the high-frequency data that matters, I don't see how cramming in more low-frequency data can help all that much. And how would the two even be combined?

15 hours ago, FreneticPonE said:

BTW if you're curious I'd consider "state of the art" to be Remedy's sparse SH grid used in Quantum Break: https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf

I skimmed through most of this and it's quite interesting so far. 

15 hours ago, FreneticPonE said:

It might not seem intuitive, but having a uniform lighting response across all objects gives a nicer look than the kind of disjointed look you get from high-detail lightmaps sitting right next to dynamic objects with less detailed indirect lighting.

That was my thought process for the most part, but I think many are able to combine lightmaps and SH quite well.

14 hours ago, Hodgman said:

You can treat your probes like typical deferred point lights in that case. To solve the issue where multiple of these "ambient point lights" overlap, you can have them add 1.0f into the alpha channel and then divide the lighting buffer by alpha after drawing all the ambient lights but before any normal lights.

Sounds interesting! I'll try this out then. Do you have any articles about this technique?

Cubemaps only offer low-frequency spatial data, ultra-low-frequency no matter how much angular frequency they offer. Invariably, the farther you get from the sample point, or if the shading point is just behind a pole or something, the less correct the data will be, no matter how high the resolution. Lightmaps are ultra-high-frequency spatial data; even if their angular data is low frequency, it can still be more correct than a cubemap, no matter how many tricks you pull. And SSAO only works with on-screen data, and only for darkening things.

Most modern SH/SG lightmaps are used to somewhat correct or supplement cubemaps.
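
One plausible (illustrative, not canonical) way to combine the two is a roughness-based cross-fade: rough surfaces lean on the spatially accurate lightmap specular, glossy ones on the angularly sharp cubemap. The cutoff below is a made-up tunable, not something from any shipped title:

```cpp
#include <algorithm>

struct float3 { float x, y, z; };

static float3 lerp3(const float3& a, const float3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Cross-fade: rough surfaces take their specular from the (spatially
// accurate, angularly coarse) lightmap; glossy surfaces from the
// (angularly sharp, spatially approximate) cubemap.
float3 combineSpecular(const float3& lightmapSpec,
                       const float3& cubemapSpec,
                       float roughness) {
    // Assumed tunable: where the lightmap fully takes over.
    const float roughnessCutoff = 0.6f;
    float t = std::min(roughness / roughnessCutoff, 1.0f);
    return lerp3(cubemapSpec, lightmapSpec, t);
}
```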

17 hours ago, KarimIO said:

Sounds interesting! I'll try this out then. Do you have any articles about this technique?

Check out this one. He even supports runtime dynamic updates to the probes in a very efficient manner (updating only the lighting, without re-rendering the probes, via cubemap G-buffer caching):

http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/

8 hours ago, FreneticPonE said:

Cubemaps only offer low-frequency spatial data, ultra-low-frequency no matter how much angular frequency they offer. Invariably, the farther you get from the sample point, or if the shading point is just behind a pole or something, the less correct the data will be, no matter how high the resolution.

Check out this neat extension to probes to achieve awesome spatial resolution (at quite some expense...) 

http://graphics.cs.williams.edu/papers/LightFieldI3D17/

http://casual-effects.com/research/McGuire2017LightField/index.html

13 hours ago, Hodgman said:

Check out this neat extension to probes to achieve awesome spatial resolution (at quite some expense...) 

http://graphics.cs.williams.edu/papers/LightFieldI3D17/

http://casual-effects.com/research/McGuire2017LightField/index.html

Oof, I remember that second one. At that point more traditional path tracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory, as there wouldn't be multiple copies of the same data.

2 hours ago, FreneticPonE said:

Oof, I remember that second one. At that point more traditional path tracing is just as fast or faster, doesn't have any missing-data problems, and would probably use less memory, as there wouldn't be multiple copies of the same data.

For complex scenes it gets expensive, but they do Sponza in half a millisecond (on good hardware, of course). In the same future-facing direction, there's also this one, which is a great marriage of both "light-maps" and probes:

https://users.aalto.fi/~silvena4/Projects/RTGI/index.html

