# What's the advantage of Spherical Gaussians used in The Order vs. an Environment Map?


## Recommended Posts

Reading the Ready at Dawn paper, I see that SG has some interesting properties:

Each lobe can have an individual width and direction.

But in practice those advantages are lost:

Direction and width are hardcoded to cover the hemisphere uniformly.

They use 9 coefficients, so comparing this with a 3x3 environment map:

Uniform coverage is given by default.

Sampling a reflection direction needs to touch only 4 values, not all 9, and it can be done in hardware with bilinear filtering.

(A 4x4 envmap with mipmaps would be even nicer.)

So I totally do not understand why SG was chosen. What am I missing here?

##### Share on other sites

Every texel in The Order: 1886's lightmap is an SG with 9 lobes. Note that one SG lobe needs 3 parameters: direction (2 scalars), radiance (3 scalars), and width (1 scalar), so we need 6 scalars to represent one lobe. In practice, The Order: 1886 hardcoded the direction and width of every lobe to cover the sphere uniformly, so they only need 9 (lobes) x 3 (radiance) = 27 scalars per texel. Note that these 27 lobe coefficients can be HDR values.

1. When baking, the 3x3 env map (3x3x6 = 36 scalars) for one texel is not easy to pack into the lightmaps, compared with only 27 scalars for SG.
2. Also note that when sampling the env map, bilinear filtering cannot cross a face boundary. In fact, SG is better suited than an env map to represent a spherical function for interpolated evaluation.
3. When evaluating lighting, the SG representation is also far more efficient than an env map, by exploiting the orthogonality of its basis, according to The Order: 1886 course notes.
4. I also noticed that The Order used SG mostly for rough materials. So, in my limited opinion, as long as it is better than SH and the H-basis, it is a win...

All of the above is why I think SG is better than an env map... :) Sorry for my rushed reply...
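To make the bookkeeping above concrete, here is a minimal sketch (not Ready at Dawn's actual code) of one SG lobe and the 27-scalar storage count, assuming the standard SG form G(v) = a · exp(λ(v·μ − 1)):

```python
import math

def sg_eval(direction, axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian lobe: a * exp(sharpness * (dot(v, mu) - 1)).

    direction, axis: unit 3-vectors as (x, y, z) tuples.
    """
    d = sum(v * m for v, m in zip(direction, axis))
    return amplitude * math.exp(sharpness * (d - 1.0))

# Full per-lobe storage: axis (2 scalars as spherical angles) + RGB amplitude (3)
# + sharpness (1) = 6 scalars. With axis and sharpness hardcoded per lobe (as in
# The Order: 1886), only the RGB amplitudes are stored:
NUM_LOBES = 9
stored_scalars = NUM_LOBES * 3  # 27 scalars per lightmap texel
```

Evaluating along the lobe axis returns the amplitude itself, since the dot product is 1 and the exponential collapses to 1.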


##### Share on other sites

I don't mean a cube map; I mean a classical environment map (looking like a photo of a mirror ball).

That's 9 texels vs. 9 lobes, thus equal storage (of course it covers only the hemisphere).

> Note that one SG lobe needs 3 parameters: direction (2 scalars), radiance (3 scalars), and width (1 scalar), so we need 6 scalars to represent one lobe. In practice, The Order: 1886 hardcoded the direction and width of every lobe to cover the hemisphere uniformly.

That's what I said, and because direction and width are hardcoded, any potential advantage is lost.

With an env map you don't need to test against 9 hardcoded directions; you just convert the reflection vector to UV like this:

```hlsl
// Half the env map resolution at the chosen mip level.
float fdim = 0.5f * float(MAX_RES >> mip_level);
// Map the reflection vector's xy from [-1, 1] to texel coordinates.
float2 uv = (1.0f + localReflectionVector.xy) * fdim - 0.5f;
```

So an envmap should be much faster at the same quality.
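In scalar form, a hypothetical Python port of the HLSL above (for illustration only, not production code) shows the mapping and why a bilinear fetch touches only 4 values:

```python
def reflection_to_uv(rx, ry, res):
    """Map a local reflection vector's xy (each in [-1, 1]) to texel coordinates
    on a res x res spheremap, matching uv = (1 + R.xy) * res/2 - 0.5."""
    fdim = 0.5 * res
    return (1.0 + rx) * fdim - 0.5, (1.0 + ry) * fdim - 0.5

# A bilinear fetch only ever touches the 2x2 texel neighbourhood around (u, v).
u, v = reflection_to_uv(0.0, 0.0, 3)  # straight-up reflection on a 3x3 map
# → lands exactly on the centre texel, (1.0, 1.0)
```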

##### Share on other sites

So, you're suggesting a lightmap where each "texel" of the lightmap is actually 3x3 texels, containing an environment map (I'm guessing a traditional spheremap)?

First off, those traditional maps are circular, so you'd have to square the circle to avoid wasting your corner texels. Plus, a 3x3 spheremap would have one "straight up" sample and a ring of 8 "down and outwards" samples, which is not very uniform sphere coverage.

At the end of the day, you're still storing 9 light values with 9 hard-coded directions.

The difference is that your 9 directions are an arbitrary choice stemming from your choice of env-map layout, while in the SG method they were able to pick their sample directions and lobe widths very carefully, to get the most detail they could out of this small sample count.

Also, IIRC, their 9 samples are actually 18 directions: + and - along each sample direction, with whichever of the two lies above the local horizon being the one used. I might've imagined that bit, though...

So, you're trading quality for efficiency.


##### Share on other sites

> So, you're suggesting a lightmap, where each "texel" of the lightmap is actually 3x3 texels, containing an environment map (I'm guessing traditional spheremap)?

Yes, that's exactly what I'm doing, in real time, with infinite bounces and everything dynamic.

It will take me months to port from CPU to compute shaders, but I'm hopeful it's fast enough for consoles.

The main downsides are memory requirements, the need to UV-atlas everything, and temporal lag depending on performance.

About the lost area in the corners of the spheremap (yep, that's a good term :) ):

I have code that calculates the exact coverage of each texel, but I can only run it at power-of-two resolutions.

For a 4x4 map, the coverage is only 31.5% for a corner texel, 91.3% for its edge neighbour, and 100% for the diagonal neighbour.

Weighting this by hand, (3*31.5 + 2*3*91.3 + 1*100)/16, I still get an approximate coverage of 46% for a 3x3 corner texel.

For an 8x8 spheremap the corner texels are wasted, but even there I'd accept that.
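The coverage figures quoted above can be reproduced with a brute-force estimate (a supersampling sketch, not the poster's exact code): treat the spheremap's valid region as the unit disc inscribed in the [-1, 1]² square, and count per texel the fraction of sub-samples falling inside the disc.

```python
def texel_disc_coverage(res, tx, ty, samples=400):
    """Fraction of texel (tx, ty) on a res x res spheremap that lies inside
    the unit disc (the valid spheremap area), estimated by supersampling."""
    size = 2.0 / res               # texel edge length in [-1, 1] space
    x0 = -1.0 + tx * size
    y0 = -1.0 + ty * size
    inside = 0
    for i in range(samples):
        for j in range(samples):
            x = x0 + (i + 0.5) / samples * size
            y = y0 + (j + 0.5) / samples * size
            if x * x + y * y <= 1.0:
                inside += 1
    return inside / (samples * samples)

# For a 4x4 map this reproduces the figures from the post:
# corner texel ≈ 31.5%, its edge neighbour ≈ 91.3%, diagonal neighbour 100%.
```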

I was thinking about a remapping like in your ShaderToy, but I'm pretty sure the distortion at the diagonals would look too discontinuous.

> The difference is that your 9 directions are an arbitrary choice stemming from your choice of env-map layout, and in the SG method they were able to very carefully pick their sample directions and the lobe widths of those samples, in order to get the best amount of detail out of this small sample count as they could.

The spheremap layout is pretty clever. It's cosine-weighted (summing and averaging all texels gives a perfect diffuse Lambert reflection), and it has more reflection detail at grazing angles.

That's exactly what we want, isn't it? So why try to handpick something that can be defined by solid (and fast) math?

However, because of the regular distribution I get some banding artefacts on a perfect mirror material, but I guess in practice some real-world bumpiness can hide that well enough.

(A better filter than bilinear could hide it too)

I assume SG has an advantage here. Maybe that's the reason, but I guess it's a very small advantage at a high cost.

I was hoping there was something I totally missed, but I guess we've covered everything related already.

##### Share on other sites

"Real time"? "Infinite bounces"? "Dynamic"? I'm completely lost as to how environment maps get you anything like that over spherical Gaussians/harmonics. But generally SG/SH is used both for its memory requirements and because it's easy to fit to arbitrary BRDFs.

##### Share on other sites

I don't use the spheremap to calculate GI, just to store the results. (Others do; see the ManyLoDs paper for an example.)

The reason for my question is: I want to make sure it is not worth trying SG for my purpose (I've only tried SH).

In my opinion it makes sense to use SG/H instead of cubemaps for a volume, but it makes little sense to use them if you work at the surface.

##### Share on other sites
Try this: what are the 3D vectors given by the 2D texture coordinates of the 3x3 map? How uniform is their distribution over the local hemisphere?
i.e.
(0.5/3, 0.5/3) (1.5/3, 0.5/3) (2.5/3, 0.5/3),
(0.5/3, 1.5/3) (1.5/3, 1.5/3) (2.5/3, 1.5/3),
(0.5/3, 2.5/3) (1.5/3, 2.5/3) (2.5/3, 2.5/3)
Also: how do you convolve your env map with your BRDF?
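Plugging those nine texel centres into the inverse of the mapping discussed earlier (assuming the simple xy-disc parameterisation, with z = sqrt(1 − x² − y²)) makes the non-uniformity easy to see:

```python
import math

def texel_center_direction(res, tx, ty):
    """Hemisphere direction for the centre of texel (tx, ty) on a res x res
    spheremap, inverting R.xy = 2 * (t + 0.5) / res - 1."""
    x = 2.0 * (tx + 0.5) / res - 1.0
    y = 2.0 * (ty + 0.5) / res - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# Angle from the surface normal (0, 0, 1) for each class of 3x3 texel:
for tx, ty, label in [(1, 1, "centre"), (1, 0, "edge"), (0, 0, "corner")]:
    x, y, z = texel_center_direction(3, tx, ty)
    print(label, round(math.degrees(math.acos(z)), 1))
# centre 0.0°, edge ≈ 41.8°, corner ≈ 70.5° — one straight-up sample and a
# ring of eight tilted ones, i.e. far from a uniform spread.
```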

##### Share on other sites

Agreed, it may be a problem. The spheremap seems like a very raw representation of a texel's radiance. I can't see its advantage over a proper spherical basis, except for its (not-that-accurate) efficiency. :wink:

##### Share on other sites

I made this pic to show what area of the hemisphere is covered by each texel for 4x4:

We see that texels at the edge cover more area, because samples from those angles contribute less to diffuse.

But at the same time the detail at the edge is highest, which is good for specular reflections.

So that's a nice distribution.

But I agree the initial grid remains visible, and the golden-ratio spiral used in The Order probably does a better job of hiding the underlying low sample count, trading low-frequency noise against low-frequency banding.

I'm just willing to accept this because my goal is mainly dynamic lighting. Of course I can't compete with precomputation in terms of detail.

For the BRDF, to approximate a cone you can take a trilinear sample, as you would with a cube map.

Currently I don't want to spend time on PBR shaders to prove how it will really look, but from what I see now, a 4x4 map is enough to cover most real-world materials, including metal.
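The cone-to-mip idea above can be sketched roughly like this (a hypothetical helper; the footprint formula is my assumption, not anything stated in the thread):

```python
import math

def mip_for_cone(cone_half_angle, res):
    """Pick a mip level whose texel footprint roughly matches a reflection cone.

    cone_half_angle: half angle of the BRDF cone in radians (wider = rougher).
    res: base resolution of the spheremap.
    """
    # Rough footprint of the cone on the map, in base-level texels (assumption).
    footprint = math.tan(cone_half_angle) * res
    max_mip = math.log2(res)  # coarsest mip is the 1x1 level
    return min(max(0.0, math.log2(max(footprint, 1.0))), max_mip)

# A mirror-like (narrow) cone stays at mip 0; a wide cone climbs toward the
# coarsest mip, which is the trilinear analogue of pre-filtered cube map lookups.
```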
