Source engine shading model

quote:I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Valve computes three colors per lumel. Those colors are computed using standard radiosity and only ONE surface normal per lumel is used. But the three colors stored in the lightmap are not computed using this surface normal. Instead, the lightmap uses the three basis vectors transformed into tangent space. So Valve simply replaces the surface normal with three normals when computing the final lightmap color(s). This doesn't impact radiosity at all.
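For reference, here is a small C++ sketch of the tangent-space basis being talked about. The three vectors are the ones commonly quoted from Valve's GDC material; I'm taking the exact numbers on faith, so treat this as an illustration of the idea rather than their actual code.

// Illustration only - the three tangent-space basis vectors usually
// attributed to the Source engine (z is the unperturbed surface normal).
// Each is unit length, the three are mutually orthogonal, and each makes
// the same angle with the normal (z component = 1/sqrt(3)).
struct Vec3 { float x, y, z; };

static const Vec3 kBumpBasis[3] = {
    {  0.816497f,  0.000000f, 0.577350f },  // (  sqrt(2/3),          0, 1/sqrt(3) )
    { -0.408248f,  0.707107f, 0.577350f },  // ( -1/sqrt(6),  1/sqrt(2), 1/sqrt(3) )
    { -0.408248f, -0.707107f, 0.577350f },  // ( -1/sqrt(6), -1/sqrt(2), 1/sqrt(3) )
};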
quote:Original post by Brandlnator
quote:I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Valve computes three colors per lumel. Those colors are computed using standard radiosity and only ONE surface normal per lumel is used. But the three colors stored in the lightmap are not computed using this surface normal. Instead, the lightmap uses the three basis vectors transformed into tangent space. So Valve simply replaces the surface normal with three normals when computing the final lightmap color(s). This doesn't impact radiosity at all.

I don't follow you. How do they turn one colour at a lumel into three without changing the way they calculate the radiosity? Normally the lightmap colour is just the result of the radiosity calculation at that lumel. How can you take one colour and three 'normals' and turn it into three colours?

Game Programming Blog: www.mattnewport.com/blog

Matt,
I don't follow that explanation either. Since the radiosity method only deals with diffuse surfaces, the radiant exitance is reflected equally in all directions and mainly depends on the patch normal. Therefore, when looking at the cave screenshots (for each component), I can see no way of getting such varying exitant radiances without actually changing the patch normal according to the basis vectors, as you suggested.

But then, how do they refine the radiosity solution for a single basis vector? I mean, what normal vectors are all the other patches using? It seems wrong to do three refinement passes, each one with all patches using the same basis vector.

I'm just getting into GI too (I did my first MC sampling in my raytracer a few days ago) and I'm unsure about a lot of GI issues. One thing that has helped me get started is the Global Illumination Compendium:
http://www.cs.kuleuven.ac.be/~phil/GI/
(in case you didn't know it ;-)
Glad to hear I'm not the only one confused by this... Thanks for that link - it looks useful.

Game Programming Blog: www.mattnewport.com/blog

quote:Original post by mattnewport
Glad to hear I'm not the only one confused by this... Thanks for that link - it looks useful.


When you calculate the contribution of one lumel to another, you have two lumels -> you have the direction from which that energy came to the lumel being shaded. Split that energy, using the direction to the second lumel together with the tangent basis of the first one (the lumel being lit), and voila - you get three values.
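If that reading is right, the split itself is just three clamped dot products against the tangent-space basis. A minimal C++ sketch, with made-up names, assuming 'dir' is the unit direction (in the receiving lumel's tangent space) the energy arrived from:

#include <algorithm>

struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

inline float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Distribute energy arriving from 'dir' over the three basis lightmap values.
void SplitIncomingEnergy(const Vec3 basis[3], const Vec3& dir,
                         const Rgb& energy, Rgb lightmap[3])
{
    for (int i = 0; i < 3; ++i) {
        float w = std::max(0.0f, Dot(basis[i], dir)); // ignore directions behind a basis vector
        lightmap[i].r += w * energy.r;
        lightmap[i].g += w * energy.g;
        lightmap[i].b += w * energy.b;
    }
}

Whether the weights get normalised so the three components sum back to the classic cosine term is something the slides don't spell out, as far as I can tell.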
Here's how I understand it...

They use that basis / the 3 colours as an accumulation of all the incoming light at that lumel.

So let's say we have 3 lights: topRight(0.5,0.5,0), top(0,1,0), and frontTop(0,0.5,0.5). That gets accumulated and stored in the lightmap as (0.5, 2.0, 0.5).

Then at rendering time it's only one pass: if a pixel normal in the normal map is (1.0, 0, 0), the shade becomes 0.5. If the pixel normal is (-1.0, 0, 0), the shade becomes 0.0, since there's no incoming light in that direction.

I think this has mostly to do with using large lightmap pixels (res: ~0.25 metres) with high-frequency normal maps (res: ~1 cm).

Pretty clever idea...
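For what it's worth, here is how I picture the runtime combine just described, as a C++ sketch of what the pixel shader would do (my reading, not Valve's shader; names are made up):

#include <algorithm>

struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

inline float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 'n' is the normal-map normal in tangent space, 'lightmap' holds the three
// colours sampled from the directional lightmaps for this pixel.
Rgb ShadePixel(const Vec3 basis[3], const Vec3& n, const Rgb lightmap[3])
{
    Rgb out = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i) {
        float w = std::max(0.0f, Dot(basis[i], n)); // no contribution from behind
        out.r += w * lightmap[i].r;
        out.g += w * lightmap[i].g;
        out.b += w * lightmap[i].b;
    }
    return out; // the diffuse texture colour would be multiplied in afterwards
}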
In slide 39, where they show all the textures used to render the cave example, there seem to be 3 _RGB_ lightmaps: one per basis vector.

That would make sense: storing color instead of luminance allows you to render the color bleeding effect.
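So per lumel the storage would simply be three RGB colours instead of one - something like this (hypothetical layout, just to make the point):

struct Rgb { float r, g, b; };

// One colour per basis vector; keeping full RGB in each component is what
// lets colour bleeding show up in the directional lightmaps.
struct DirectionalLumel { Rgb colour[3]; };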
quote:Original post by Zemedelec
quote:Original post by mattnewport
Glad to hear I'm not the only one confused by this... Thanks for that link - it looks useful.


When you calculate the contribution of one lumel to another, you have two lumels -> you have the direction from which that energy came to the lumel being shaded. Split that energy, using the direction to the second lumel together with the tangent basis of the first one (the lumel being lit), and voila - you get three values.

That sounds like what overnhet was saying, which makes sense (though I'm still not convinced it's the 'right' thing to do mathematically). Brandlnator was saying the technique didn't impact radiosity at all, which I didn't understand. With the method you describe you need to keep three values at every iteration when calculating the radiosity, which does impact your radiosity calculation.

Game Programming Blog: www.mattnewport.com/blog

quote:Original post by mattnewport
With the method you describe you need to keep three values at every iteration when calculating the radiosity, which does impact your radiosity calculation.


Yes, you need to keep three values, and the classical radiosity is just the sum of those three values. They just separate them, basing that separation on the direction of the incoming energy at each lumel.
So, during the process 3 lightmaps are used and only the radiosity transfer function is altered, to support 3 lightmaps and to separate the incoming energy into 3 values. The separation is done when the energy arrives, but when the energy is emitted, it is a single value - the sum of the 3 components.
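Putting that together, my interpretation of the altered transfer step looks roughly like this in C++ (names made up, normalisation details glossed over):

#include <algorithm>

struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

inline float Dot(const Vec3& a, const Vec3& b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Rgb   Add(const Rgb& a, const Rgb& b)      { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
inline Rgb   Scale(const Rgb& a, float s)         { return { a.r * s, a.g * s, a.b * s }; }
inline Rgb   Modulate(const Rgb& a, const Rgb& b) { return { a.r * b.r, a.g * b.g, a.b * b.b }; }

struct Lumel {
    Rgb gathered[3];   // the three directional lightmap values
    Rgb reflectance;   // diffuse albedo of the surface at this lumel

    // Energy arriving from tangent-space direction 'dir' is split over the basis.
    void Receive(const Vec3 basis[3], const Vec3& dir, const Rgb& energy) {
        for (int i = 0; i < 3; ++i) {
            float w = std::max(0.0f, Dot(basis[i], dir));
            gathered[i] = Add(gathered[i], Scale(energy, w));
        }
    }

    // What the lumel shoots back out is a single value: the sum of the
    // three components, modulated by the surface reflectance.
    Rgb Emit() const {
        Rgb total = Add(Add(gathered[0], gathered[1]), gathered[2]);
        return Modulate(total, reflectance);
    }
};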

