Archived

This topic is now archived and is closed to further replies.

cseger

Source engine shading model

Recommended Posts

Did you guys read this: http://www.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf ...about Valve's Source engine shading model? Seems like a good solution to me. However, "designer-placed sample points for specular cubemaps"... I guess it means that they only put the specular cubemaps where really necessary, instead of automating it, so they save some texture memory. What are your opinions?

The normal-mapped radiosity is a very interesting and clever technique for saving memory. Impressive.
Approximating local lighting for models with a cubemap doesn't seem bad either.
I only wonder how they store those cubes: something like a 3D grid, as in the Quake engines, or something more dynamic?
I mean, those levels in HL2 are huge.
Either they have some hierarchical partitioning for that, or they dynamically recreate these cubes; I'm not quite sure which.

They don't explain how they calculate the directional components of the radiosity. I don't really understand that bit. The results look pretty decent but I don't totally understand how it works yet.

I think it should be easy to modify the "standard" radiosity algorithm to handle this.

When an energy transfer occurs from patch p1 to patch p2 :

1) compute the (normalized) direction using patch centers => vec d
2) project it in the local basis they described for each patch => vec d1 and d2
3) update values : p2.x += d2.x * color ; p1.x -= d1.x * color...
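For the curious, the three steps could be sketched like this in Python. This is untested against any real tool; the Patch class, the world-to-tangent transform, and the clamping choice are all my own assumptions, and only the basis vectors come from the slides:

```python
import math

# The three tangent-space basis directions from the talk (slide 10):
# mutually orthogonal unit vectors, each tilted equally off the surface normal.
BUMP_BASIS = [
    (-1/math.sqrt(6),  1/math.sqrt(2), 1/math.sqrt(3)),
    (-1/math.sqrt(6), -1/math.sqrt(2), 1/math.sqrt(3)),
    ( math.sqrt(2/3),  0.0,            1/math.sqrt(3)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

class Patch:
    """Illustrative patch: a center, a world-to-tangent transform
    (3 row vectors), and three accumulated colors, one per basis vector."""
    def __init__(self, center, world_to_tangent):
        self.center = center
        self.world_to_tangent = world_to_tangent
        self.colors = [[0.0, 0.0, 0.0] for _ in range(3)]

def transfer(p1, p2, color):
    """Steps 1-3 from the post: split the energy arriving at p2 over its
    three basis colors according to the incoming direction."""
    # 1) normalized direction from p1 to p2
    d = normalize(tuple(b - a for a, b in zip(p1.center, p2.center)))
    # 2) express the incoming direction (-d) in p2's tangent basis
    incoming = tuple(-x for x in d)
    d2 = tuple(dot(row, incoming) for row in p2.world_to_tangent)
    # 3) each basis color receives its cosine-like share (clamped at zero:
    #    no energy arrives "from behind" a basis direction)
    for i, basis_vec in enumerate(BUMP_BASIS):
        w = max(0.0, dot(basis_vec, d2))
        for ch in range(3):
            p2.colors[i][ch] += w * color[ch]
```

The symmetric transfer back to p1 would work the same way in p1's own tangent basis.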


[edited by - overnhet on April 6, 2004 4:27:46 PM]

The normal mapped radiosity is really impressive, very clever.
But I wonder how they handle dynamic reflections (reflections of dynamic objects) and dynamic shadows. Those effects must be "faked". Another thing I don't like: the precalculated specular lighting. The engine must store an extreme amount of cubemaps to allow specular lighting on object surfaces. I don't think this saves memory...

[edited by - Brandlnator on April 6, 2004 4:49:55 PM]

quote:
Original post by overnhet
I think it should be easy to modify the "standard" radiosity algorithm to handle this.

When an energy transfer occurs from patch p1 to patch p2 :

1) compute the (normalized) direction using patch centers => vec d
2) project it in the local basis they described for each patch => vec d1 and d2
3) update values : p2.x += d2.x * color ; p1.x -= d1.x * color...


[edited by - overnhet on April 6, 2004 4:27:46 PM]


I don't really follow you. With a normal radiosity solution you end up with a single colour value at each patch which represents the illumination of that patch. The value depends on the orientation of the patch relative to the rest of the scene. How do you generate a set of values that can be used for working out the illumination of an element with a different orientation? You could work out three colours for three different orientations of the patch (corresponding to the three basis vectors they give), but combining those the way they do doesn't seem like the mathematically right way to get an illumination value for a different orientation. Maybe that's what they're doing and it just looks OK, but at the moment I can't see the theoretical justification for it; it seems like it's just not the right thing to do and that you'd get weird artifacts. I admit my understanding of radiosity is a little lacking in the details, but from my high-level understanding of it, it doesn't make sense.

slide 11 : "In Radiosity Normal Mapping, we transform our basis into tangent space and compute light values for each vector."

There are 3 color values per patch, one for each axis of the local basis.

slide 12 :
lightmapColor[0] * dot( bumpBasis[0], normal )
  + lightmapColor[1] * dot( bumpBasis[1], normal )
  + lightmapColor[2] * dot( bumpBasis[2], normal )

The normal is projected onto the bumpBasis and the result is used to blend the 3 lightmap colors.
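As a sketch, the slide-12 blend might look like this in Python. The basis values are from the slides; everything else (function name, data layout) is illustrative:

```python
import math

# Tangent-space basis from the slides: orthonormal, symmetric about the normal.
bump_basis = [
    (-1/math.sqrt(6),  1/math.sqrt(2), 1/math.sqrt(3)),
    (-1/math.sqrt(6), -1/math.sqrt(2), 1/math.sqrt(3)),
    ( math.sqrt(2/3),  0.0,            1/math.sqrt(3)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(lightmap_colors, normal):
    """The slide-12 blend: weight each of the three lightmap colors by
    dot(bumpBasis[i], normal) and sum.  lightmap_colors is a list of three
    RGB tuples; normal is the tangent-space normal from the normal map."""
    result = [0.0, 0.0, 0.0]
    for color, basis_vec in zip(lightmap_colors, bump_basis):
        w = dot(basis_vec, normal)  # the slide formula shows no clamp here
        for ch in range(3):
            result[ch] += w * color[ch]
    return result
```

With a flat normal (0, 0, 1) each weight is 1/sqrt(3), so equal stored colors simply get rescaled; the interesting cases are the bumped normals.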

Now there are probably "theoretical issues": even if it should perform better than real radiosity, it is not full GI. By the way, the expression "radiosity normal mapping" does not make sense: radiosity is not direction-dependent, whereas the values they compute are. Radiance (color = L(surface, direction)) would be more suitable, I think.

The way I see it, this technique computes a simplification of the "full" radiance function under the assumption that illumination varies linearly in "direction space". With that assumption they can express any direction in their basis and compute the associated color by linearly combining the 3 basis colors.

A gross oversimplification of the full model, but if it gives nice results such as those of the slides, who cares ?

quote:
Btw the expression "radiosity normal mapping" does not make sense : radiosity is not direction dependant whereas the values they compute are. Radiance (color = L(surface, direction)) should be more suited I think.

This is the bit that's got me confused - normal radiosity can only give you one colour value per patch so I still don't really understand how they get 3 values for their 3 direction vectors.
quote:
The way I see it, this technique computes a simplification of the "full" radiance function under the assumption that illumination varies linearly in "direction space". With that assumption they can express any direction in their basis and compute the associated color by linearly combining the 3 basis colors.

That's pretty much what I was thinking they're doing as well and it seems to me that that assumption is just wrong. Say there's a patch directly facing a very bright light - it should be very brightly lit but linearly interpolating between the 3 basis vectors (none of which face the light directly) will give you a much less bright value than you should be getting. Am I missing something here?
quote:
A gross oversimplification of the full model, but if it gives nice results such as those of the slides, who cares ?

As you say, if it gives nice results then ultimately it doesn't really matter if it's not mathematically correct. I'm wondering if I'm missing something though because it seems to me like it shouldn't work very well at all. Maybe I'm understanding the technique and my concerns over the inaccuracies are valid but just don't prove important in practice. I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

[edited by - mattnewport on April 7, 2004 1:37:57 PM]

quote:
Original post by mattnewport
This is the bit that's got me confused - normal radiosity can only give you one colour value per patch so I still don't really understand how they get 3 values for their 3 direction vectors.

What they call "radiosity normal mapping" is NOT true radiosity: their stuff is direction-dependent. I suspect they named it radiosity because the term "radiance" is much less common in gaming, whereas "radiosity normal mapping" is quite self-explanatory.

As for computing the 3 colors per patch, the algorithm I gave in my first post should work, but I haven't tested it. I don't know what you could call it: it looks like a sort of hybrid progressive radiosity / raytracing method, since the direction is computed and projected onto the two local bases.

quote:
That's pretty much what I was thinking they're doing as well and it seems to me that that assumption is just wrong. Say there's a patch directly facing a very bright light - it should be very brightly lit but linearly interpolating between the 3 basis vectors (none of which face the light directly) will give you a much less bright value than you should be getting. Am I missing something here?

I think I muddled things up: this is not linear interpolation but a change of basis. Sorry, I am not used to giving mathematical explanations in English.

To illustrate it I'll use the following notation (based on slide 10) :
* the blue vector of the basis is v1 and the associated color for a given patch p is p.c1
* red vector is v2...
* green vector is v3...

In your example the incoming light has direction d = (0,0,1) in this basis and color c. If we project it onto the patch basis we get:

p.c1 = c * (d . v1) = c * 1/sqrt(3)
p.c2 = c * (d . v2) = c * 1/sqrt(3)
p.c3 = c * (d . v3) = c * 1/sqrt(3)

If we want to apply radiosity normal mapping to a normal n = d to compute the color c' :

c' = p.c1 * (n . v1) + p.c2 * (n . v2) + p.c3 * (n . v3)
c' = 3 * c * 1/sqrt(3) * 1/sqrt(3)
c' = c

=> we get back the original color

Now say we want to apply it to a normal m = (0, 1/sqrt(2), 1/sqrt(2)) :

c' = p.c1 * (m . v1) + p.c2 * (m . v2) + p.c3 * (m . v3)
c' = c * 1/sqrt(3) * (-1/2 + 1/sqrt(6) + 1/2 + 1/sqrt(6) + 1/sqrt(6))
c' = c * 1/sqrt(3) * 3/sqrt(6)
c' = c * 1/sqrt(2)

=> c' is c weighted by the cosine of the angle between m and d
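Plugging these numbers into a quick Python check reproduces both results. Which slide-10 color maps to which vector is my guess, but since the basis is symmetric the result does not depend on the ordering:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The orthonormal basis from slide 10 (v1 = blue, v2 = red, v3 = green,
# per the notation above; the color-to-vector mapping is assumed)
v1 = (-1/math.sqrt(6),  1/math.sqrt(2), 1/math.sqrt(3))
v2 = (-1/math.sqrt(6), -1/math.sqrt(2), 1/math.sqrt(3))
v3 = ( math.sqrt(2/3),  0.0,            1/math.sqrt(3))
basis = (v1, v2, v3)

c = 1.0                # incoming intensity (a scalar, for simplicity)
d = (0.0, 0.0, 1.0)    # incoming light direction, straight along the normal

# Project the light into the basis: p.ci = c * (d . vi) = c / sqrt(3) each
p = [c * dot(d, v) for v in basis]

# Reconstruct for n = d: recovers c exactly
c_n = sum(pi * dot(d, v) for pi, v in zip(p, basis))

# Reconstruct for m at 45 degrees: gives c / sqrt(2), the cosine weight
m = (0.0, 1/math.sqrt(2), 1/math.sqrt(2))
c_m = sum(pi * dot(m, v) for pi, v in zip(p, basis))
```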

quote:
As you say, if it gives nice results then ultimately it doesn't really matter if it's not mathematically correct. I'm wondering if I'm missing something though because it seems to me like it shouldn't work very well at all. Maybe I'm understanding the technique and my concerns over the inaccuracies are valid but just don't prove important in practice. I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Well, I am not a lighting expert and, as I said before, I haven't implemented it yet, so I could be wrong too.

[edited by - overnhet on April 7, 2004 3:18:17 PM]

quote:
I think I muddled things up: this is not linear interpolation but a change of basis. Sorry, I am not used to giving mathematical explanations in English.

Not your fault, I was getting a bit confused and using the wrong terminology. I think I understand what you were getting at with the algorithm from your first post now. It makes more sense to me, but I need to sit down and think it through a bit more carefully to convince myself why it works. I'm in the process of learning more about lighting and global illumination and I'm still a bit unclear on some of the concepts.

quote:
I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Valve computes three colors per lumel. Those colors are computed using standard radiosity, and only ONE surface normal per lumel is used. But the three colors stored in the lightmap are not computed using this surface normal. Instead, the lightmap uses the three basis vectors transformed into tangent space. So Valve simply replaces the surface normal with three normals when computing the final lightmap color(s). This doesn't impact radiosity at all.

quote:
Original post by Brandlnator
quote:
I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Valve computes three colors per lumel. Those colors are computed using standard radiosity, and only ONE surface normal per lumel is used. But the three colors stored in the lightmap are not computed using this surface normal. Instead, the lightmap uses the three basis vectors transformed into tangent space. So Valve simply replaces the surface normal with three normals when computing the final lightmap color(s). This doesn't impact radiosity at all.

I don't follow you. How do they turn one colour at a lumel into three without changing the way they calculate the radiosity? Normally the lightmap colour is just the result of the radiosity calculation at that lumel. How can you take one colour and three 'normals' and turn them into three colours?

Matt,
I don't follow that explanation either. Since the radiosity method only deals with diffuse surfaces, the radiant exitance is reflected equally in all directions and mainly depends on the patch normal. Therefore, looking at the cave screenshots (for each component), I can see no way of getting such varying exitant radiances without actually changing the patch normal according to the basis vectors, as you suggested.

But then, how do they refine the radiosity solution for a single basis vector? I mean, what normal vectors are all the other patches using? It seems wrong to do three refinement passes, each with all patches using the same basis vector.

I'm just getting into GI too (I did my first MC sampling in my raytracer a few days ago) and I'm unsure about a lot of GI issues. One thing that has helped me get started is the Global Illumination Compendium:
http://www.cs.kuleuven.ac.be/~phil/GI/
(in case you didn't know it ;-)

quote:
Original post by mattnewport
Glad to hear I'm not the only one confused by this... Thanks for that link - it looks useful.


When you calculate the contribution of one lumel to another, you have two lumels, so you have the direction from which that energy came to the lumel being shaded. Split that energy, using the direction to the second lumel together with the tangent basis of the first one (the lumel being lighted), and voila: you get three values.

Here's how I understand it...

They use that basis/3 colours as an accumulation of all incoming lights at that lumel.

So let's say we have 3 lights: topRight (0.5, 0.5, 0), top (0, 1, 0), and frontTop (0, 0.5, 0.5). That gets accumulated and stored in the light map as (0.5, 2.0, 0.5).

Then at rendering time it's only one pass: if a pixel normal in the normal map is (1.0, 0, 0), the shade becomes 0.5. If the pixel normal is (-1.0, 0, 0), the shade becomes 0.0, since there's no incoming light from that direction.

I think this mostly has to do with using large lightmap pixels (res: ~0.25 metres) on high-frequency normal maps (res: ~1 cm).
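A minimal sketch of that accumulate-then-dot idea, using the numbers from the post (all names illustrative):

```python
# Directional lights arriving at one lumel, as in the example above
lights = [
    (0.5, 0.5, 0.0),  # topRight
    (0.0, 1.0, 0.0),  # top
    (0.0, 0.5, 0.5),  # frontTop
]

# Accumulate all incoming light into a single direction-weighted vector
accumulated = tuple(sum(axis) for axis in zip(*lights))  # (0.5, 2.0, 0.5)

def shade(pixel_normal):
    """Per pixel: dot the normal-map normal with the accumulated light,
    clamped so pixels facing away from all the light receive nothing."""
    return max(0.0, sum(n * a for n, a in zip(pixel_normal, accumulated)))
```

Note this single-vector version loses information compared to Valve's three-color basis: two lights on opposite sides of the lumel would cancel instead of each lighting their own facing direction.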

Pretty clever idea...

On slide 39, where they show all the textures used to render the cave example, there seem to be 3 RGB lightmaps: one per basis vector.

That would make sense to store color instead of luminance, as it allows you to render the color bleeding effect.

quote:
Original post by Zemedelec
quote:
Original post by mattnewport
Glad to hear I'm not the only one confused by this... Thanks for that link - it looks useful.


When you calculate the contribution of one lumel to another, you have two lumels, so you have the direction from which that energy came to the lumel being shaded. Split that energy, using the direction to the second lumel together with the tangent basis of the first one (the lumel being lighted), and voila: you get three values.

That sounds like what overnhet was saying, which makes sense (though I'm still not convinced it's the 'right' thing to do mathematically). Brandlnator was saying the technique didn't impact radiosity at all, which I didn't understand. With the method you describe you need to keep three values at every iteration when calculating the radiosity, which does impact your radiosity calculation.

quote:
Original post by mattnewport
With the method you describe you need to keep three values at every iteration when calculating the radiosity which does impact on your radiosity calculation.


Yes, you need to keep three values, and the classical radiosity is just the sum of those three values. They just separate them, basing the separation on the direction of the incoming energy at each lumel.
So during the process 3 lightmaps are used, and only the radiosity transfer function is altered, to support 3 lightmaps and to separate incoming energy into 3 values. The separation is made when energy arrives; when energy is emitted, it is a single value: the sum of the 3 components.
