
cseger

Source engine shading model



Did you guys read this: http://www.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf ...about Valve's Source engine shading model? Seems like a good solution to me. However, about "designer-placed sample points for specular cubemaps": I guess it means that they only put the spec cubemaps where really necessary, instead of automating it, so they save some texture memory. What are your opinions?

Very interesting and clever memory-saving technique, that normal-mapped radiosity. Impressive.
Approximating local lighting for models with a cube doesn't seem bad either.
I only wonder how they store those cubes - something like a 3D grid as in the Quake engines, or something more dynamic.
I mean, those levels in HL2 are huge.
They may have some hierarchical partitioning for that, or they dynamically recreate these cubes - I'm not sure which.

They don't explain how they calculate the directional components of the radiosity. I don't really understand that bit. The results look pretty decent, but I don't totally understand how it works yet.

I think it should be easy to modify the "standard" radiosity algorithm to handle this.

When an energy transfer occurs from patch p1 to patch p2:

1) compute the (normalized) direction using the patch centers => vec d
2) project it into the local basis they described, for each patch => vec d1 and d2
3) update the values: p2.x += d2.x * color ; p1.x -= d1.x * color...
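As a rough sketch of the receiver side of those steps (my own untested Python, not Valve's code; the basis values are from the slides, but the patch representation and the assumption that the receiving patch's tangent space coincides with world space are mine, and the shooter-side decrement from step 3 is omitted):

```python
import math

# The three tangent-space basis vectors from the HL2 slides: mutually
# orthogonal unit vectors, each with z-component 1/sqrt(3).
BUMP_BASIS = [
    (math.sqrt(2.0 / 3.0), 0.0, 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), 1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def receive(p1_center, p2_center, color, p2_colors):
    """Accumulate energy shot from patch p1 into p2's three basis colors.

    Assumes p2's tangent space equals world space for simplicity; a real
    implementation would transform d into each patch's local basis first.
    """
    # 1) normalized direction from the receiver towards the shooter
    d = [a - b for a, b in zip(p1_center, p2_center)]
    length = math.sqrt(dot(d, d))
    d = [x / length for x in d]
    # 2) + 3) project d onto each basis vector and update the stored values
    for i, v in enumerate(BUMP_BASIS):
        p2_colors[i] += dot(d, v) * color
```

For light arriving straight along the normal, d = (0, 0, 1) and each of the three stored colors receives color / sqrt(3), which matches the worked example later in the thread.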


[edited by - overnhet on April 6, 2004 4:27:46 PM]

The normal-mapped radiosity is really impressive, very clever.
But I wonder how they handle dynamic reflections (reflections of dynamic objects) and dynamic shadows. Those effects must be "faked". Another thing I don't like: the precalculated specular lighting. The engine must store an extreme number of cubemaps to allow specular lighting on object surfaces. I don't think this saves memory...

[edited by - Brandlnator on April 6, 2004 4:49:55 PM]

quote:
Original post by overnhet
I think it should be easy to modify the "standard" radiosity algorithm to handle this.

When an energy transfer occurs from patch p1 to patch p2 :

1) compute the (normalized) direction using patch centers => vec d
2) project it in the local basis they described for each patch => vec d1 and d2
3) update values : p2.x += d2.x * color ; p1.x -= d1.x * color...


[edited by - overnhet on April 6, 2004 4:27:46 PM]


I don't really follow you. With a normal radiosity solution you end up with a single colour value at each patch which represents the illumination of that patch. The value depends on the orientation of the patch relative to the rest of the scene. How do you generate a set of values that can be used for working out the illumination of an element with a different orientation? You could work out three colours for three different orientations of the patch (corresponding to the three basis vectors they give), but combining those the way they do doesn't seem like the mathematically right thing to do to get an illumination value for a different orientation. Maybe that's what they're doing and it just looks OK, but at the moment I can't see the theoretical justification for it - it seems like it's just not the right thing to do, and that you'd get weird artifacts. I admit my understanding of radiosity is a little lacking in the details, but from my high-level understanding of it, it doesn't make sense.

slide 11 : "In Radiosity Normal Mapping, we transform our basis into tangent space and compute light values for each vector."

There are 3 color values per patch, one for each axis of the local basis.

slide 12 :
lightmapColor[0] * dot( bumpBasis[0], normal ) + lightmapColor[1] * dot( bumpBasis[1], normal ) + lightmapColor[2] * dot( bumpBasis[2], normal )

The normal is projected onto the bumpBasis and the result is used to blend the 3 lightmap colors.
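In other words (a hedged Python sketch rather than actual shader code; the bumpBasis values are taken from the slides, the function name is mine):

```python
import math

# Tangent-space basis from the slides: three mutually orthogonal unit
# vectors, each making the same angle with the surface normal (z-axis).
bumpBasis = [
    (math.sqrt(2.0 / 3.0), 0.0, 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), 1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(lightmapColor, normal):
    """Blend the three lightmap colors by projecting the per-pixel
    tangent-space normal onto the basis, as in the slide 12 expression."""
    return sum(c * dot(v, normal) for c, v in zip(lightmapColor, bumpBasis))
```

With all three lightmap colors equal to 1/sqrt(3) and an unperturbed normal (0, 0, 1), each dot product is 1/sqrt(3) and the blend returns 1.0.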

Now there are probably "theoretical issues": even if it should perform better than real radiosity, it is not full GI. Btw, the expression "radiosity normal mapping" does not make sense: radiosity is not direction-dependent, whereas the values they compute are. Radiance (color = L(surface, direction)) would be more suitable, I think.

The way I see it, this technique computes a simplification of the "full" radiance function under the assumption that illumination varies linearly in "direction space". With that assumption they can express any direction in their basis and compute the associated color by linearly combining the 3 basis colors.

A gross oversimplification of the full model, but if it gives nice results such as those in the slides, who cares?

quote:
Btw, the expression "radiosity normal mapping" does not make sense: radiosity is not direction-dependent, whereas the values they compute are. Radiance (color = L(surface, direction)) would be more suitable, I think.

This is the bit that's got me confused - normal radiosity can only give you one colour value per patch so I still don't really understand how they get 3 values for their 3 direction vectors.
quote:
The way I see it, this technique computes a simplification of the "full" radiance function under the assumption that illumination varies linearly in "direction space". With that assumption they can express any direction in their basis and compute the associated color by linearly combining the 3 basis colors.

That's pretty much what I was thinking they're doing as well, and it seems to me that that assumption is just wrong. Say there's a patch directly facing a very bright light - it should be very brightly lit, but linearly interpolating between the 3 basis vectors (none of which face the light directly) will give you a much less bright value than you should be getting. Am I missing something here?
quote:
A gross oversimplification of the full model, but if it gives nice results such as those of the slides, who cares ?

As you say, if it gives nice results then ultimately it doesn't really matter if it's not mathematically correct. I'm wondering if I'm missing something though because it seems to me like it shouldn't work very well at all. Maybe I'm understanding the technique and my concerns over the inaccuracies are valid but just don't prove important in practice. I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

[edited by - mattnewport on April 7, 2004 1:37:57 PM]

quote:
Original post by mattnewport
This is the bit that's got me confused - normal radiosity can only give you one colour value per patch so I still don't really understand how they get 3 values for their 3 direction vectors.

What they call "radiosity normal mapping" is NOT true radiosity: their stuff is direction-dependent. I suspect they named it radiosity because the term "radiance" is much less common in gaming, whereas "radiosity normal mapping" is quite self-explanatory.

As for computing the 3 colors per patch, the algorithm I gave in my first post should work, but I haven't tested it. I don't know what you could call it: it looks like a sort of hybrid progressive radiosity / raytracing method, since the direction is computed and projected onto the 2 local bases.

quote:
That's pretty much what I was thinking they're doing as well and it seems to me that that assumption is just wrong. Say there's a patch directly facing a very bright light - it should be very brightly lit but linearly interpolating between the 3 basis vectors (none of which face the light directly) will give you a much less bright value than you should be getting. Am I missing something here?

I think I messed everything up: this is not linear interpolation but a change of basis. Sorry, I am not used to mathematical explanations in English.

To illustrate it I'll use the following notation (based on slide 10):
* the blue vector of the basis is v1, and the associated color for a given patch p is p.c1
* the red vector is v2...
* the green vector is v3...

In your example the incoming light has direction d = (0,0,1) in this basis and color c. If we project it into the patch basis we get:

p.c1 = c * (d . v1) = c * 1/sqrt(3)
p.c2 = c * (d . v2) = c * 1/sqrt(3)
p.c3 = c * (d . v3) = c * 1/sqrt(3)

If we want to apply radiosity normal mapping to a normal n = d to compute the color c' :

c' = p.c1 * (n . v1) + p.c2 * (n . v2) + p.c3 * (n . v3)
c' = 3 * c * 1/sqrt(3) * 1/sqrt(3)
c' = c

=> we get back the original color

Now say we want to apply it to a normal m = (0, 1/sqrt(2), 1/sqrt(2)) :

c' = p.c1 * (m . v1) + p.c2 * (m . v2) + p.c3 * (m . v3)
c' = c * 1/sqrt(3) * (-1/2 + 1/sqrt(6) + 1/2 + 1/sqrt(6) + 1/sqrt(6))
c' = c * 1/sqrt(3) * 3/sqrt(6)
c' = c * 1/sqrt(2)

=> c' is c weighted by the cosine
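The arithmetic above checks out numerically. Here is a quick Python verification (my own sketch; the basis values are the standard HL2 ones, though the exact color-to-vector assignment from slide 10 is my assumption):

```python
import math

# The three HL2 basis vectors v1, v2, v3: an orthonormal set, each with
# z-component 1/sqrt(3), so reconstruction reduces to c * (d . n).
v = [
    (math.sqrt(2.0 / 3.0), 0.0, 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), 1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

c = 1.0
d = (0.0, 0.0, 1.0)                   # light straight along the normal
p = [c * dot(d, vi) for vi in v]      # each stored color = c / sqrt(3)

# Reconstructing with n = d recovers the original color exactly
n = d
c1 = sum(pc * dot(n, vi) for pc, vi in zip(p, v))   # c1 == c

# Reconstructing with m tilted 45 degrees gives the cosine weight
m = (0.0, 1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
c2 = sum(pc * dot(m, vi) for pc, vi in zip(p, v))   # c2 == c / sqrt(2)
```

Because the basis is orthonormal, the reconstruction sum equals c * (d . n) for any n, which is exactly the cosine weighting derived above.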

quote:
As you say, if it gives nice results then ultimately it doesn't really matter if it's not mathematically correct. I'm wondering if I'm missing something though because it seems to me like it shouldn't work very well at all. Maybe I'm understanding the technique and my concerns over the inaccuracies are valid but just don't prove important in practice. I still suspect I might be misunderstanding the technique though (how they generate the 3 basis colours - I understand how they use them).

Well, I am not a lighting expert and, as I said before, I haven't implemented it yet, so I could be wrong too.

[edited by - overnhet on April 7, 2004 3:18:17 PM]

quote:
I think I messed everything up: this is not linear interpolation but a change of basis. Sorry, I am not used to mathematical explanations in English.

Not your fault - I was getting a bit confused and using the wrong terminology. I think I understand what you were getting at with the algorithm from your first post now. It makes more sense to me, but I need to sit down and think it through a bit more carefully to convince myself why this works. I'm in the process of learning more about lighting and global illumination, and I'm still a bit unclear on some of the concepts.

