Lighting with multiple sources

6 comments, last by Cypher19 18 years, 10 months ago
Hi,

In most cases when writing a shader, we only use one light source for calculating the colors/shading model and that kind of stuff. However, when you look at games like Half-Life 2 you'll often see multiple light sources, not to mention Doom 3, which does everything in real time. I assume most games still use pre-computed lightmaps to shade/color their static levels. But what about the dynamic objects moving around in such a world? Do they just pick the nearest light source for their shading? If so, how do you check that the light is really reaching the object? For example, when a table stands very close to a light source but there's a wall in between, it should pick another source. Maybe they store which lights to use in each level node, so that an object can look up its light sources through its parent node? Or are they doing it completely differently?

Greetings,
Rick
Most of the lights are completely static, and the tools will spit out errors if they detect that more than 'n' lights intersect... You can mix both: pick the 'n' closest lights and do multiple passes over the scene. I'm told the multipass approach has the same drawback as deferred shading in that it breaks AA...
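As a rough sketch of the "pick the 'n' closest lights" idea in plain C++ (the Vec3/Light/Object types and the function name are made up for illustration, not taken from any particular engine):

#include <algorithm>
#include <vector>

struct Vec3   { float x, y, z; };
struct Light  { Vec3 position; float radius; };   // radius = sphere of influence
struct Object { Vec3 position; };

static float DistanceSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Return up to maxLights lights, nearest first, skipping lights whose
// radius cannot reach the object at all.
std::vector<Light> PickClosestLights(const Object& obj,
                                     std::vector<Light> lights,
                                     size_t maxLights)
{
    // Drop out-of-range lights before sorting.
    lights.erase(std::remove_if(lights.begin(), lights.end(),
        [&](const Light& l) {
            return DistanceSq(obj.position, l.position) > l.radius * l.radius;
        }), lights.end());

    // Sort the remainder by distance to the object.
    std::sort(lights.begin(), lights.end(),
        [&](const Light& a, const Light& b) {
            return DistanceSq(obj.position, a.position) <
                   DistanceSq(obj.position, b.position);
        });

    if (lights.size() > maxLights)
        lights.resize(maxLights);
    return lights;
}

Note that this is purely distance-based; it doesn't answer the original "wall in between" question, which needs visibility information on top (PVS, portals, or an occlusion ray test).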

I have no idea what the best way to solve the problem is, but I have two methods right now.

The first one uses a set of shaders and has very strict lighting rules (1 global directional light, 3 local point lights). It works, but it's not so good: sometimes there are fewer lights than the possible maximum, and you have a maximum light limit per object, and a very restrictive one at that... I didn't bother with the multipass approach since I planned to do deferred shading anyway, but multipass may be the best way on current hardware.
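A minimal sketch of what such fixed light slots could look like as shader constants (the struct layout and names here are illustrative, not the actual code being described):

// One directional light plus three point lights, padded to float4
// boundaries the way shader constant registers usually expect.
struct DirectionalLight {
    float direction[3]; float pad0;
    float color[3];     float pad1;
};

struct PointLight {
    float position[3];  float radius;   // radius doubles as the cutoff distance
    float color[3];     float pad;
};

struct PerObjectLights {
    DirectionalLight global;
    PointLight       local[3];
    int              numLocal;          // how many of the three slots are in use
    int              pad2[3];
};

The per-object limit the post complains about falls straight out of this: anything beyond the fourth light simply has no slot to go into.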

The second one, which I posted here in the deferred shading thread, has no restrictions at all and is the perfect solution if you have at least a 9800... Deferred shading has a high entry price because it involves large and multiple texture reads, but after that everything gets much cheaper and easier than with fixed shaders. It's also not practical, and certainly pointless, to mix per-pixel and per-vertex lighting with deferred shading, something the multipass method doesn't suffer from (as far as I'm concerned, if you're going to do per-pixel you might as well go all the way and do per-pixel on everything [smile]).
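For reference, a G-buffer for that kind of deferred renderer is commonly laid out along these lines (the targets and formats below are an assumption for illustration, not a description of the posted implementation):

// Full-screen render targets written in the geometry pass; the lighting
// pass reads these instead of re-rendering the scene per light.
enum GBufferTarget {
    GBUFFER_DEPTH,      // depth, or linear view-space depth (e.g. R32F)
    GBUFFER_NORMAL,     // surface normal (e.g. RGBA8 or RGBA16F)
    GBUFFER_ALBEDO,     // diffuse color (RGBA8)
    GBUFFER_SPECULAR,   // specular color / power (RGBA8)
    GBUFFER_COUNT
};

The "large and multiple texture reads" mentioned above are exactly these buffers being sampled once per lit pixel.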

Maybe with SM 3.0 (or SM 2.0) there is a nice way to efficiently iterate over a set of lights, but even then I don't see how you could beat the per-pixel hardware light occlusion that deferred shading provides. Somebody also talked about a way to iterate over a light array in Cg in some other thread, but I know nothing about the subject.
Praise the alternative.
Each light gets a different pass (Doom 3, and anything with reasonably complicated shading, has to do this).
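A rough sketch of what one frame of that one-pass-per-light approach looks like on the CPU side, using classic OpenGL blend state (the Light type, the scene-drawing calls and the constant upload are placeholders, not any engine's real API):

#include <GL/gl.h>
#include <vector>

struct Light { float position[3]; float color[3]; float radius; }; // illustrative

// Placeholders for the application's own drawing / constant-upload code.
void DrawSceneAmbient();
void DrawSceneLit();
void BindLightConstants(const Light& light);

void RenderFrame(const std::vector<Light>& visibleLights)
{
    // Base pass: lay down depth (and any ambient term) once.
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    DrawSceneAmbient();

    // One additive pass per light: depth test EQUAL so only the surfaces
    // already visible get shaded again, and each light's result is summed.
    glDepthFunc(GL_EQUAL);
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);          // additive accumulation

    for (const Light& light : visibleLights) {
        BindLightConstants(light);        // upload this light's parameters
        DrawSceneLit();                   // redraw the geometry this light touches
    }

    // Restore default state.
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LEQUAL);
    glDisable(GL_BLEND);
}

The depth-equal test keeps the extra passes from shading hidden surfaces, which is part of why per-light passes are cheaper than they first sound.
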
Thanks for the replies guys! I've heard of deferred shading, but my hardware can't handle it and I suppose current games aren't using it yet. Anyway, if I understand it right, games like Doom 3 do a pass for each light. So when there are 3 lights for some piece of geometry, the geometry is rendered three times, right? And thus the shader handles 1 light each time. Well, with current hardware, rendering polygons isn't that much of a problem anymore I think, but I still wonder how it can do bump/specular/gloss(?) shading for the full screen multiple times at a decent framerate. If I make such a shader and zoom onto a single polygon until it covers the complete screen, my framerate isn't that high anymore. Are there some special tricks to speed up those shaders? Of course, I know some optimizations, like using integers instead of floats to gain speed, but I'm not sure that would be enough.

I assume a piece of geometry is only rendered with the lights in its range (that's probably why Doom 3 is so dark: to prevent surfaces from catching too many lights). How does a surface know its local lights? Does the map data contain some sort of pre-calculated information for that?

BTW Zedzeek, did you make all that stuff on that page? Really impressive!!
I would just have an array of lights and iterate through them. You could get the number of lights in the room from the program and size the array for the worst case (8-10 lights). The shader would then only iterate over the number of lights that are actually in the room. Keep in mind this requires Shader Model 2.0 (which I think is perfectly fine).
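In spirit, that loop accumulates each active light's contribution; here it is written as plain C++ for clarity (all names are illustrative, and an SM 2.0 pixel shader would unroll essentially the same thing with the light array in constant registers):

#include <algorithm>
#include <cmath>

const int MAX_LIGHTS = 10;                        // worst-case array size

struct Vec3  { float x, y, z; };
struct Light { Vec3 position; Vec3 color; float radius; };

static Vec3  Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse lighting for one surface point, iterating only over the
// numLights lights that are actually in the room.
Vec3 ShadePoint(Vec3 position, Vec3 normal,
                const Light lights[MAX_LIGHTS], int numLights)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < numLights; ++i) {
        Vec3  toLight = Sub(lights[i].position, position);
        float dist    = std::sqrt(Dot(toLight, toLight));
        if (dist <= 0.0f || dist >= lights[i].radius)
            continue;                              // out of range

        Vec3  L     = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
        float ndotl = std::max(0.0f, Dot(normal, L));
        float atten = 1.0f - dist / lights[i].radius;   // simple linear falloff

        result.x += lights[i].color.x * ndotl * atten;
        result.y += lights[i].color.y * ndotl * atten;
        result.z += lights[i].color.z * ndotl * atten;
    }
    return result;
}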

Good luck!
Quote:Original post by spek
...I still wonder how it can do bump/specular/gloss(?) shading for the full screen multiple times at a decent framerate. If I make such a shader and zoom onto a single polygon until it covers the complete screen, my framerate isn't that high anymore. Are there some special tricks to speed up those shaders?

It's even worse than you think: if you add point-light shadow maps, you need maybe an extra six renders per light just to get the shadows. Today's/yesterday's cards are major polygon pushers, so this isn't too much of an issue.
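Those six extra renders come from drawing depth (or distance to the light) into each face of a cube map around the light. A rough sketch with FBO-style GL calls; the FBO, the cube texture, the per-face camera setup and the depth-only draw are all assumed to exist elsewhere:

#include <GL/glew.h>    // FBO entry points come from an extension loader

extern GLuint fbo;              // assumed: FBO with a depth renderbuffer attached
extern GLuint cubeDistanceTex;  // assumed: cube map storing distance-to-light
void SetCameraToCubeFace(const float lightPos[3], int face);  // 90-degree FOV per face
void RenderSceneDepthOnly();                                  // placeholder draw call

void RenderPointLightShadow(const float lightPos[3])
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 1024, 1024);

    // One render per cube face: +X, -X, +Y, -Y, +Z, -Z.
    for (int face = 0; face < 6; ++face) {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               cubeDistanceTex, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        SetCameraToCubeFace(lightPos, face);
        RenderSceneDepthOnly();
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}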

Quote:I assume a piece of geometry is only rendered with the lights in its range (that's probably why Doom 3 is so dark: to prevent surfaces from catching too many lights). How does a surface know its local lights? Does the map data contain some sort of pre-calculated information for that?

It can do, but it's a simple test anyway: bounding object -> light bounding sphere.
In my game you can have unlimited lights on screen, but because they normally only affect a specific area of the world/scene, the frame rate doesn't suck too badly.
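The bounding-object-vs-light-bounding-sphere test mentioned above really is only a few lines; for example, a sphere-vs-AABB check (the types here are illustrative):

#include <algorithm>

struct Vec3   { float x, y, z; };
struct AABB   { Vec3 min, max; };              // object's bounding box
struct Sphere { Vec3 center; float radius; };  // light's sphere of influence

// True if the light's sphere of influence touches the object's box,
// i.e. the object needs a pass for this light.
bool LightTouchesObject(const Sphere& light, const AABB& box)
{
    const float c[3]  = { light.center.x, light.center.y, light.center.z };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };

    // Squared distance from the sphere center to the closest point on the box.
    float distSq = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float v = std::max(lo[i], std::min(c[i], hi[i]));
        float d = c[i] - v;
        distSq += d * d;
    }
    return distSq <= light.radius * light.radius;
}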

Quote:It's even worse than you think: if you add point-light shadow maps, you need maybe an extra six renders per light just to get the shadows. Today's/yesterday's cards are major polygon pushers, so this isn't too much of an issue.


Yeah, but you also consume a VERY large amount of GPU power, because for decent shadows you need to fill six 1024x1024 surfaces. Even if the amount of calculation done on each pixel is small, the fill rate is a huge killer, and that only gets you DECENT results. In a practical game environment, 1024x1024 shadows would not look consistently good.
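To put a rough, back-of-the-envelope number on that (an assumed workload, not a measurement): 6 faces × 1024 × 1024 ≈ 6.3 million shadow-map pixels per point light per frame before any overdraw, so four shadowed lights at 60 fps is already on the order of 1.5 billion shadow-map writes per second, on top of the lighting passes themselves.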

This topic is closed to new replies.
