Sorting point lights on impact?



Hi,

Just wondering: I currently allow a maximum of either 4 or 8 point lights affecting a mesh (the maximum in my shader). In most cases I get away with a maximum of 4, though. Only for terrain might it be an issue, but I have to solve that with smaller meshes and/or lightmaps anyway (also for culling/clipping).

But say I end up in a situation where more than 4 point lights affect a mesh, how would you then order them, so that the 4 with the biggest impact are used?

Some thoughts:

- I have to sort the lights and set them (as shader constants) before the exact impact is actually calculated (in the shader)

- I could sort them based on the distance between the center of the point light and the mesh, also taking the light's radius into the equation
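That second idea can be sketched on the CPU roughly like this (hypothetical `Vec3`/`PointLight` types; the sort key is just distance from the mesh center minus the light's range, so lights that reach deepest into the mesh come first):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct PointLight {
    Vec3 pos;
    float range; // radius beyond which the light contributes nothing
};

static float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Sort lights so the ones with the (estimated) biggest impact come first.
// Key: distance from mesh center minus light range -- a negative key means
// the mesh center lies inside the light's radius.
void SortLightsForMesh(const Vec3& meshCenter, std::vector<PointLight>& lights) {
    std::sort(lights.begin(), lights.end(),
              [&](const PointLight& a, const PointLight& b) {
                  return Distance(meshCenter, a.pos) - a.range <
                         Distance(meshCenter, b.pos) - b.range;
              });
}
```

After sorting, you would upload only the first 4 (or 8) entries as shader constants.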

What would you do/ do you do?

To be honest, I'm not sure if I'll develop something for it right away, since the situation isn't there yet. Just curious what the best practices are.

PS: deferred rendering is not something I'm getting into right now; I'm learning step by step.

Edited by cozzie


I'm not super experienced with this, so I'm not sure how much merit my ideas have, but it seems to me that you'll want to base it on the position of the model and the position of the light (like you said), taking the radius into account. You would then apply whatever attenuation falloff the light uses.
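As an example, here is one common falloff (this exact formula is an assumption; plug in whatever your shader actually evaluates): inverse-square attenuation windowed so it reaches exactly zero at the light's range:

```cpp
// Inverse-square attenuation, windowed to hit exactly zero at `range`.
// This is just one common choice -- match it to your shader's falloff.
float Attenuation(float distance, float range) {
    if (distance >= range) return 0.0f;
    const float d = distance / range;      // normalized distance, 0..1
    const float window = 1.0f - d * d;     // fades smoothly to 0 at the range
    return window * window / (1.0f + distance * distance);
}
```

Evaluating this per light on the CPU (at the mesh center) gives a scalar you can sort on that already accounts for both distance and radius.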

You may also want to take the human perception of color into account. If you don't know what that's all about, the eye is more sensitive to green than it is to red, and so you may have a red light that is of the same intensity as a green light, but the green light will appear brighter, and so you should choose the green light over the red light. The human perception of the RGB wavelengths is weighted like so: r=0.2125, g=0.7154, b=0.0721. So, if you dot your attenuated light color with <0.2125, 0.7154, 0.0721> you should get a single scalar value that tells you how bright the light appears to the human eye. That may give an extra bit of accuracy.
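That dot product is a one-liner, using the weights from the paragraph above:

```cpp
// Perceived brightness of an (attenuated) RGB light colour, weighting the
// channels by the eye's sensitivity to each wavelength.
float PerceivedBrightness(float r, float g, float b) {
    return r * 0.2125f + g * 0.7154f + b * 0.0721f;
}
```

Multiplying this by the attenuation factor gives a single "how visible is this light on this mesh" score to sort by.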

I've thought about maybe giving each model a normal that you can use to test how closely-oriented the model is toward the light source, but since most models have surfaces pointing in all directions, that probably won't help much. It may be of some use for terrain chunks and that sort of thing, if they consist of surfaces that are mostly oriented in a single direction.

I believe on Doom 3, they actually did things from the light's perspective. That is, they would iterate through the lights each frame, and for each light, they would try to figure out which models the light could affect. In this way, they would build a draw list for each light. Then, they would bind some shader that did the shading for a given light, and then draw all of the geometry in that light's list, additively blending as they go. I've never tried it this way because it seems like it'd be inefficient to transform the same geometry multiple times (if affected by more than one light). I think they probably minimized the problem by having dark environments where lights had minimal overlap. But I'm not sure, because I don't actually remember the game all that well.


Quote: "I believe on Doom 3, they actually did things from the light's perspective. [...] Then, they would bind some shader that did the shading for a given light, and then draw all of the geometry in that light's list, additively blending as they go."

That was also done because of the way the shadows were generated in the engine - using shadow volumes. It makes sense to iterate by light in such a case, where you have to generate the stencil buffer for the shadow volume for each light...

If you are doing forward rendering, as the OP mentioned, then it is clearly better to figure out on the CPU which lights affect each mesh. You can group them intelligently to minimize the number of state changes needed for a given group of meshes to be rendered, but there is no reason to try to sort that out on the GPU.
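A minimal sketch of that CPU-side selection (hypothetical types again; it reuses the distance-minus-range key and keeps only the strongest N lights, which is what you'd upload as constants before drawing the mesh):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct PointLight { Vec3 pos; float range; };

static float Dist(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick the (at most) maxLights strongest lights for one mesh, ready to be
// set as shader constants. std::partial_sort only orders the lights we
// actually keep, which is cheaper than fully sorting the whole list.
std::vector<PointLight> PickLights(const Vec3& meshCenter,
                                   std::vector<PointLight> lights,
                                   std::size_t maxLights) {
    const std::size_t n = std::min(maxLights, lights.size());
    std::partial_sort(lights.begin(), lights.begin() + n, lights.end(),
                      [&](const PointLight& a, const PointLight& b) {
                          return Dist(meshCenter, a.pos) - a.range <
                                 Dist(meshCenter, b.pos) - b.range;
                      });
    lights.resize(n);
    return lights;
}
```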

Thanks. I'm currently rendering per material: looping through the meshes that use the material, then through their instances (submeshes using the material).

I'll sort the affecting point lights using a combination of the distance between the mesh instance's world-space center and the point light position, together with the point light's range.
