Suggestions for Keeping track of Lights and Entities in a scene. [Design Question]

7 comments, last by AgentC 11 years ago

Let me describe my idea of how I want my Scene/Renderer classes to work. Then maybe someone with more experience can guide me and steer me in the right direction.

The Scene Manager could arrange the entities in an oct(quad)tree. The Scene Manager also holds all the light information.

My Renderer class gets a list of visible entities from the Scene Manager when it goes to render the scene. It also gets a list of lights from each entity, and let's say I limit the lights that can be sent to the shaders to 8. It sends the closest lights (and all directional lights) to the shaders.
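Roughly what I have in mind for the "closest 8 lights" selection looks like this (the types and names here are just illustrative placeholders, nothing final):

#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative types only; the field names are assumptions, not real engine code.
struct Vec3 { float x, y, z; };

struct Light
{
    Vec3 position;
    bool directional;
};

static float DistSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Pick all directional lights plus the closest point lights, up to maxLights total.
std::vector<const Light*> SelectLights(const Vec3& entityPos,
                                       const std::vector<Light>& sceneLights,
                                       std::size_t maxLights = 8)
{
    std::vector<const Light*> result;
    std::vector<const Light*> points;

    for (const Light& l : sceneLights)
    {
        if (l.directional)
            result.push_back(&l);   // directional lights always make the cut
        else
            points.push_back(&l);
    }

    // Sort the remaining point lights by distance to the entity.
    std::sort(points.begin(), points.end(),
              [&](const Light* a, const Light* b)
              { return DistSq(a->position, entityPos) < DistSq(b->position, entityPos); });

    for (const Light* l : points)
    {
        if (result.size() >= maxLights)
            break;
        result.push_back(l);
    }
    return result;
}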


So I was thinking that a "dirty" flag gets set when an entity is moved. Then once a frame the scene manager checks all entities' dirty flags and re-caches their light information (the scene manager re-adds the entity to the octree).

I guess if I cached light information, the lights would need dirty flags as well, and moving a light means iterating over all entities to update the light information for that light.

Or

Would it be better for the scene manager to return a query of lights when the renderer asks for them, based on the position of an object, with nothing cached? That way the scene manager doesn't have to keep checking dirty states, etc.


Any thoughts or comments that someone can help me with?


In my observation there are usually more geometry objects than lights in a scene.

Therefore, to limit the number of queries, I'd perform a query for each visible light to check which objects it influences. The type of the light dictates what type of query to use: frustum <> AABB for spot lights, sphere <> AABB for point lights, and for directional lights just take all the visible objects.

This is easiest to implement, at least at first, without any caching/dirtying. Just clear the visible objects' light lists each frame, perform the queries, and for each object found for a particular light, insert that light into those objects' light lists. Then profile if it's a significant bottleneck.
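Something like this rough sketch of the per-frame flow (SceneManager, Entity, and Light here are hypothetical stand-ins, not a real API; only the overall flow matters):

#include <vector>

enum class LightType { Directional, Point, Spot };

struct Entity;

struct Light
{
    LightType type;
    std::vector<Entity*> litEntities;   // filled by the queries below
    // position, range, spot frustum, ...
};

struct Entity
{
    std::vector<Light*> lights;         // cleared and refilled every frame
    // AABB, mesh, material, ...
};

struct SceneManager
{
    // Placeholder query interface over the octree/quadtree.
    std::vector<Entity*> QuerySphere(const Light& pointLight);
    std::vector<Entity*> QueryFrustum(const Light& spotLight);
};

// One spatial query per visible light, then record the result on both sides.
void BuildLightLists(SceneManager& scene,
                     std::vector<Entity*>& visibleEntities,
                     std::vector<Light*>& visibleLights)
{
    for (Entity* e : visibleEntities)
        e->lights.clear();

    for (Light* light : visibleLights)
    {
        switch (light->type)
        {
        case LightType::Directional:
            light->litEntities = visibleEntities;             // all visible objects
            break;
        case LightType::Point:
            light->litEntities = scene.QuerySphere(*light);   // sphere vs AABB
            break;
        case LightType::Spot:
            light->litEntities = scene.QueryFrustum(*light);  // frustum vs AABB
            break;
        }

        for (Entity* e : light->litEntities)
            e->lights.push_back(light);
    }
}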

If the queries turn out to be a bottleneck, you could cache the last object query result for each light and invalidate that particular query if either the light moves or any of the objects in the list have moved. That way you'd never need to go through the entire scene to invalidate cached results.
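A hypothetical sketch of that caching, assuming each light and entity carries a "moved" flag in addition to the types from the sketch above (note that an entity moving into range from outside would also need handling, e.g. by testing moved entities against the light's bounds):

// Redo the query only when the light or one of its previously lit entities has moved.
void UpdateLightQueryCache(SceneManager& scene, Light& light)
{
    bool invalid = light.moved;
    for (Entity* e : light.litEntities)
        if (e->moved) { invalid = true; break; }

    if (invalid)
    {
        light.litEntities = scene.QuerySphere(light);   // or QueryFrustum for spot lights
        light.moved = false;
    }
}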

So I have shaders with different permutations of lights (for example 1 light, 2 lights, 3 lights, 4 lights).

So I was thinking each entity would keep track of which lights act upon it. But after reading your response, would it be better to render the ambient/texture for each entity, then set the framebuffer blending to additive and render multiple passes, one for each light using that light's list of objects?

So then the question becomes:

Is it better to have a MEGA shader that handles all the textures, lighting, and materials in one pass? (I'm talking about simply rendering an entity with lights; no reflection, glow, or multipass techniques.)

Or

Have smaller shaders and send the geometry multiple times? (i.e. each entity renders its ambient, then each light renders the entities it influences.)

Some people go by uber-shaders that handle everything thrown at them, and others prefer compiling specialized shaders for each possible lighting and material combination at run-time. If you go the uber-shader route you would probably want to keep all your rendering in a single pass if possible to keep your GPU work low.

What you are suggesting- splitting up the work amongst several shaders, would fit better in setups like deferred rendering and lighting. Sending the geometry multiple times sounds wasteful, and I've only seen it justified in light pre-pass rendering, in hardware setups where the framebuffer memory is limited.

If you have many lights in your scene visible at one time, deferred rendering may be a preferred choice. Your idea of drawing spheres as point lights and additively blending the framebuffer is commonly used in a deferred renderer.

My deferred rendering setup works as follows: I have a Scene class that contains lists of Models, Directional Lights, and Point Lights. (Spotlights and others I would like to support eventually...) It directly adds all these objects to the Scene, calling the constructors for each. Note the lack of Cameras in the scene. I wanted the flexibility to easily swap in the current camera to show the Scene with.

In the drawing phase, the Scene and active Camera get passed to a SceneCuller class, which culls all the Models that fall outside the Camera's view. I'm just using a brute-force method for everything; it works so far with thousands of sphere-frustum tests (I don't use AABBs). Another culling function culls the Point Lights the same way. Then it's ready to draw all the objects to the screen.
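In rough outline it looks something like this (class and member names are guesses for illustration, not the exact code):

#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 normal; float d; };           // plane equation: dot(normal, p) + d = 0

struct BoundingSphere   { Vec3 center; float radius; };
struct Model            { BoundingSphere bounds; /* mesh, material, ... */ };
struct PointLight       { BoundingSphere bounds; /* color, intensity, ... */ };
struct DirectionalLight { Vec3 direction; };

struct Scene
{
    std::vector<Model>            models;
    std::vector<DirectionalLight> directionalLights;
    std::vector<PointLight>       pointLights;
    // Note: no camera here; the active camera is passed in at draw time.
};

struct Frustum { Plane planes[6]; };

// Brute-force sphere vs. frustum test: the sphere is outside if it lies
// entirely behind any of the six (inward-facing) planes.
static bool SphereInFrustum(const BoundingSphere& s, const Frustum& f)
{
    for (const Plane& p : f.planes)
    {
        float dist = p.normal.x * s.center.x + p.normal.y * s.center.y
                   + p.normal.z * s.center.z + p.d;
        if (dist < -s.radius)
            return false;
    }
    return true;
}

struct SceneCuller
{
    std::vector<const Model*>      visibleModels;
    std::vector<const PointLight*> visibleLights;

    void Cull(const Scene& scene, const Frustum& cameraFrustum)
    {
        visibleModels.clear();
        visibleLights.clear();
        for (const Model& m : scene.models)
            if (SphereInFrustum(m.bounds, cameraFrustum))
                visibleModels.push_back(&m);
        for (const PointLight& l : scene.pointLights)
            if (SphereInFrustum(l.bounds, cameraFrustum))
                visibleLights.push_back(&l);
    }
};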

New game in progress: Project SeedWorld

My development blog: Electronic Meteor

If your target hardware allows, generally I would advise (*) having shader permutations for different numbers of lights just like you describe, to have fewer draw calls and less framebuffer blending.

However, in some cases it will result in wasted shading, for example if there is a huge object lit by small spot lights spaced far apart, or if you have a lot of overdraw.

For those cases you can always scale your multi-light shader system back to using only the 1-light permutation + blending.

(*) = I actually don't practice what I preach, as my own engine currently uses only 1 light per pass in forward rendering. However, it is fairly competitive, in usual cases faster than deferred rendering, as it combines the ambient pass with the first light and aggressively marks the volume affected by a light into the stencil buffer before rendering the lit objects. Rendering 1 light per pass also allows reusing shadow maps.

CC Ricers and AgentC, thanks for your comments.

I'll probably have no more than 2 or 3 lights on average, maybe around 8 in some cases. Nothing like 300 lights or anything.

So, to make sure I better understand forward lighting, let me know if I have the concept correct.

- Each light has a cache (list) of the entities whose AABB intersects its sphere.
- The render manager renders the ambient material, global ambient, and textures (if any) in one pass.
- Then blending is set to additive and depth testing is set to equal-or-less-than.
- Then each light renders its entities (only adding the LightAmbient, LightDiffuse, and LightSpecular values).

* Directional lights only need to know about visible objects. So, thinking about this some more, maybe I should include the directional lights in the first ambient pass, sending in N directional lights. Then a scene with 3 point lights would require only 4 passes.
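In GL terms I picture the pass loop looking roughly like this (DrawEntity is a placeholder, and I'm reusing the hypothetical Entity/Light types from the sketches earlier in the thread):

#include <GL/gl.h>
#include <vector>

// Placeholder: binds the right shader permutation and draws the entity with the given lights.
void DrawEntity(const Entity& e, const std::vector<Light*>& lights);

void RenderForward(const std::vector<Entity*>& visibleEntities,
                   const std::vector<Light*>& directionalLights,
                   const std::vector<Light*>& visiblePointLights)
{
    // Pass 1: global ambient + material/textures + all directional lights, writing depth.
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    for (Entity* e : visibleEntities)
        DrawEntity(*e, directionalLights);

    // Additive passes: one per point light, over only the entities that light influences.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);      // dst = dst + src
    glDepthFunc(GL_LEQUAL);           // reuse the depth laid down in pass 1
    glDepthMask(GL_FALSE);
    for (Light* light : visiblePointLights)
        for (Entity* e : light->litEntities)
            DrawEntity(*e, std::vector<Light*>{ light });
}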

Is this a good approach?

Also, the lights' lists/caches of entities would be updated when an entity is moved. Or is it still better to just do dynamic AABB checks on the visible objects at render time for each point light?

Even though the lit object queries would be made per light, you can still build the per-object light lists from those, and you don't necessarily have to make the directional lights a special case, if you do, for example, the following:

- Render the "first pass" for all opaque objects: use replace blend mode + render ambient&emissive + the first n lights from the object's light list, whatever your shaders can handle

- For opaque objects whose lights were not all rendered yet: switch to additive blend mode + render the rest of the lights. Repeat until the light lists of all opaque objects have been exhausted

- Transparent objects need to be rendered in back-to-front order, interleaving the first & additive passes

(ie. most distant object first pass, most distant object additive pass (if needed), second most distant object first pass...)

Note that in forward rendering you always have to sample the object's material textures again in each lit pass. Your description above sounds like the lit passes would not sample material textures anymore, which sounds more like deferred rendering.
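A rough sketch of the opaque-object flow above (MaxLightsPerPass and the DrawEntity* functions are placeholders, and the Entity/Light types are the hypothetical ones from earlier; transparent objects would interleave these passes back-to-front instead):

#include <GL/gl.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Placeholders: draw the entity with lights[start .. start + count) using a matching
// shader permutation; the additive variant samples the material textures again.
void DrawEntityFirstPass(const Entity& e, const std::vector<Light*>& lights,
                         std::size_t start, std::size_t count);
void DrawEntityAdditivePass(const Entity& e, const std::vector<Light*>& lights,
                            std::size_t start, std::size_t count);

const std::size_t MaxLightsPerPass = 4;   // whatever the shader permutations support

void RenderOpaque(const std::vector<Entity*>& opaqueEntities)
{
    // First pass: replace blend, ambient & emissive + the first n lights of each object.
    glDisable(GL_BLEND);
    for (Entity* e : opaqueEntities)
        DrawEntityFirstPass(*e, e->lights, 0,
                            std::min(MaxLightsPerPass, e->lights.size()));

    // Additive passes: keep going until every object's light list is exhausted.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glDepthFunc(GL_EQUAL);   // only re-shade surfaces already resolved in the first pass
    bool lightsLeft = true;
    for (std::size_t start = MaxLightsPerPass; lightsLeft; start += MaxLightsPerPass)
    {
        lightsLeft = false;
        for (Entity* e : opaqueEntities)
        {
            if (start >= e->lights.size())
                continue;
            DrawEntityAdditivePass(*e, e->lights, start,
                                   std::min(MaxLightsPerPass, e->lights.size() - start));
            lightsLeft = true;
        }
    }
}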

Ah yeah, my mistake for not mentioning the texture being sampled on each light pass. I am a bit confused, though. Unless I read your original post wrong, it sounded like your advice was to have each light know which entities it lights, as opposed to the entities knowing which lights are near them. Then the renderer would just query the light objects and get a list of entities to forward render with.

Am I understanding this correctly?

Also, I've never done shadows, but I've read you need to render the scene from the light's viewpoint, so I would think it would be an advantage if the light knew which entities it cared about, for faster shadow depth buffer rendering? Again, I've yet to implement shadows, but that is one of my learning goals.

Also, thanks for the help, I really appreciate it.

What I meant was to have both:

- The lights knowing the entities they light. This is what I'd consider the authoritative information (generated from scene queries), and it can persist from frame to frame if nothing moves.

- The entities' light lists can be created from the lights' entity lists. This is derived information, it's simplest to regenerate each frame, should not be a big hit. It's also where you'd cap the maximum lights if for example you want some particular object to only have 2 lights maximum.
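Deriving the per-entity lists could look roughly like this (essentially the same loop as in the earlier sketch, with the cap added; again hypothetical types):

#include <cstddef>
#include <vector>

// Rebuild the derived per-entity lists from the authoritative per-light lists each frame.
void RebuildEntityLightLists(const std::vector<Light*>& visibleLights,
                             const std::vector<Entity*>& visibleEntities,
                             std::size_t maxLightsPerEntity)
{
    for (Entity* e : visibleEntities)
        e->lights.clear();

    for (Light* light : visibleLights)
        for (Entity* e : light->litEntities)            // from the scene queries
            if (e->lights.size() < maxLightsPerEntity)  // per-entity cap
                e->lights.push_back(light);
}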

Yes, for shadow rendering you certainly need to know the entities within a light's view in any case.

