Handling multiple lights

5 comments, last by Infinisearch 7 years, 1 month ago

Various solutions on the web for handling multiple light sources in forward rendering hardcode a fixed number of lights in the shader or use a texture to encode the lights. Is it somehow possible (and for which HLSL shader model) to directly allocate a variable number of lights?

Or is the forward rendering approach not really useful if one wants to add shadows anyway and should a deferred rendering approach be preferred for each light separately (+ shadow mapping)? How scalable in terms of the number of point lights is the basic shadow mapping algorithm for common scenes?

🧙

Shadows work pretty much the same in forward and deferred.

The only way to have a variable sized array in HLSL is to use a texture/buffer. Constants / cbuffers must have a fixed size... However, you can define a maximum size and then use a variable to hold the actual size. That lets you write a shader that supports, say, up to 8 lights.
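A minimal HLSL sketch of that fixed-maximum approach: a cbuffer holds an array sized to `MAX_LIGHTS`, plus a count of how many entries are actually valid this draw. All names (`MAX_LIGHTS`, `g_lightCount`, `PointLight`, the register slots) are illustrative, not from any particular engine:

```hlsl
#define MAX_LIGHTS 8

struct PointLight
{
    float3 position;
    float  radius;
    float3 color;
    float  pad;
};

cbuffer LightBuffer : register(b1)
{
    uint       g_lightCount;          // actual number of lights this draw (<= MAX_LIGHTS)
    float3     g_pad;
    PointLight g_lights[MAX_LIGHTS];  // fixed-size array; unused slots are ignored
};

float3 ShadeLights(float3 P, float3 N, float3 albedo)
{
    float3 result = 0;
    for (uint i = 0; i < g_lightCount; ++i)   // loop over the *actual* count
    {
        float3 L    = g_lights[i].position - P;
        float  dist = length(L);
        L /= dist;
        float atten = saturate(1.0 - dist / g_lights[i].radius);
        result += albedo * g_lights[i].color * saturate(dot(N, L)) * atten;
    }
    return result;
}
```

The CPU side just fills however many slots it needs and writes the count; the loop bound being a cbuffer variable (rather than a literal) is what makes the shader handle "up to N" lights.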

You can draw one light per pass in forward too. The first light for an object uses opaque blending and LEQUAL depth, then all subsequent lights draw that object again with additive blending and EQUAL depth testing.
You can use this in combination with the fixed-size light list technique too -- e.g. the first 4 lights opaque, then the next 4 draw the object again additively.
Also if you use ambient lighting, you need to make sure it only applies during the first/opaque pass, or you'll double up.
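The pass-state logic above can be sketched as a small host-side function. The enums here are illustrative stand-ins for the real D3D blend and depth-stencil states (e.g. additive blending and an EQUAL depth func), not actual API types:

```cpp
#include <cstddef>

// Illustrative stand-ins for the real API blend/depth states.
enum class Blend   { Opaque, Additive };
enum class DepthFn { LessEqual, Equal };

struct PassState { Blend blend; DepthFn depth; };

// First pass over an object: write it opaquely with LESS_EQUAL depth
// (this is also where ambient belongs, so it isn't doubled up).
// Every later pass re-draws the same object additively with EQUAL depth
// testing, so only the already-resolved surface accumulates more light.
PassState ForwardLightingPass(std::size_t passIndex)
{
    if (passIndex == 0)
        return { Blend::Opaque,   DepthFn::LessEqual };
    return     { Blend::Additive, DepthFn::Equal     };
}
```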

Shadows work pretty much the same in forward and deferred.

Don't you mean lighting (since there is no visibility component except some finite volume in space a light affects)?


e.g. the first 4 lights opaque, then thr next 4 draw the object again additively.

Does one normally use this approach at the level of every model part + material, or at the level of the full world (i.e. render the world opaque/additive with these x lights, etc.)?

🧙

Working on stuff like this with D3D9 at the moment. My journal has details if you are interested.
Don't you mean lighting (since there is no visibility component except some finite volume in space a light affects)?

I think he means shadows, in that sampling shadow maps is done the same way in both forward and deferred.

-potential energy is easily made kinetic-

Render geometry visible to light from the perspective of the camera and at each pixel (or is it fragment) transform the point to the light's space

Isn't this by definition deferred?

Currently I have not implemented shadow mapping, but since you are referring to the light's space I start wondering what the most convenient space is for shading. I currently work in view space, since the view vector is obtained practically for free. The same applies to the shading point in view space, as one obtains it before going to projection space. Furthermore, the world-to-view, view-to-projection and world-to-view inverse transforms stay fixed for the full frame.

🧙

I deleted that part for a reason... it's shadow mapping for forward, I'm not sure about shadow mapping for deferred. IIRC for deferred, instead of rendering object geometry you render light geometry (into a light accumulation buffer). But I'm not too familiar with deferred, so hopefully Hodgman will answer back. But the basic principle is the same... sample the shadow map at every relevant pixel.

Isn't this by definition deferred?

No, deferred implies a G-buffer.

As to spaces, you need the point in the light's space so you can sample the correct portion of the shadow map.
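A sketch of that transform in HLSL, assuming the view-space shading discussed above: the CPU premultiplies view-to-light-view-to-light-projection into one matrix, and the shader compares the projected depth against the shadow map via a comparison sampler. All resource names and register slots are assumptions for illustration:

```hlsl
// Illustrative resources; names and register slots are assumptions.
Texture2D              g_shadowMap     : register(t0);
SamplerComparisonState g_shadowSampler : register(s0);

cbuffer ShadowTransform : register(b2)
{
    // view space -> light view -> light projection, premultiplied on the CPU
    float4x4 g_viewToLightProj;
};

float ShadowFactor(float3 positionView)
{
    // Transform the view-space shading point into the light's clip space.
    float4 posLight = mul(float4(positionView, 1.0f), g_viewToLightProj);
    posLight.xyz /= posLight.w;

    // Clip space [-1, 1] -> texture space [0, 1] (y is flipped in D3D).
    float2 uv = float2(0.5f + 0.5f * posLight.x, 0.5f - 0.5f * posLight.y);

    // Hardware depth comparison: 1 = lit, 0 = in shadow.
    return g_shadowMap.SampleCmpLevelZero(g_shadowSampler, uv, posLight.z);
}
```

Working in view space for shading is fine; you just need the one extra matrix per light to reach the shadow map.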

-potential energy is easily made kinetic-

This topic is closed to new replies.
