Organizing shadow map textures in GPU memory

Started by
6 comments, last by mv348 11 years, 6 months ago
So I am trying to decide how I want to store the shadow map data for various kinds of lights in GPU memory.

The way I see it I can either:

1. Generate an individual texture for each light.

2. Generate a texture array for each light (i.e. a global, directional light uses a texture array of length 4 for the 4 cascaded shadow
maps, a point light uses a texture array of length 6 for all the different directions, a spot light uses an array of length 1, etc.)

3. Generate a single, massive texture array, lights reference it by index.


My intuition tells me that 2 might be the best option because it would make it easier to create and destroy lights. On the other hand, a nice thing about 3 is that I can dynamically assign shadow maps to lights depending on their proximity to the viewer: as the camera moves, nearby lights are assigned available shadow maps, and distant lights give up theirs. Also, if the array size is fixed, it avoids repeated loading and unloading of texture data, and sets a clear limit on both the number of shadow renders and the size of shadow map data in memory.
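For option 3, the bookkeeping can be as simple as re-ranking lights by distance each frame and handing out pool indices. A minimal sketch in Python (the light dict shape, `POOL_SIZE`, and function names are illustrative, not from any real engine API):

```python
# Sketch of option 3: a fixed pool of shadow-map slots (texture-array layers),
# reassigned each frame by light-to-viewer distance.

POOL_SIZE = 8  # fixed number of shadow maps in the texture array

def assign_shadow_slots(lights, camera_pos):
    """Return {light_id: slot_index}; lights beyond the pool get -1."""
    def dist2(light):
        lx, ly, lz = light["pos"]
        cx, cy, cz = camera_pos
        return (lx - cx) ** 2 + (ly - cy) ** 2 + (lz - cz) ** 2

    ranked = sorted(lights, key=dist2)  # nearest lights first
    slots = {}
    for i, light in enumerate(ranked):
        slots[light["id"]] = i if i < POOL_SIZE else -1
    return slots
```

Since the pool is fixed, the GPU allocation never changes; only the index mapping is rebuilt per frame.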

What do you all think?
Just some more food for thought:
*All of the options listed assume that each light needs its own shadow map resource. There are also rendering pipeline designs where every light can share the same shadow map buffer (e.g. for each light: render shadow, apply shadow), which simplifies things somewhat.
*For lights that require multiple shadow maps (point lights, cascades, etc.), you can also use viewports within a larger texture instead of texture arrays.
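As a sketch of that viewport idea, here is how the six cube faces of a point light might be laid out as viewports inside one larger texture, assuming a 3×2 grid of equal tiles (the layout and function name are illustrative):

```python
# Instead of a 6-layer texture array for a point light's cube faces,
# carve six viewports out of one larger texture (3 columns x 2 rows).

def cube_face_viewports(face_size):
    """Viewport rects (x, y, w, h) for 6 cube faces packed in a 3x2 grid."""
    rects = []
    for face in range(6):
        col, row = face % 3, face // 3
        rects.append((col * face_size, row * face_size, face_size, face_size))
    return rects
```

Each shadow pass then sets the viewport to the face's rect before rendering, and the sampling shader offsets its texture coordinates into the same rect.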
Can you give me a little more detail about that kind of approach? Are you talking about deferred shadows? Also, why would you go through the trouble of using an enlarged viewport instead of a texture array? Are texture arrays undesirable for any reason?


Adjusting the viewport allows you to dynamically change shadow map resolution without reallocating textures. In general you need fewer pixels for small or distant lights.
And yes - pure deferred rendering requires only a single shadow map.
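A minimal sketch of that idea, picking a per-light shadow resolution from distance by halving the base resolution every `falloff` units (the thresholds and names are illustrative):

```python
# Pick a shadow-map resolution for a light based on its distance from the
# viewer: halve the base resolution per `falloff` units, clamped to a minimum.

def shadow_resolution(distance, base=1024, falloff=20.0, min_res=128):
    """Return a power-of-two resolution that shrinks with distance."""
    res = base >> int(distance / falloff)  # halve once per falloff step
    return max(res, min_res)
```

With the viewport approach, changing this value per frame costs nothing; with per-light textures it would force a reallocation.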
Lauris Kaplinski

First technology demo of my game Shinya is out: http://lauris.kaplinski.com/shinya
Khayyam 3D - a freeware poser and scene builder application: http://khayyam.kaplinski.com/
Are you talking about deferred shadows?
Yes, with deferred shadows you collect lighting information into a full screen buffer (sometimes called a shadow collector), which is later used by the lighting shaders.
This is also nice because it splits the dependency between lighting and shadowing into two different shaders. On my last game (which was forward rendered - but this idea works equally well with deferred shading), depending on the scene we supported screen-space shadows (SSAO-inspired), stencil shadows, projected shadows, static lightmaps, and/or regular shadow maps to fill in the "shadow collector" texture, which was later used by the forward-lighting shaders without them having to care where the shadows came from. We could even mix several of these shadowing techniques for different lights/objects, and the regular forward-lighting shaders stayed the same!

On another game I've worked on, instead of one shadow map per light, we used one shadow map per light per object. So if there were 10 cars on a track with 2 lights, we'd do 20 shadow-map passes, where each pass only rendered a single car. Each pass could share the same shadow map resource (making great use of its resolution) after the previous pass had been applied to the full-screen collector.

http://developer.amd..._CryEngine2.pdf
http://aras-p.info/b...ed-shadow-maps/
http://mynameismjp.w...ow-maps-sample/

Besides "deferred shadows", regular deferred shading can be implemented in the same "recycling" manner without making use of a full-screen shadow collector. Instead you simply render the shadow map(s) for a light, then perform the deferred light accumulation for that light, then go to the next light, and so on.
Also, why would you go through the trouble of using an enlarged viewport instead of a texture array? Are texture arrays undesirable for any reason?
As Lauris mentioned above, you can dynamically resize your viewports to get different resolution shadow maps. e.g. a 2048² texture could hold 4 maps of 1024², or 1 map of 1536² plus 7 of 512².
Also, older hardware doesn't support texture arrays, so depending on your minimum specifications, they might not be an option.
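As a quick arithmetic check of the packing example above (area equality alone doesn't guarantee a packing exists, though both layouts here do pack: the 1536² map leaves an L-shaped strip that 512² tiles fill exactly):

```python
# Verify that both suggested layouts use exactly the area of a 2048x2048 atlas.

ATLAS = 2048 * 2048

def total_area(maps):
    """maps: list of (size, count) pairs of square shadow maps."""
    return sum(size * size * count for size, count in maps)
```

Both `[(1024, 4)]` and `[(1536, 1), (512, 7)]` come out to exactly the atlas area, which is why those particular mixes were chosen.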
Thanks for all your responses!

Hodgeman, I'm a little puzzled about what you said here:

Besides "deferred shadows", regular deferred shading can be implemented in the same "recycling" manner without making use of a full-screen shadow collector. Instead you simply render the shadow map(s) for a light, then perform the deferred light accumulation for that light, then go to the next light, and so on.

Keep in mind that I only have a rough idea of how deferred shading works at present, though (as you can see from my other thread) I'm strongly considering that route. So are you saying deferred shading makes the usual deferred shadows technique unnecessary, and there's a simpler way to do it?

Either way, I'd just like to verify that I understand the idea of (ordinary) deferred shadows properly. Here's my current understanding (I admit some of it is based on intuition):


for each light:
    render all objects from the light's point of view (depth only) into the shadow map
    render all objects from the camera's point of view (depth only), testing for shadows
        if the pixel is in shadow, set the full-screen texture pixel to 1
blur the shadow texture
render all objects, darkening each pixel whose texture element is 1


Of course, objects not in view of the light can be culled out.

Is this the right idea?
I'd just like to verify that I understand the idea of (ordinary) deferred shadows properly. Here's my current understanding (I admit some of it is based on intuition)
...
Of course, objects not in view of the light can be culled out. Is this the right idea?
Yes, that would work. Usually, though, you render 0 for 'in shadow' and 1 for 'not in shadow', so that when lighting you can just multiply the light value by this texture.

Also, this technique is typically optimised so that you don't have to repeatedly draw each object from the camera's point of view. Instead you can combine it with the "z-pre-pass" idea and the "reconstruct position from depth" idea:
render all objects from the camera's point of view, writing out depth only
for each light:
    render all objects from the light's point of view (depth only) into the shadow map
    render a fullscreen quad, which reconstructs each pixel's position from its depth and tests that position for shadows
    write out the shadow result (0/1)
blur the shadow texture
render all objects, multiplying lighting values with shadow values
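The shadow test at the heart of those steps can be sketched per pixel. Here is a 1D toy version, assuming each pixel's depth has already been reconstructed into the light's depth range (the function name and bias value are illustrative):

```python
# Per-pixel shadow collector test: a pixel is lit (1.0) if nothing in the
# light's depth map is closer to the light than the pixel itself; otherwise
# it is in shadow (0.0). A small bias avoids self-shadowing ("shadow acne").

def shadow_collector(pixel_depths_from_light, light_depth_map, bias=0.005):
    """Return 1.0 (lit) or 0.0 (shadowed) per pixel."""
    return [
        1.0 if pd <= sm + bias else 0.0
        for pd, sm in zip(pixel_depths_from_light, light_depth_map)
    ]
```

This follows the 0/1 convention above, so the result can be multiplied directly into the lighting values.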
I'm a little puzzled about what you said here: ... So are you saying deferred shading makes the usual deferred shadows technique unnecessary, and there's a simpler way to do it?
With deferred shading, you've got your G-buffer pass, where you collect all the attributes of each pixel by rendering all the objects. Then you switch to your 'lighting buffer', and either clear it to black or initialise it with ambient lighting. Then, for each light, you additively blend that light's results into the lighting buffer. Before you render each light, you can compute the shadow map for that light (into a recycled shadow-map resource).
render all objects from the camera's point of view, writing out G-buffer attributes (position/depth, colour, normal, etc.)
render a fullscreen quad, which writes out ambient lighting (by reading G-buffer attributes)
for each light:
    render all objects from the light's point of view (depth only) into the shadow map
    render a fullscreen quad (or geometry that covers the bounding volume of the light), which tests for shadows and calculates diffuse/specular lighting (using the shadow map and G-buffer attributes)
    write out the light's contribution using additive blending
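The additive accumulation in that loop can be sketched with per-pixel scalars; `accumulate_lighting` and its inputs are illustrative stand-ins for the GPU's additive blend:

```python
# Deferred light accumulation: seed the lighting buffer with ambient, then
# additively blend each light's shadow-masked contribution into it.

def accumulate_lighting(ambient, light_contribs):
    """light_contribs: per light, a list of (intensity, shadow 0/1) per pixel."""
    buf = list(ambient)  # initialise lighting buffer with the ambient term
    for contrib in light_contribs:
        for i, (intensity, shadow) in enumerate(contrib):
            buf[i] += intensity * shadow  # additive blend, masked by shadow
    return buf
```

Because each light's shadow map is consumed immediately inside the loop, a single recycled shadow-map resource serves every light.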
Thanks for those great explanations, Hodgeman! While the details of deferred rendering are still a bit fuzzy to me, I'm sure your description will be clear once I've read up more on it.

The first algorithm you gave makes a whole lot of sense, though. Thanks so much!

