Shadowing Techniques
I've been trying to put together a list of the various techniques for rendering shadows, their costs, inaccuracies, and the hardware support required to make them viable. For the moment I'm focusing just on shadows, so something like bump-mapping isn't listed.
Any corrections or additional techniques? And does the double-fail (Carmack's Reverse) stencil-buffer technique perform self-shadowing?
Technique: Ground-Plane Shadows
Implementation: Re-render using grey and a plane-projection matrix
Inaccuracy: Only casts shadows onto flat, axis-aligned planes
Hardware: Hardware z-buffer
Cost: Additional rendering of shadow-casting objects

Technique: Light Maps
Implementation: Pre-calculated textures
Inaccuracy: Pre-computed, not dynamic
Hardware: Alpha-blending with multiple passes, or multi-texturing
Cost: One additional texture/pass (for all lights)

Technique: Attenuation Maps
Implementation: Pre-calculated LUT textures
Inaccuracy: Basic dynamic shadow; only works well for point sources
Hardware: Alpha-blending with multiple passes, or multi-texturing
Cost: Additional texture/pass per light

Technique: Shadow Volume
Implementation: "Carmack's Reverse"
Inaccuracy: Produces correct shadows
Hardware: Stencil buffer (how many bits? 8+?)
Cost: Extra scene render per shadow-casting light source, plus an additional pass for the viewed scene

Technique: Shadow Volume
Implementation: Projected shadow mapping
Inaccuracy: Requires sufficient vertex density to produce correct shadows
Hardware: Hardware texture-matrix acceleration, render-to-texture
Cost: Additional texture with multi-texturing, or multiple passes with subtractive blending, plus a computationally intensive CPU-based shadow-volume/target-vertex "collision" test

Technique: Soft Shadows
Implementation: ?
Inaccuracy: ?
Hardware: Pixel shader
Cost: ?
The "ground plane shadows" I believe are what are referred to as "shadow maps", and they are certainly not limited to axis-aligned planes. Remember, you can transform anything into any space; it would not be hard to transform a texture like that onto an arbitrary triangle.
Correct me if I'm wrong, but I'm pretty confident.
-Dan
No, the ground-plane shadow flattens a mesh into a planar shape, and then renders it with a gray or transparent black color.
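The flattening step can be sketched on the CPU. This is a minimal illustration (names and types are mine, not from the post): each vertex is slid along the light direction until it lands on the ground plane y = 0, which is where the flattened mesh would then be drawn in gray.

```cpp
#include <cassert>

// Hypothetical sketch of ground-plane shadow projection:
// flatten a vertex onto the plane y = 0 along a directional light.
struct Vec3 { float x, y, z; };

// lightDir must have a non-zero y component (the light must not be
// parallel to the ground plane).
Vec3 projectOntoGround(Vec3 v, Vec3 lightDir) {
    // Find t such that (v + t * lightDir).y == 0, then move the vertex there.
    float t = -v.y / lightDir.y;
    return { v.x + t * lightDir.x, 0.0f, v.z + t * lightDir.z };
}
```

In a real renderer the same computation is baked into a 4x4 projection matrix applied in one pass, but the per-vertex math is exactly this.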
Also, attenuation maps and vertex lighting don't produce shadows at all. They represent a factor of the lighting equation, not light occlusion information.
You missed out:
Shadow mapping
Render depth information for each light, then render scene with depth information projected onto geometry, doing a pass/fail test for occlusion
Suffers from aliasing artifacts due to limited texture resolution
Requires shader hardware
1 render per light per relevant update (can be cached while scene is static)
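The pass/fail occlusion test described above can be sketched like this (function and parameter names are mine, for illustration only): the depth map stores, per texel, the depth of the closest surface as seen from the light, and a fragment is lit only if its own light-space depth is not farther than the stored value, plus a small bias against self-shadowing "acne".

```cpp
#include <cassert>
#include <vector>

// Hypothetical CPU sketch of the shadow-map depth comparison.
// depthMap[v][u] holds the closest depth seen from the light at that texel.
bool isLit(const std::vector<std::vector<float>>& depthMap,
           int u, int v, float fragmentDepth, float bias = 0.005f) {
    // Lit if nothing nearer to the light occupied this texel.
    return fragmentDepth <= depthMap[v][u] + bias;
}
```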
ZFail stencil (aka Carmack's reverse) does handle self shadowing.
The topic of shadows is IMO the most complex one in graphics programming. Here are a couple of additional details you might be interested in:
Projected shadows
The idea is relatively similar to shadow mapping, but it does not handle self-shadowing; on the other hand, it has no particular hardware requirements.
You assign a texture to an object, and every time you need to update the shadow, you clear the texture to white and render your object in black into it. Then at render time you project this texture onto the scene, but only on the part of the scene inside the shadow frustum.
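The projection step amounts to remapping light-space coordinates into texture space and rejecting anything outside the shadow frustum. A minimal sketch (names are mine; assumes the point has already been transformed into the light's NDC space, i.e. divided by w):

```cpp
#include <cassert>

// Hypothetical sketch of projective texturing for projected shadows:
// remap [-1,1] NDC to [0,1] texture coordinates and check the point
// lies inside the shadow frustum before sampling.
struct Uv { float u, v; bool inside; };

Uv projectToShadowTexture(float ndcX, float ndcY, float ndcZ) {
    Uv r;
    r.u = ndcX * 0.5f + 0.5f;
    r.v = ndcY * 0.5f + 0.5f;
    r.inside = r.u >= 0.0f && r.u <= 1.0f &&
               r.v >= 0.0f && r.v <= 1.0f &&
               ndcZ >= -1.0f && ndcZ <= 1.0f;
    return r;
}
```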
Dark mapping
The name is mine (copyright Ysaneya). I've never heard anybody use this technique, but in my engine I will mostly rely on it; it is an adaptation of lightmapping. Lightmapping is obsolete now; it simply does not work well with per-pixel lighting, which any state-of-the-art engine must have today. Dark maps are similar to lightmaps except that:
- they do not encode a color, but a "grayscale" brightness.
- there is one dark map per object and per light.
At run-time, you have one pass per light. When you paint the objects affected by your light, you render them using per-pixel lighting as usual, but you modulate the end result by the dark map. This allows you to have static lights with soft shadows; it's pretty fast and has no particular hardware requirements, but you can no longer use radiosity or global-illumination solutions. Another advantage is that you can turn lights on/off independently of each other.
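The accumulation described above can be sketched for a single pixel (all names hypothetical): each light's per-pixel lighting term is modulated by that light's grayscale dark-map sample, and the per-light passes add up, which is why lights can be toggled independently.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of dark-map shading for one pixel.
// perPixelLighting[i] and darkMapSample[i] belong to light i;
// the dark map is a precomputed grayscale occlusion factor.
float shadePixel(const std::vector<float>& perPixelLighting,
                 const std::vector<float>& darkMapSample,
                 const std::vector<bool>& lightEnabled) {
    float result = 0.0f;
    for (size_t i = 0; i < perPixelLighting.size(); ++i)
        if (lightEnabled[i])
            result += perPixelLighting[i] * darkMapSample[i]; // one additive pass per light
    return result;
}
```

Disabling a light simply removes its term from the sum; no other light's contribution changes.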
Shadow mapping
Probably the best technique to handle dynamic shadows, as stencil shadows don't scale very well to high polycounts. However shadow mapping is also IMO the hardest one to implement "correctly".
- if you want to render point lights, you'll have to use a cube shadow map. Unfortunately our video cards don't support depth cube textures, so you must cheat by using a pixel shader. You can use a floating-point render cube texture, or an RGBA cube texture in which you encode/decode the depth into 4 bytes in a pixel shader. It works, but it's tricky. That's what I used in this image:
- to get rid of artifacts near the occluder, you must use second-depth shadow mapping. This means actually rendering into two shadow maps and averaging the depths in a pixel shader.
- you cannot get rid of the aliasing artifacts at the shadow boundary. You can minimize them by increasing the shadow-map resolution, but even at 1024 or 2048 you will have trouble rendering a light at good quality past 10 meters. Techniques like perspective shadow maps or trapezoidal shadow maps can help, but they are very tricky (and do not give good results in all cases) and require you to update your shadow map every frame (while with standard shadow mapping you can cache your results).
- to lessen aliasing you can soften your shadows: you take N samples and average them. In addition to NVidia's PCF, this can give pretty good results, at the cost of speed obviously. The best solution for soft shadows is to use pixel shader 3.0 with a conditional: you take 4 samples, and if they are all white or all black, you simply average them; else you take an additional number of samples (like 60 other random samples) and average them. Sampling 64 textures is obviously a performance killer, but you only need this in the shadow penumbra, which is likely to be small in screen space.
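The 4-byte depth encoding mentioned for RGBA cube textures can be sketched on the CPU (function names are mine; the constants are the usual 256-based split one would mirror in a pixel shader):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical sketch: pack a depth value in [0,1) into 4 bytes,
// as one might when depth cube textures are unavailable.
void encodeDepth(float depth, uint8_t out[4]) {
    float d = depth;
    for (int i = 0; i < 4; ++i) {
        d *= 256.0f;
        float b = std::floor(d);      // take the next 8 bits of precision
        if (b > 255.0f) b = 255.0f;
        out[i] = (uint8_t)b;
        d -= b;                        // keep the remainder for the next byte
    }
}

float decodeDepth(const uint8_t in[4]) {
    // Inverse of the split above: weighted sum of the 4 bytes.
    return in[0] / 256.0f + in[1] / 65536.0f
         + in[2] / 16777216.0f + in[3] / 4294967296.0f;
}
```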
Y.
Do you have a link to further info on dark mapping? I couldn't find anything good after a quick Google of it.
Quote:ZFail stencil (aka Carmack's reverse) does handle self shadowing.
I'd like to elaborate on that subject:
Yes, the typical stencil shadow solution (ZPass and Zfail) support self-shadowing, but Doom3, the most prominent stencil shadow user turns off a lot of self-shadowing because of one large flaw. That flaw is that when a polygon's normal just goes past the point where it's orthogonal to the light source, it goes full black even if it should still be in the light. However, Doom3 kind of wimped out by turning off a lot of self shadowing flags (basically not rendering the object to the master depth buffer used to determine if a pixel is shadowed or not) when a far more visually appealing solution would have been to use depth biases on the shadow volumes. Not only does this retain self-shadowing, but removes the near-orthogonal problem.
Anyways, back to shadow maps. I think the most practical solution for anti-aliasing the shadow map is multisampling. The Unreal 3 engine apparently uses 16x sampling, and in the videos that have been released, the shadows (at least those that use shadow maps) are, needless to say, virtually flawless. I've tried as hard as I can to find problems with the shadows and come up blank.
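The branch-on-penumbra filtering idea raised earlier in the thread (4 cheap samples, refine only when they disagree) can be sketched like this; sample counts and names are illustrative only:

```cpp
#include <cassert>
#include <vector>

// Hypothetical CPU sketch of conditional soft-shadow filtering.
// cheapSamples: a few binary shadow tests (0 = shadowed, 1 = lit);
// extraSamples: the expensive refinement, consulted only in the penumbra.
float softShadow(const std::vector<float>& cheapSamples,
                 const std::vector<float>& extraSamples) {
    float sum = 0.0f;
    for (float s : cheapSamples) sum += s;
    // If all samples agree, we're fully lit or fully shadowed: done cheaply.
    if (sum == 0.0f || sum == (float)cheapSamples.size())
        return sum / cheapSamples.size();
    // Mixed results mean penumbra: pay for the extra samples.
    for (float s : extraSamples) sum += s;
    return sum / (cheapSamples.size() + extraSamples.size());
}
```

Only the penumbra pixels, typically a small fraction of the screen, hit the expensive path.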
Also, for shadow maps on a point light, if you're using a cube depth map (through some render-to-texture), keep in mind that if an object crosses the boundary between faces, it needs to be rendered again for the other faces, or else you'll get some pretty... interesting shadows.
Lastly, in regards to PSMs, I haven't seen too many implementations of them (actually, only one in motion: the 3DMark05 test with the airship), and needless to say, I think they absolutely suck. I saw a lot of flickering in the shadows in the 3DMark05 test, and from what I hear you can get some odd white speckles in a PSM implementation if polygons get angled to the light just right.
So, in the next year or so while shadow maps become 'perfectly' implemented, I think things will be very interesting, but in the end very rewarding because shadow maps do look very nice.
Quote:Original post by gazsux
Do you have a link to further info on dark mapping? I couldn't find anything good after a quick Google of it.
Quote:Original post by Ysaneya
Dark mapping
Name is (copyright Ysaneya). I never heard anybody use this technique
I don't think you're expected to find anything. [smile]
I guess I'll ramble about some other techniques, though I can't say whether they are still used or not.
one is a variation on a depth map, but is really more of a sorted index map:
sort your geometry relative to the lamp
draw it onto a texture such that the furthest away geometry has the lowest alpha value, the nearest has the highest value
then, when drawing the geometry to the screen, draw the texture in additive mode with alpha testing turned on
change each alpha test value for each piece of geometry
if you want self shadows, split concave bodies into numerous convex ones
this technique was mentioned in either the first or second game programming gems books, and iirc a picture was shown of it running on a playstation 2.
it suffers very much from the interpolation of the edges of shadows, but doesn't require any special hardware
an opengl implementation can be seen at the following url:
http://www.daionet.gr.jp/~masa/ishadowmap/ [page in japanese, there is a .txt readme]
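The steps above can be sketched on the CPU (this is my reading of the technique, with hypothetical names, not code from the book): geometry sorted by distance from the light gets increasing alpha indices, nearest highest, and a piece is shadowed at a texel wherever the map holds a higher index than its own, i.e. something nearer the light covered it.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Piece { float distToLight; int alphaIndex; };

// Assign alpha indices: furthest piece gets the lowest value,
// nearest the highest, matching the description above.
void assignAlpha(std::vector<Piece>& pieces) {
    std::sort(pieces.begin(), pieces.end(),
              [](const Piece& a, const Piece& b) {
                  return a.distToLight > b.distToLight; // furthest first
              });
    for (size_t i = 0; i < pieces.size(); ++i)
        pieces[i].alphaIndex = (int)i; // increasing toward the light
}

// The per-texel alpha test: shadowed if the map holds a higher index,
// i.e. some piece nearer the light was drawn over this texel.
bool inShadow(int mapAlpha, int pieceAlpha) { return mapAlpha > pieceAlpha; }
```

In hardware the comparison is done by the alpha test, with the reference value changed per piece of geometry.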
the other technique is a variation on stencil shadows, but this time using the alpha buffer instead of the stencil buffer. Not all graphics cards have stencil buffers [the authors noted that they did this because of 3dfx's Voodoo cards, but iirc the PlayStation 2 also lacks a stencil buffer], but by rendering to a texture instead of the screen, it can be possible to increase framerate by stressing the fill rate less [but also sacrificing the quality of your shadow edges].
well anyway, instead of incrementing/decrementing the stencil buffer, you multiply/divide the alpha buffer by two [by changing your blend mode]. Somehow I imagine this would be a problem when you are looking through bunches of shadow volumes, but they didn't note it as an issue at all [perhaps they don't draw all front faces, then all back faces, but rather the front then the back of each volume?]
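The counting idea can be sketched as follows (my reconstruction, not the authors' code): starting from alpha = 1.0, each volume front face behind the pixel halves the alpha and each back face doubles it, so alpha below 1.0 afterwards means the pixel sits inside at least one volume.

```cpp
#include <cassert>

// Hypothetical sketch of alpha-buffer shadow-volume counting:
// multiplicative blending stands in for stencil increment/decrement.
float crossVolumes(int frontFacesBehindPixel, int backFacesBehindPixel) {
    float alpha = 1.0f;
    for (int i = 0; i < frontFacesBehindPixel; ++i) alpha *= 0.5f; // enter a volume
    for (int i = 0; i < backFacesBehindPixel; ++i)  alpha *= 2.0f; // leave a volume
    if (alpha > 1.0f) alpha = 1.0f; // final clamp; a real alpha buffer clamps
                                    // on every write, which is one likely source
                                    // of the many-volumes problem noted above
    return alpha;
}

bool shadowed(float alpha) { return alpha < 1.0f; }
```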
google rules!

http://www9.cs.fau.de/Persons/Roettger/papers/SHADOWS.PDF
Quote:
I don't think you're expected to find anything.
That is true :) At least under that name.
I seriously doubt I've invented something original. If I remember correctly, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a way I'm proposing a regression. The only little difference is that, instead of using a single lightmap for all the lights affecting a polygon, you separate these lightmaps per light, which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.
Y.