Shannon Barber

Shadowing Techniques


I've been trying to put together the various techniques for rendering shadows, their cost, inaccuracies, and the required hardware support to make them viable. For the moment, I'm focusing just on shadows, so something like bump-mapping isn't listed. Any corrections or additional techniques? And does the double-fail (Carmack's Reverse) stencil-buffer technique perform self-shadowing?

Ground-Plane Shadows
- Implementation: re-render the casters using a grey color and a plane-projection matrix
- Inaccuracy: only casts shadows onto flat, axis-aligned planes
- Hardware: hardware z-buffer
- Cost: additional rendering of the shadow-casting objects

Light Maps
- Implementation: pre-calculated textures
- Inaccuracy: pre-computed, not dynamic
- Hardware: alpha-blending with multiple passes, or multi-texturing
- Cost: one additional texture/pass (for all lights)

Attenuation Maps
- Implementation: pre-calculated LUT textures
- Inaccuracy: basic dynamic shadow; only works well for point sources
- Hardware: alpha-blending with multiple passes, or multi-texturing
- Cost: additional texture/pass per light

Shadow Volume (stencil-based)
- Implementation: "Carmack's Reverse"
- Inaccuracy: produces correct shadows
- Hardware: stencil buffer (how many bits? 8+?)
- Cost: extra scene render per shadow (light) source, plus an additional pass for the viewed scene

Shadow Volume (projected)
- Implementation: projected shadow mapping
- Inaccuracy: requires sufficient vertex density to produce correct shadows
- Hardware: hardware texture-matrix acceleration, render-to-texture
- Cost: additional texture with multi-texturing, or multiple passes with subtractive blending, plus a computationally intensive CPU-based shadow-volume/target-vertex "collision" test

Soft Shadows
- Implementation: ?
- Inaccuracy: ?
- Hardware: pixel shader
- Cost: ?

The "ground plane shadows" i believe are what are refered to as "shadow maps" and are certainly not limited to axis aligned planes, remember, you can transform anything into any space, it would not be hard to transform a texture like that onto an arbitrary triangle

Correct me if I'm wrong, but I'm pretty confident.

-Dan

No, the ground-plane shadow flattens a mesh into a planar shape, and then renders it with a gray or transparent black color.
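For reference, here is a minimal sketch of the planar-projection ("squash") matrix this technique uses. The function name and the column-major layout are my own assumptions for illustration, not anything from a particular API:

```cpp
// Flatten geometry onto the plane a*x + b*y + c*z + d = 0 as seen from
// 'light' (light[3] = 1 for a point light, 0 for a directional light).
// Column-major (OpenGL-style) matrix storage is assumed.
void BuildPlanarShadowMatrix(const float plane[4], const float light[4],
                             float m[16])
{
    const float dot = plane[0] * light[0] + plane[1] * light[1]
                    + plane[2] * light[2] + plane[3] * light[3];

    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            m[col * 4 + row] = (row == col ? dot : 0.0f)
                             - light[row] * plane[col];
}
```

You multiply this onto the modelview matrix and re-render the caster in flat grey (or alpha-blended black), which is the extra rendering cost mentioned in the original post.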

Also, attenuation maps and vertex lighting don't produce shadows at all. They represent a factor of the lighting equation, not light occlusion information.

You missed out:

Shadow mapping
Render depth information for each light, then render scene with depth information projected onto geometry, doing a pass/fail test for occlusion
Suffers from aliasing artifacts due to limited texture resolution
Requires shader hardware
1 render per light per relevant update (can be cached while scene is static)
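To make the pass/fail test concrete, here's a rough sketch of the per-pixel comparison written as plain C++ (on real hardware this lives in the shader / depth-compare stage); the function and parameter names are illustrative only:

```cpp
// Rough sketch of the shadow-map occlusion test for one point.
// 'lightViewProj' is the (column-major) matrix the shadow map was
// rendered with; 'shadowMap' holds depth values in [0,1].
bool IsOccluded(const float lightViewProj[16], const float worldPos[3],
                const float* shadowMap, int mapSize, float bias)
{
    // Transform the world-space point into the light's clip space.
    float clip[4];
    for (int r = 0; r < 4; ++r)
        clip[r] = lightViewProj[0 * 4 + r] * worldPos[0]
                + lightViewProj[1 * 4 + r] * worldPos[1]
                + lightViewProj[2 * 4 + r] * worldPos[2]
                + lightViewProj[3 * 4 + r];

    // Perspective divide, then remap from [-1,1] to [0,1].
    const float u     = (clip[0] / clip[3]) * 0.5f + 0.5f;
    const float v     = (clip[1] / clip[3]) * 0.5f + 0.5f;
    const float depth = (clip[2] / clip[3]) * 0.5f + 0.5f;

    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
        return false;                       // outside the light's frustum

    // Occluded if something in the shadow map is closer to the light.
    const int x = (int)(u * (mapSize - 1));
    const int y = (int)(v * (mapSize - 1));
    return depth - bias > shadowMap[y * mapSize + x];
}
```

The bias term is the usual fudge factor against "shadow acne" caused by depth quantization.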

ZFail stencil (aka Carmack's reverse) does handle self shadowing.

The topic of shadows is IMO the most complex one in graphics programming. Here are a couple of additional details you might be interested in:

Projected shadows
The idea is relatively similar to shadow mapping, but it does not handle self-shadowing; on the other hand, it has no particular hardware requirements.
You assign a texture to an object, and every time you need to update the shadow, you clear up the texture to white and render your object in black into this texture. Then at render time you project this texture into the scene, but only on the part of the scene inside the shadow frustum.
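The update step looks roughly like this in fixed-function OpenGL; this is only a sketch of the idea (Mesh, DrawMesh and the texture setup are assumed placeholders), not code from my engine:

```cpp
#include <GL/gl.h>

struct Mesh;                  // placeholder for the caster's geometry
void DrawMesh(const Mesh&);   // assumed helper that issues the geometry

// Re-render the caster in black into the (white-cleared) shadow texture,
// as seen from the light.  Assumes the modelview/projection matrices have
// already been set up for the light's point of view, and that 'shadowTex'
// already has storage allocated.
void UpdateProjectedShadow(GLuint shadowTex, int texSize, const Mesh& caster)
{
    glViewport(0, 0, texSize, texSize);

    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // white = "no shadow"
    glClear(GL_COLOR_BUFFER_BIT);

    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glColor3f(0.0f, 0.0f, 0.0f);            // caster drawn flat black
    DrawMesh(caster);

    // Grab the result into the shadow texture for projection at render time.
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texSize, texSize);
}
```

At render time you'd then project this texture onto the receivers (texture matrix or texgen) and modulate it in, restricted to the shadow frustum as described above.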

Dark mapping
The name is copyright Ysaneya. I have never heard anybody use this technique, but in my engine I will mostly rely on this one, which is an adaptation of lightmapping. Lightmapping is obsolete now; it simply does not work well with per-pixel lighting, which any state-of-the-art engine must have today. Dark maps are similar to lightmaps except that:
- they do not encode a color, but a "grayscale" brightness.
- there is one dark map per object and per light.
At run-time, you have one pass per light. When you paint the objects affected by your light, you render them using per-pixel lighting as usual, but you modulate the end result by the dark map. This allows you to have static lights with soft shadows; it's pretty fast and has no particular hardware requirements, but you can no longer use radiosity or global illumination solutions. Another advantage is that you can turn lights on/off independently of each other.
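To illustrate, here is a trivial C++ sketch of how the final pixel color ends up being accumulated; this is only an illustration of the math (in the engine it is of course one per-pixel shading pass per light), and the names are made up:

```cpp
// How one pixel's color is built up from per-light lighting results,
// each modulated by that light's static dark-map value before summing.
void ShadePixel(const float lit[][3],    // per-light per-pixel lighting (RGB)
                const float darkMap[],   // per-light dark-map value, 0..1
                int numLights,
                const float ambient[3],
                float out[3])
{
    for (int c = 0; c < 3; ++c)
        out[c] = ambient[c];

    for (int i = 0; i < numLights; ++i)
        for (int c = 0; c < 3; ++c)
            out[c] += lit[i][c] * darkMap[i];   // dark map only attenuates
}
```

The key point is just that each dark map is a pure 0..1 attenuation applied after the per-pixel lighting of its own light, which is why lights can be toggled or re-colored independently.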

Shadow mapping
Probably the best technique to handle dynamic shadows, as stencil shadows don't scale very well to high polycounts. However, shadow mapping is also IMO the hardest one to implement "correctly".

- if you want to render point lights, you'll have to use a cube shadow map. Unfortunately, our video cards don't support depth cube textures, so you must cheat by using a pixel shader. You can use a floating-point render-target cube texture, or an RGBA cube texture in which you encode/decode the depth into 4 bytes in a pixel shader. It works, but it's tricky. That's what I used in this image:
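The 4-byte trick is essentially splitting the depth into successive 8-bit "digits". A rough C++ sketch of the encode/decode (on hardware both halves naturally live in pixel shaders, and the exact constants vary between implementations):

```cpp
#include <math.h>

// Pack a depth value in [0,1) into four 8-bit channels, and unpack it again.
void PackDepth(float depth, unsigned char rgba[4])
{
    float d = depth;
    for (int i = 0; i < 4; ++i)
    {
        d *= 256.0f;
        float digit = floorf(d);        // next 8-bit "digit" of the depth
        rgba[i] = (unsigned char)digit;
        d -= digit;
    }
}

float UnpackDepth(const unsigned char rgba[4])
{
    return rgba[0] / 256.0f
         + rgba[1] / 65536.0f
         + rgba[2] / 16777216.0f
         + rgba[3] / 4294967296.0f;
}
```

Note that this packing assumes point sampling; bilinear filtering of the packed channels would corrupt the reconstructed depth.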



- to get rid of artifacts near the occluder, you must use second-depth shadow mapping. This means actually rendering into two shadow maps, and averaging the depths in a pixel shader.

- you cannot get rid of the aliasing artifacts at the shadow boundary. You can minimize them by increasing the shadow map resolution, but even at 1024 or 2048, you will have trouble rendering a light at good quality past 10 meters. Techniques like perspective shadow maps or trapezoidal shadow maps can help, but they are very tricky (and do not give good results in all cases) and require that you update your shadow map every frame (while with standard shadow mapping you can cache your results).

- to lessen aliasing you can soften your shadows. You take N samples and average them. In addition to NVidia's PCF, this can give pretty good results, at the cost of speed obviously. The best solution for soft shadows is to use pixel shader 3.0 with a conditional: you take 4 samples; if they are all white or all black, you simply average them; else you take an additional number of samples (like 60 other random samples) and average those too. Sampling 64 textures is obviously a performance killer, but you only need this in the shadow penumbra, which is likely to be small in screen space.
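A rough C++ sketch of that branch, just to make the structure clear; shadowSample() (one shadow-map comparison returning 1 for lit, 0 for shadowed) and the jittered offset table are assumed helpers, not real API calls:

```cpp
float shadowSample(float u, float v);      // assumed: one depth comparison
extern const float jitter[60][2];          // assumed: pre-computed offsets
const float texel = 1.0f / 1024.0f;        // example shadow-map texel size

float SoftShadow(float u, float v)
{
    // 4 cheap, widely spaced taps first.
    float s = shadowSample(u - texel, v - texel)
            + shadowSample(u + texel, v - texel)
            + shadowSample(u - texel, v + texel)
            + shadowSample(u + texel, v + texel);

    if (s == 0.0f || s == 4.0f)     // fully shadowed or fully lit:
        return s * 0.25f;           // the 4-tap average is already exact

    // Otherwise we're in the penumbra: take the extra ~60 jittered samples.
    for (int i = 0; i < 60; ++i)
        s += shadowSample(u + jitter[i][0], v + jitter[i][1]);
    return s / 64.0f;
}
```

On SM3 hardware the expensive branch really is skipped for fully lit or fully shadowed pixels, which is what keeps the average cost down.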

Y.

Quote:
ZFail stencil (aka Carmack's reverse) does handle self shadowing.


I'd like to elaborate on that subject:

Yes, the typical stencil shadow solutions (ZPass and ZFail) support self-shadowing, but Doom3, the most prominent stencil shadow user, turns off a lot of self-shadowing because of one large flaw. That flaw is that when a polygon's normal goes just past the point where it's orthogonal to the light source, it goes full black even if it should still be in the light. However, Doom3 kind of wimped out by turning off a lot of self-shadowing flags (basically not rendering the object to the master depth buffer used to determine whether a pixel is shadowed or not), when a far more visually appealing solution would have been to use depth biases on the shadow volumes. Not only does this retain self-shadowing, it also removes the near-orthogonal problem.


Anyway, back to shadow maps. I think the most practical solution to anti-aliasing the shadow map is multisampling. The Unreal3 engine apparently uses 16x sampling, and in the videos that have been released, the shadows (at least those that use shadow maps) are, needless to say, virtually flawless. I've tried as hard as I can to find any problems with the shadows, and come up blank.

Also, for shadow maps on a point light, if you're using a cube depth map (through some render-to-texture), you have to keep in mind that if an object crosses the boundary between faces, it needs to be rendered again for the other faces, or else you'll have some pretty... interesting shadows.

Lastly, in regards to PSMs, I haven't seen too many implementations of them (actually, only one in motion: the one 3DMark05 test with the airship) and, needless to say, I think they absolutely suck. I saw a lot of flickering in the shadows in the 3DMark05 test, and from what I hear you can get some odd white speckles in a PSM implementation if polygons get angled to the light in just the wrong way.

So, in the next year or so while shadow maps become 'perfectly' implemented, I think things will be very interesting, but in the end very rewarding because shadow maps do look very nice.

Quote:
Original post by gazsux
Do you have a link to further info on dark mapping? I couldn't find anything good after a quick Google of it.


Quote:
Original post by Ysaneya
Dark mapping
The name is copyright Ysaneya. I have never heard anybody use this technique


I don't think you're expected to find anything. [smile]

I guess I'll ramble about some other techniques; I can't say whether they are still in use or not.

One is a variation on a depth map, but is really more of a sorted index map:

- sort your geometry relative to the lamp
- draw it onto a texture such that the furthest-away geometry has the lowest alpha value and the nearest has the highest value
- then, when drawing the geometry to the screen, draw the texture in additive mode with alpha testing turned on
- change the alpha-test reference value for each piece of geometry

If you want self-shadowing, split concave bodies into numerous convex ones.

This technique was mentioned in either the first or second Game Programming Gems book, and IIRC a picture was shown of it running on a PlayStation 2.

It suffers quite a bit from interpolation at the shadow edges, but doesn't require any special hardware.

An OpenGL implementation can be seen at the following URL:
http://www.daionet.gr.jp/~masa/ishadowmap/ [page in japanese, there is a .txt readme]


The other technique is a variation on stencil shadows, but this time using the alpha buffer instead of the stencil buffer. Not all graphics cards have stencil buffers [the authors noted that they did this because of 3dfx's Voodoo cards, but IIRC the PlayStation 2 also lacks a stencil buffer], but by rendering to a texture instead of the screen, it can be possible to increase framerate by stressing the fill rate less [while also sacrificing the edges of your shadows].

Well anyway, instead of incrementing/decrementing the stencil buffer, you multiply/divide the alpha buffer by two [by changing your blend mode]. Somehow I imagine this would be a problem when you are looking through bunches of shadow volumes, but they didn't note it as an issue at all [perhaps they don't draw all front faces, then all back ones, but rather the front then the back of each volume?]

Google rules! [I found the paper, or at least a paper I had seen on the topic]

http://www9.cs.fau.de/Persons/Roettger/papers/SHADOWS.PDF

Quote:

I don't think you're expected to find anything.


That is true :) At least under that name.

I seriously doubt I've invented something original. If I remember well, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a way I'm proposing a regression. The only little difference is that, instead of using a single lightmap for all the lights affecting a polygon, you separate these lightmaps per-light, which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.

Y.

Quote:

Lightmapping is obsolete now; it simply does not work well with per-pixel lighting which any state-of-the-art engine must have today


That's not entirely true. Half-Life 2, as far as I know, uses Radiosity Normal Mapping, which combines low-res radiosity maps with high-res normal maps, and the results are pretty good.

Quote:
Original post by Ysaneya
I seriously doubt I've invented something original. If I remember well, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a way I'm proposing a regression. The only little difference is that, instead of using a single lightmap for all the lights affecting a polygon, you separate these lightmaps per-light, which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.

Y.


I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.

In addition, I'm rendering black dynamic casters into dest alpha before the lighting pass to support dynamic shadows.

So, the rendering goes:

Render per-light occlusion maps (which in my case include per-pixel attenuation modulated in as well) into dest alpha, render any dynamic casters modulated into dest alpha, then blend in per-pixel diffuse & specular lighting on top. This method supports any number of lights and is fairly fast: ~30 fps on a GF4 4200 with 5 lights and dynamic shadows turned on.
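In rough OpenGL-flavored code, the per-light pass order is something like the sketch below (assuming depth has already been laid down in an earlier pass). The specific blend states here are my shorthand for the idea, not lifted from the engine, and the draw helpers are assumed:

```cpp
#include <GL/gl.h>

void drawOcclusionAttenuationMap(int light);   // assumed helpers
void drawDynamicCasterShadows(int light);
void drawPerPixelLighting(int light);

void RenderLightingPasses(int numLights)
{
    for (int light = 0; light < numLights; ++light)
    {
        // 1. Lay down this light's occlusion*attenuation map in dest alpha.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
        glDisable(GL_BLEND);
        drawOcclusionAttenuationMap(light);

        // 2. Modulate dynamic caster shadows (rendered black) into dest alpha.
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_SRC_COLOR);
        drawDynamicCasterShadows(light);

        // 3. Add per-pixel diffuse + specular, gated by what's left in alpha.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
        glBlendFunc(GL_DST_ALPHA, GL_ONE);
        drawPerPixelLighting(light);
    }
}
```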

Here is an example screenshot, a few weeks old.

Quote:
Original post by mikeman
Quote:

Lightmapping is obsolete now; it simply does not work well with per-pixel lighting which any state-of-the-art engine must have today


That's not entirely true. Half-Life 2, as far as I know, uses Radiosity Normal Mapping, which combines low-res radiosity maps with high-res normal maps, and the results are pretty good.

Correct, most [current] games use lightmaps of some sort.

The Source engine is unusual in that it stores three of them.

Each of these lightmaps is the light on the surface, but coming from a particular direction. They have a shader that allows them to combine a high-detail normal map with their low-detail static lighting.

I don't have a screenshot handy [or a place to upload it to], but you can see this on some walls in CS:Source where there is a lamp just a bit away from the wall, and you can see the bumps on the wall being made visible by the lamp.

I'd imagine having static lightmaps stored this way could also help them make some nice dynamic shadows, but the dynamic lighting is not very good in the CS:Source engine. All shadows are simply projected, blurred, rendered-to-texture silhouettes [which aren't blended to overlap properly], and a character [well, anything with dynamic lighting] cannot be partially in static shadow and partially not.

Quote:
Original post by Ysaneya

Lightmapping is obsolete now; it simply does not work well with per-pixel lighting, which any state-of-the-art engine must have today.

Y.


Could you explain some of the problems that you have found with per-pixel lighting and lightmaps?

Quote:

The only little difference is that, instead of using a single lightmap for all the lights affecting a polygon, you separate these lightmaps per-light, which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.


I can see how this lets you turn the light on/off, but how can you change the parameters of the light? With the help of shaders?

And is the dark map calculated by some kind of radiosity solution?

Lizard

Quote:
Original post by SimmerD
Quote:
Original post by Ysaneya
I seriously doubt I've invented something original. If I remember well, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a way I'm proposing a regression. The only little difference is that, instead of using a single lightmap for all the lights affecting a polygon, you separate these lightmaps per-light, which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.

Y.


I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.

Quickie questions for either of you ('cos I'm always interested in new shadow stuff [grin]):
- How's the memory use compared to vanilla lightmapping? I'd expect an increase, but I assume it's manageable?
- How do you pull off the lighting for your moving entities? Project your per-light lightmaps vertically or something like Q3's light grid?
- Is that a hard shadow on your dynamic object? Have you tried using a projected shadow texture rendered into the dest alpha (I love using dest alpha for these effects, shame it tends to be quite slow compared to normal rendering).

Quote:
Original post by OrangyTang

I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.
Quickie questions for either of you ('cos I'm always interested in new shadow stuff [grin]):
- How's the memory use compared to vanilla lightmapping? I'd expect an increase, but I assume it's manageable?
- How do you pull off the lighting for your moving entities? Project your per-light lightmaps vertically or something like Q3's light grid?
- Is that a hard shadow on your dynamic object? Have you tried using a projected shadow texture rendered into the dest alpha (I love using dest alpha for these effects, shame it tends to be quite slow compared to normal rendering).


1) Well, it's not too bad, because the light color is factored out of the occlusion*attenuation maps, so each map only needs a single channel. You could either store 4 of them in a 32-bit RGBA texture, or just have separate A8 or DXT5 textures with just an alpha channel (my plan). However, there is definitely some cost because of the one lightmap per light.

2) I do many raycasts (~9) towards each light every frame, and just use that single amount over the entire entity. Since entities are relatively small, it looks pretty good. Another approach I haven't done is to do 3 raycasts from 3 different points in a horizontal triangle around the entity's head, then interpolate between those 3 weights for each vertex in the vertex shader. This would allow half-lit/half-shadowed effects.

3) Yes, it's a pretty hard shadow, although I am using bilinear filtering on the map right now. It is first rendered to a texture page offscreen with all other shadows for that light, then rendered to dest alpha, then the lighting is applied. I could certainly blur the entire shadow map, or just sample the shadow at 4 offsets to make it a bit softer.

Why do you say dest alpha rendering is slow? Slower than other forms of alpha blending? I haven't found this myself...

Guest Anonymous Poster
Soft shadows with penumbra wedges:
- calculate silhouette edge and render hard shadow as with shadow volumes
- for each silhouette edge, calculate maximum penumbra wedge from light source
- render the insides of the penumbra wedges using a pixel shader that computes the edge's coverage of the light source, and add to or subtract from the shadow depending on whether the point is on the light side or the shadow side
This is very slow; it requires both a lot of rasterizing (the penumbra wedges produce tons of rejected pixels) and a lot of geometry processing (computing the wedges).

Quote:
Original post by SimmerD
Why do you say dest alpha rendering is slow? Slower than other forms of alpha blending? I haven't found this myself...

I used pretty much the same method (load light intensity into dest alpha, modulate all drawing by it) and my general impressions were:

- Rendering to *only* the dest alpha was somewhat slower (I assume because you need to leave the intermingled RGB untouched, and because it's a pretty uncommon case).
- The actual modulation of the scene geometry by the dest alpha (via blending) was the real bottleneck. On the cards I was using (GF2 & GF4), the massive amount of reading from the framebuffer hits the maximum memory bandwidth and slows the whole thing down.

The end result was *really* sensitive to the resolution used. 800x600 was just about smooth on a GF4 if I remember right.

Rendering to dest alpha only will be slower than regular RGBA rendering, as you say, because the hardware has to read the old value, mask off RGB, and then write out RGBA, so you are effectively blending, which is often slower than not blending.

I myself am getting ~35 fps at 1024x768 on a gf4 4200, which is right about my target framerate. Hopefully it won't slow down too much as I add more features... ;)

Quote:
Shadow mapping
Render depth information for each light, then render scene with depth information projected onto geometry, doing a pass/fail test for occlusion
Suffers from aliasing artifacts due to limited texture resolution
Requires shader hardware
1 render per light per relevant update (can be cached while scene is static)


Actually, shadow mapping can be done without shaders, using multitexture and gl_texture_env_combine. Here's a demo I found that implements this technique on older hardware:
http://www.paulsprojects.net/opengl/shadowmap/shadowmap.html

I wish I could mention the technique I've been working on for the last few months... but since I'm trying to get an article published on it, I probably shouldn't just yet. Stay tuned! [smile]

Perhaps you and others may find the following useful - http://www.rasterise.com/links.html

It is a list of shadowing papers and articles that I have found over the last few weeks whilst doing research for my project. All were found on Google and stuff but it should save you some time :)

I've found http://scholar.google.com to be useful for filtering out the rubbish :)

Speaking of soft shadows, did anybody see (or implement) the technique NVidia used for their Dawn demo?

They are using shadow mapping, but instead of applying the shadow map onto the geometry in world space, they project it in texture (i.e. lightmap-like) space, and then blur this lightmap. The amount of blurring can even be made dependent on the pixel-to-occluder distance, since you have the depth info from the shadow map.

Then they apply this lightmap over the geometry like you would a normal one, except that it's dynamically generated. It's also very efficient, since you can reuse the same lightmap for many frames, so it's less expensive than anti-aliasing the shadow map.

Finally because the blur is done in texture space instead of eye space, you do not have this "halo" artifact between lit/unlit areas.

Y.

