Shadowing Techniques

24 comments, last by Mercury 19 years, 4 months ago
Quote:
Lightmapping is obsolete now; it simply does not work well with per-pixel lighting which any state-of-the-art engine must have today


That's not entirely true. Half-Life 2, as far as I know, uses Radiosity Normal Mapping, which combines low-res radiosity maps with high-res normal maps, and the results are pretty good.
True, but then it's no longer "light mapping" (as used in Quake/Unreal).

Y.
Quote:Original post by Ysaneya
I seriously doubt I've invented something original. If I remember correctly, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a sense I'm proposing a regression. The only small difference is that, instead of using a single lightmap for all the lights affecting a polygon, I separate these lightmaps "per-light", which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.

Y.


I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.

In addition, I'm rendering black dynamic casters into dest alpha before the lighting pass to support dynamic shadows.

So, the rendering goes:

1) Render per-light occlusion maps (which in my case include per-pixel attenuation modulated in as well) into dest alpha.
2) Render any dynamic casters modulated into dest alpha.
3) Blend in per-pixel diffuse & specular lighting on top.

This method supports any number of lights, and is fairly fast: ~30 fps on a GF4 4200 with 5 lights and dynamic shadows turned on.
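In code terms, the per-light accumulation might look something like this (a minimal Python sketch with my own names and data layout, not actual engine code): the light color is kept outside the baked map, so toggling a light or recoloring it needs no rebake.

```python
# Hedged sketch of the per-light pass: each light has a baked
# occlusion map with per-pixel attenuation folded in; the light's
# color is applied at runtime, so it can change without rebaking.

def shade_texel(lights, base_color):
    """Sum per-light contributions for one lightmap texel.

    lights: list of dicts with 'color' (r,g,b), 'occlusion' (0..1,
            includes baked attenuation) and an 'enabled' flag.
    """
    r = g = b = 0.0
    for light in lights:
        if not light["enabled"]:
            continue  # turning a light off is just skipping its map
        occ = light["occlusion"]  # sampled from that light's map
        r += light["color"][0] * occ
        g += light["color"][1] * occ
        b += light["color"][2] * occ
    # modulate the accumulated lighting with the surface's base color
    return (min(r, 1.0) * base_color[0],
            min(g, 1.0) * base_color[1],
            min(b, 1.0) * base_color[2])

lights = [
    {"color": (1.0, 0.8, 0.6), "occlusion": 0.5, "enabled": True},
    {"color": (0.2, 0.2, 1.0), "occlusion": 1.0, "enabled": False},
]
print(shade_texel(lights, (1.0, 1.0, 1.0)))  # only the first light counts
```

Flickering a light is then just animating its `color` entry per frame, with the baked occlusion untouched.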

Here is an example screenshot, a few weeks old.
Quote:Original post by mikeman
Quote:
Lightmapping is obsolete now; it simply does not work well with per-pixel lighting which any state-of-the-art engine must have today


That's not entirely true. Half-Life 2, as far as I know, uses Radiosity Normal Mapping, which combines low-res radiosity maps with high-res normal maps, and the results are pretty good.

Correct; most current games use lightmaps of some sort.

The Source engine is unusual in that it stores three.

Each of these lightmaps is the light on the surface, but coming from a particular direction. They have a shader that allows them to combine a high-detail normal map with their low-detail static lighting.

I don't have a screenshot handy [or a place to upload it to], but you can see this on some walls in CS:Source where there is a lamp just a bit away from the wall, and you can see the bumps on the wall being picked out by the lamp.

I'd imagine having static lightmaps stored this way could also help them make some nice dynamic shadows, but the dynamic lighting is not very good in the CS:Source engine. All shadows are simply projected, blurred, rendered-to-texture silhouettes [which aren't blended to overlap properly], and a character [well, anything with dynamic lighting] cannot be partially in static shadow and partially not.
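For anyone curious, the three-directional-lightmap combination could be sketched like this (Python; this uses the commonly cited orthonormal tangent-space basis, and the exact weighting in Valve's actual shader may differ):

```python
import math

# Hedged sketch of combining three directional lightmaps with a
# per-pixel normal, in the spirit of Radiosity Normal Mapping.
# Treat the weighting below as an approximation, not Source's shader.

BASIS = [
    ( math.sqrt(2.0 / 3.0),                   0.0, 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def combine(lightmaps, normal):
    """lightmaps: three (r,g,b) samples; normal: unit tangent-space normal.
    Because the basis is orthonormal, the squared dot products sum to 1,
    so the result is a convex blend of the three lightmap samples."""
    weights = [dot(normal, b) ** 2 for b in BASIS]
    return tuple(sum(w * lm[c] for w, lm in zip(weights, lightmaps))
                 for c in range(3))

# A flat normal (0,0,1) weights each of the three maps equally (1/3):
flat = combine([(1, 0, 0), (0, 1, 0), (0, 0, 1)], (0.0, 0.0, 1.0))
```

Tilting the normal toward one basis direction shifts the blend toward that direction's lightmap, which is what makes bumps show up under a nearby lamp.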
Quote:Original post by Ysaneya

Lightmapping is obsolete now; it simply does not work well with per-pixel lighting, which any state-of-the-art engine must have today.

Y.


Could you explain some of the problems that you have found with per-pixel lighting and lightmaps?

Quote:
The only small difference is that, instead of using a single lightmap for all the lights affecting a polygon, I separate these lightmaps "per-light", which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.


I can see how this lets you turn the light on/off, but how can you change the parameters of the light? With the help of shaders?

And also, is the dark-map calculated by some kind of radiosity solution?

Lizard
Quote:Original post by SimmerD
Quote:Original post by Ysaneya
I seriously doubt I've invented something original. If I remember correctly, Carmack even used something like that in the original Quake (he didn't encode colors, only brightness, in his lightmaps), so in a sense I'm proposing a regression. The only small difference is that, instead of using a single lightmap for all the lights affecting a polygon, I separate these lightmaps "per-light", which in turn allows you to turn lights on/off separately, or even to change some light parameters (diffuse and specular colors, brightness, etc.). Flickering lights should be no problem. The only remaining restriction is that lights still cannot move.

Y.


I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.

Quickie questions for either of you ('cos I'm always interested in new shadow stuff [grin]):
- How's the memory use compared to vanilla lightmapping? I'd expect an increase, but I assume it's manageable?
- How do you pull off the lighting for your moving entities? Project your per-light lightmaps vertically, or something like Q3's light grid?
- Is that a hard shadow on your dynamic object? Have you tried using a projected shadow texture rendered into the dest alpha? (I love using dest alpha for these effects; shame it tends to be quite slow compared to normal rendering.)
Quote:Original post by OrangyTang

I use this technique in my engine, originally described in 'Modern Graphics Engine Design'. I was using per vertex shadows at the time of the talk, but now I'm doing what you describe above.
Quickie questions for either of you ('cos I'm always interested in new shadow stuff [grin]):
- How's the memory use compared to vanilla lightmapping? I'd expect an increase, but I assume it's manageable?
- How do you pull off the lighting for your moving entities? Project your per-light lightmaps vertically, or something like Q3's light grid?
- Is that a hard shadow on your dynamic object? Have you tried using a projected shadow texture rendered into the dest alpha? (I love using dest alpha for these effects; shame it tends to be quite slow compared to normal rendering.)


1) Well, it's not too bad, because the light color is factored out of the occlusion*attenuation maps, so each map is a single channel. You could either store 4 of them in one 32-bit RGBA texture, or just have separate A8 or DXT5 textures with just an alpha channel (my plan). However, there's definitely some cost, because it's one lightmap per light.

2) I do many raycasts (~9) towards each light every frame, and just apply that aggregate occlusion uniformly over the entity. Since entities are relatively small, it looks pretty good. Another approach I haven't tried is to do 3 raycasts from 3 different points in a horizontal triangle around the entity's head, then interpolate between those 3 weights for each vertex in the vertex shader. This would allow half-lit/half-shadowed effects.

3) Yes, it's a pretty hard shadow, although I am using bilinear filtering on the map right now. It is first rendered to an offscreen texture page with all the other shadows for that light, then rendered into dest alpha, then the lighting is applied. I could certainly blur the entire shadow map, or just sample the shadow at 4 offsets to make it a bit softer.
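The single-channel storage idea in 1) is easy to sketch (Python; my own helper names, and the A8/DXT5 details are omitted): four lights' occlusion*attenuation maps interleaved into one RGBA texture, one channel per light.

```python
# Hedged sketch of packing four single-channel occlusion*attenuation
# maps into one RGBA texture. Only the baked occlusion needs storage;
# each light's RGB color is applied at runtime.

def pack_rgba(maps):
    """maps: up to four lists of 0..255 channel values, equal length.
    Returns a flat RGBA byte list with one channel per light."""
    assert 1 <= len(maps) <= 4
    texel_count = len(maps[0])
    packed = []
    for i in range(texel_count):
        for m in maps:
            packed.append(m[i])          # one light per channel
        packed.extend([0] * (4 - len(maps)))  # pad unused channels
    return packed

def sample(packed, texel, channel):
    """Read back one light's occlusion value at one texel."""
    return packed[texel * 4 + channel]

packed = pack_rgba([[255, 0], [10, 20], [30, 40], [50, 60]])
# light 2's occlusion at texel 1 is the third byte of that texel
```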

Why do you say dest alpha rendering is slow? Slower than other forms of alpha blending? I haven't found this myself...
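Going back to the three-raycast idea in 2), the per-vertex blend is just a barycentric interpolation of the three probe weights (a tiny Python sketch of what the vertex shader would do; names are mine):

```python
# Hedged sketch: cast shadow rays from three points around the entity,
# then blend the three visibility weights per vertex using barycentric
# coordinates over the probe triangle.

def interp_weight(w, bary):
    """w: visibility weights (0..1) at the three probe points;
    bary: barycentric coordinates of the vertex's projection onto the
    probe triangle (components sum to 1)."""
    return w[0] * bary[0] + w[1] * bary[1] + w[2] * bary[2]

# A vertex halfway between a fully lit probe and a fully shadowed one
# ends up half lit, which gives the half-in/half-out-of-shadow effect:
half_lit = interp_weight((1.0, 0.0, 0.0), (0.5, 0.5, 0.0))
```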
Soft shadows with penumbra wedges:
- calculate the silhouette edges and render a hard shadow as with ordinary shadow volumes
- for each silhouette edge, calculate the maximum penumbra wedge from the light source
- render the insides of the penumbra wedges using a pixel shader that computes the edge's coverage of the light source, adding or subtracting to the shadow depending on whether the point is on the light side or the shadow side
This is very slow; it requires both a lot of rasterizing (the penumbra wedges generate tons of rejected pixels) and a lot of geometry processing (building the wedges).
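The add/subtract bookkeeping in that last step amounts to something like this (a toy Python sketch of the arithmetic only, not real wedge-shader code; the coverage terms themselves come from the per-edge light-coverage computation):

```python
# Hedged sketch of the penumbra wedge pass: start from the hard
# shadow-volume result (fully lit or fully shadowed) and let each
# wedge pixel add or subtract the fractional light coverage computed
# for its silhouette edge.

def soft_visibility(hard_lit, coverage_terms):
    """hard_lit: 1.0 if the hard shadow test says lit, else 0.0.
    coverage_terms: signed fractions; positive terms come from the
    shadowed side of the hard edge (light added back), negative terms
    from the lit side (light removed)."""
    v = hard_lit
    for t in coverage_terms:
        v += t
    return max(0.0, min(1.0, v))  # clamp to a valid visibility

# A point just inside the hard shadow whose edge still exposes 30%
# of the light source ends up 30% lit instead of fully dark:
inner = soft_visibility(0.0, [0.3])
```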
Quote:Original post by SimmerD
Why do you say dest alpha rendering is slow? Slower than other forms of alpha blending? I haven't found this myself...

I used pretty much the same method (load light intensity into dest alpha, modulate all drawing by it) and my general impressions were:

- Rendering to *only* the dest alpha was somewhat slower (I assume because you're needing to leave the interleaved RGB untouched, and that's a pretty uncommon path).
- The actual modulation of the scene geometry by the dest alpha (via blending) was the real bottleneck. On the cards I was using (GF2 & GF4), the massive amount of reading from the framebuffer hits the maximum memory bandwidth and slows the whole thing down.

The end result was *really* sensitive to the resolution used; 800x600 was just about smooth on a GF4, if I remember right.
Rendering to dest alpha only will be slower than regular RGBA rendering, as you say, because the hardware has to read the old value, mask off RGB, and then write out RGBA. You are effectively blending, which is often slower than not blending.
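The modulation step being discussed is just a destination-alpha blend, roughly `glBlendFunc(GL_DST_ALPHA, GL_ONE)` in OpenGL terms (a minimal Python sketch of the arithmetic, with my own names):

```python
# Hedged sketch of the lighting-pass blend: dest alpha holds the
# accumulated occlusion*attenuation from the earlier passes, and the
# incoming per-pixel lighting is scaled by it and added on top
# (framebuffer = framebuffer + src * dst_alpha).

def blend_dest_alpha(dst_rgb, dst_a, src_rgb):
    """dst_rgb: current framebuffer color; dst_a: baked occlusion in
    the alpha channel; src_rgb: the lighting pass being blended in."""
    return tuple(min(1.0, d + s * dst_a)
                 for d, s in zip(dst_rgb, src_rgb))

# A half-occluded pixel receives half the light's intensity:
out = blend_dest_alpha((0.0, 0.0, 0.0), 0.5, (1.0, 1.0, 0.0))
```

The read-modify-write on every covered pixel is exactly why this pass chews framebuffer bandwidth, as described above.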

I myself am getting ~35 fps at 1024x768 on a gf4 4200, which is right about my target framerate. Hopefully it won't slow down too much as I add more features... ;)

