duhroach

Shadow mapping theory.


Hey all. I've been studying shadow mapping for about six months now, and I have to say, I've gotten nowhere. I've been able to write a small demo, but have yet to really "grasp" the process. After seeing the screenshot Yann posted a few threads ago, I'm wondering what I'm missing. It seems that shadow mapping is a lot more difficult unless you've tweaked your engine to do some nifty things. Self-shadowing is something I can't figure out, and I haven't found information on how to do it.

Another thing I can't understand is the huge debate between projected shadow volumes (a la Doom 3) and shadow mapping. Carmack says shadow mapping is good under controlled input, but not good for general world rendering, whereas Yann has stated that shadow mapping is more powerful in the long run and allows more general ability.

I guess my biggest beef is that I can't find explanations of the process that I actually understand. I've got about 20 PDF files explaining it, and dozens of demos going over the process, but each one covers tons of different approaches in tons of different ways. It's quite difficult to figure out which path is the correct one to follow.

So what I'm asking for is a clarification of the process. Does anyone have bare-bones, standard information on shadow mapping, something that doesn't come with a 10-page mathematics manual? I'm trying to figure out how to make this process quite powerful, but I keep hitting dead ends. Can anyone help?

~Main
==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

[edited by - duhroach on January 21, 2003 1:57:28 PM]

It's not one method or the other; both have their advantages and drawbacks, depending on the situation. They could even be combined. If you tweak your engine, you can get excellent visual quality with both methods. Both automatically handle self-shadowing, and both are very robust (shadowmaps inherently; stencil shadows, if vertex shader extrusion is used).

There's also a little personal preference involved here: Carmack prefers stencil shadows, I like shadowmapping more. It would be fun to discuss this issue with him, although I think he doesn't visit this site too often, unfortunately.

The biggest drawbacks of stencil shadows are the huge fillrate requirements and the additional faces required to render the volume (the amount can be overwhelming when used on complex geometry). They are better used in dark, highly occluded environments (like in Doom 3). On the other hand, stencil shadows are supported on pretty much any 3D card in use, although performance may vary. They are also very easy to implement. Unfortunately, stencil shadows can show some artifacts at the silhouette edges of geometry.

Shadowmaps are image-space methods. They don't need additional faces, nor additional fillrate. They do require an additional light update pass, though. They scale very well on large scenes. But being an image-space method, they are prone to some aliasing artifacts. Their biggest drawback is that hardware shadowmapping support is only available on more recent cards, but that might become less of an issue a year from now.

It depends mostly on your engine and level design which method would be most appropriate for you.

There are a lot of tricks you can use to make shadowmaps better. Better in this context means higher visual quality and fewer artifacts. Perspective shadowmaps, second depth maps, etc., are such methods. But you should first understand the technology behind basic shadowmapping before attempting more complex systems.

You said you have read lots of papers about the technique, so I guess linking to some more will probably not be very helpful. What part exactly do you have trouble understanding?

/ Yann

Yann, thanks for the reply. That helps clear up some differences.

Some of the key problems I'm having trouble with:
1. Self-shadowing. I'm using what some papers call the "item buffer" method. That is, I render all visible objects for the light, then all visible objects for the view. This creates only the projected shadow in a separate region; it does not shadow any other objects that were rendered during that pass (including the caster's own geometry). This is based on Kawase's demo. What am I doing wrong? Or how can I fix this? I haven't found a paper which specifically answers this question. (More so papers saying "Yeah, it's supported...")

2. How things get compared. I understand you render from the light and from the eye. Then, using some process, you "compare" the two images (which are depth maps, or luminance), and the resulting image is filled with 0s and 1s (or a range of values in deep shadow maps) determining whether the pixel is visible from the light. In Kawase's demo, he uses a glTexEnvi function to do the comparisons; a lot of other demos use NVIDIA's register combiners. I have no knowledge of, nor can I find a good listing of, what all of that does. Is there some sort of reference? Or how exactly is this process done, code-wise?

3. The projection problem. It's my understanding that once the images are compared, the new, resulting image is projected onto the scene from your light. Is this correct? According to Kawase, we disable the current object's rendering, then project the item. Is this the reason I'm getting no self-shadowing? I've seen demos that do this entirely in geometry space, and a weird one that did it entirely in texture space. Can you give me a link to a good description of this? Also, every method I've seen requires the use of OpenGL's multitexture functions for the projected shadow map. Is this 100% essential?

4. Shadow combining. With additional lights, there's the chance that shadow maps will overlap each other. However, instead of acting like normal light does (that is, light A can illuminate a shadow cast by object C), it simply places the two texture maps over each other. How do I get around this? Do I render to a larger shadow map for the entire scene (possibly using the perspective shadow map technique that you mentioned)?

All in all, this is the technique as I understand it:
1. Render the scene (without texture filling) from the camera into a depth buffer. Store that.
2. Render the scene (without texture filling) from light n into a depth buffer. Store that.
3. Compare image 1 to image 2 via a process (that I don't understand); the result will be a binary image 3.
4. Project image 3 onto the scene from the camera.
5. Repeat for all other lights.

I'm guessing that I'm quite off.


I know these seem to be large issues; any help you could give would be greatly appreciated. I have to say, Yann, seeing that screenshot made me realize how powerful this process is.

Thanks

~Main

==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

[edited by - duhroach on January 21, 2003 12:38:32 AM]

Guest Anonymous Poster
From what I understand, the basic idea is:

1. Create a screen bit buffer. Bit 1 means the pixel is visible from AT LEAST one light, bit 0 means the pixel is invisible from ALL lights. Initialize the bit buffer to all 0s.
2. Render the scene from the camera to DepthBufferC.
3. Render the scene from light 1 to DepthBufferL.
4. Loop over all pixels in DepthBufferC.
   For each pixel, you know its (Xc,Yc,Zc) (camera projection space coordinates; read Zc from DepthBufferC).
   Convert (Xc,Yc,Zc) to the light's projection space by:
   (Xp,Yp,Zp) (light proj space) = (Xc,Yc,Zc) * camera inv proj mat * camera inv view mat * light view mat * light proj mat
   Let Z = value in DepthBufferL at pixel (Xp,Yp).
   If Zp <= Z, the pixel is visible from this light. Set the bit in the bit buffer to 1.
5. Repeat 3 and 4 for the other lights.

Guest Anonymous Poster
Sorry, I wrote something wrong. It should be:

1. Bit 1 means the pixel is in shadow from AT LEAST one light, bit 0 means the pixel is visible from ALL lights. Initialize the bit buffer to all 0s.
4. If Zp > Z, the pixel is invisible from this light. Set the bit in the bit buffer to 1 (in shadow).
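
For illustration, here is a rough software sketch of that loop with the correction applied. The matrix name and layout are assumptions taken from the description above, and in practice the hardware does this compare per fragment rather than in a CPU loop (more on that further down).

    // Software sketch of the per-pixel test described above (corrected version).
    // 'camToLight' is assumed to be the precomputed row-major matrix
    //   camera inv proj * camera inv view * light view * light proj
    // used with row vectors, as in the post. Depth values are assumed in [0,1].
    #include <vector>

    struct Mat4 { float m[4][4]; };

    void markShadowedPixels(const std::vector<float>& depthC,   // depth seen by the camera
                            const std::vector<float>& depthL,   // depth seen by the light
                            std::vector<unsigned char>& shadow, // output: 1 = in shadow
                            int w, int h, const Mat4& camToLight)
    {
        for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            // reconstruct the camera projection-space position of this pixel
            float xc = 2.0f * (x + 0.5f) / w - 1.0f;
            float yc = 2.0f * (y + 0.5f) / h - 1.0f;
            float zc = 2.0f * depthC[y * w + x] - 1.0f;

            // row vector * matrix, then perspective divide -> light projection space
            const float (*m)[4] = camToLight.m;
            float xp = xc*m[0][0] + yc*m[1][0] + zc*m[2][0] + m[3][0];
            float yp = xc*m[0][1] + yc*m[1][1] + zc*m[2][1] + m[3][1];
            float zp = xc*m[0][2] + yc*m[1][2] + zc*m[2][2] + m[3][2];
            float wp = xc*m[0][3] + yc*m[1][3] + zc*m[2][3] + m[3][3];
            if (wp <= 0.0f) continue;           // behind the light
            xp /= wp; yp /= wp; zp /= wp;

            // look up the depth the light stored at (Xp, Yp)
            int lx = (int)((xp * 0.5f + 0.5f) * w);
            int ly = (int)((yp * 0.5f + 0.5f) * h);
            if (lx < 0 || lx >= w || ly < 0 || ly >= h)
                continue;                       // outside the light's view: leave lit

            // corrected test: farther from the light than the stored depth
            // means the pixel is hidden from the light, i.e. in shadow
            if (zp > 2.0f * depthL[ly * w + lx] - 1.0f)
                shadow[y * w + x] = 1;
        }
    }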

You do step 4 how? By glTexEnvi? Register combiners? Actually looping through pixels?

You keep referencing image space; does that mean this is all done by applying an image to your final render, without scene projection? Is this how self-shadowing comes into play?


~Main

==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

Here's the general idea:

Shadow mapping treats the light source as a single light-emitting point (this is not correct, but works pretty well). Thus determining shadows is simple: if a point P is visible from the light L, it is NOT in shadow; otherwise it is. Essentially, the light illuminates all points it can see. To perform standard shadowmapping (not the item buffer method, more on that later), you first render the scene from the point of view of the light, writing only to a depth buffer texture. This is your shadow map. Then you render the scene normally and generate the depth to the light somehow (e.g. using texgen). Then you set up the combiners/texture environment so that if this generated depth is larger, you get the shadowed lighting equation, and if it's less than or equal to the depth in the shadow map, you get the full effect of the light. Conceptually, if a point is farther away from the light than the closest visible point in the shadow map, it must be BEHIND that point and therefore in shadow.

Item buffers are a different way of doing things. There you render objects (or primitives, if you want self-shadowing) into the shadow map and set up the compare so that a difference in colour triggers the shadow. Was that clearer?
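
To make the contrast concrete, here is a hypothetical sketch of the item-buffer compare: each object (or each primitive, if you want self-shadowing) is rendered into the light's buffer as a flat, unique ID instead of a depth, and the test triggers when the ID the light sees differs from the one being shaded. All names here are made up for illustration.

    // Item-buffer style compare (sketch). A fragment is shadowed when the ID
    // the light sees at its projected position is not the ID of the surface
    // currently being shaded.
    bool itemBufferShadowed(const unsigned short* lightItemBuffer, // IDs seen by the light
                            int lightW, int lightH,
                            int lx, int ly,            // fragment projected into the light's view
                            unsigned short currentId)  // ID of the surface being shaded
    {
        if (lx < 0 || lx >= lightW || ly < 0 || ly >= lightH)
            return false;                      // outside the light's view: treat as lit

        // Some other object is closer to the light here, so it occludes us.
        // Note: with one ID per object, an object can never shadow itself,
        // which is exactly the self-shadowing problem discussed in this thread.
        return lightItemBuffer[ly * lightW + lx] != currentId;
    }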

Right, so the comparison of the depth maps from the light and the camera generates our shadow map (that is, saying that if a point is not seen by the light, it's shadowed); then, via texgen/projected textures, we place that shadow map into the scene.


Once again: how is this projection process handled? Are we actually projecting onto the scene? I think this is my biggest lack of understanding. Are we actually doing pixel equations on the final rendered scene (i.e. plotting pixels once the scene itself has been rendered), or are we projecting a shadow map?

Clearer, yes, but not for the concepts I'm having trouble with.

~Main

==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

First of all, "the shadow map" usually refers to the depth texture rendered from the POV of the light.

Now, shadow mapping doesn't require you to render the scene in a second pass; you can do it that way, but it's not required. Imagine you have your texenv set up so that the colour OpenGL gets from lighting is multiplied with the texture colour. Really simple stuff. This is running on hardware with support for shadow mapping (i.e. ARB_shadow). You have your shadow map (see above) ready to go. You then set up texgen so that the R texture coordinate corresponds to depth from the light source. The hardware then compares this R coordinate with the depth in the depth texture and outputs 0 or 1. This is multiplied with your colour from the lighting stage, which gives you either fully lit pixels or completely black ones. If you have more complex lighting computations, this will need to be broken into multiple passes, but it works basically the same way. You get a value that is 1 if the pixel isn't in shadow and 0 if it is. Then you use this in the combiners, or with blending, to get the result you want.
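
In case a concrete snippet helps, this is roughly what that fixed-function setup looks like with ARB_depth_texture / ARB_shadow. It assumes the extension headers are available and that shadowMapTex is the depth texture rendered from the light (the name is a placeholder); the texture matrix setup itself is shown a bit further down.

    // Bind the light's depth texture and enable the hardware compare:
    glBindTexture(GL_TEXTURE_2D, shadowMapTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                    GL_COMPARE_R_TO_TEXTURE_ARB);   // compare R coord against stored depth
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY);

    // Generate S/T/R/Q from the eye-space position; the texture matrix then
    // maps that into the light's view, so R becomes "depth as seen from the light".
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_GEN_Q);

    // The 0/1 compare result acts as the texture colour and simply modulates
    // the lit colour: full lighting where the test passes, black where it fails.
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);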


1. Self-shadowing is automatic if you use a depthmap instead of an item buffer. When an object is rendered into the depthmap (from the light's point of view), its first visible layer (the faces pointing towards the light) will be nearest in the depthbuffer. Later on, when comparing, this will make sure that the object shadows itself. Note that this can lead to artifacts, because you essentially compare the depths through two different projections. Mathematically, they should be the same, but floating point precision problems will make them slightly different. The fast way to alleviate the problem is using a depth bias, but this method does not guarantee a good result. A better approach is the 'second depth shadowmap' technique.
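
One common way to apply that depth bias (just one option, and the offset values below are typical starting points rather than anything from this thread) is polygon offset while filling the depthmap from the light:

    // Push depths slightly away from the light while rendering the shadow map,
    // so the later compare doesn't falsely shadow a surface with itself.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.1f, 4.0f);     // factor, units: tune per scene

    renderSceneFromLight();          // placeholder: fills the light's depth texture

    glDisable(GL_POLYGON_OFFSET_FILL);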

2 + 3. Normally the compare is done while rendering the final scene. You can visualize the whole thing like this: first, a view from the light is computed. This is simply a zbuffer of the scene, as seen from the lightsource. Every pixel on this map is visible from the light, i.e. it is not shadowed. Every non-visible pixel is in shadow.

OK, so you have that depthbuffer from the light's viewpoint. You save it somewhere in video memory. Now, you start rendering your actual 3D scene, from the camera's point of view, with textures, shaders, and all other things you need. The difference to traditional rendering is subtle: the 3D hardware does additional processing on every single fragment that is rendered. This additional processing needs to answer a simple question: would this fragment be visible from the light's viewpoint?

This is how it's done: imagine a scene with two cameras, at different positions. Each camera has its projection matrix and its own depthbuffer. One camera is your viewpoint, the other one is the light. OpenGL simply renders each fragment on both cameras at the same time. The first camera (your eyepoint) runs through the projection matrix, the depth compare function, etc., as usual. The second camera (the light) runs through the texture matrix (which takes the role of the second camera's projection matrix), and through its own depth compare (which is given by the ARB_shadow extension).

So basically, shadowmapping is just rendering a scene from two cameras at the same time, and using the results of the light camera to determine the shadowing state of a fragment.
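
In practice, the "projection matrix" of that second camera is usually loaded into the texture matrix of the shadow map's texture unit. A sketch of that composition follows; the three input matrices are assumed to be available as column-major float[16] arrays.

    // The texture matrix maps eye-space positions (from GL_EYE_LINEAR texgen)
    // into the light's clip space, then into the [0,1] texture/depth range.
    void setupShadowTextureMatrix(const float lightProj[16],
                                  const float lightView[16],
                                  const float invEyeView[16])
    {
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.5f, 0.5f);   // bias: [-1,1] clip space -> [0,1]
        glScalef(0.5f, 0.5f, 0.5f);
        glMultMatrixf(lightProj);         // the light camera's projection
        glMultMatrixf(lightView);         // world -> light space
        glMultMatrixf(invEyeView);        // eye space (from texgen) -> world
        glMatrixMode(GL_MODELVIEW);
    }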

4. Different solutions exist. The easiest is to use several shadow depthmaps, one for each light. Then you use multiple combiners to compare each fragment against all depthmaps. If you take the 'scene is rendered from two cameras' idea from above, then this is simply an extension of it: now the scene is rendered from 3, 4 or more cameras at the same time. For each fragment, you will get the exact shadow information for each lightsource. It's up to you (better: up to your pixelshader) to make something nice from this information.
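
As a rough illustration of that multi-light idea (two units shown; the texture ids are placeholders): each light's depthmap gets its own texture unit, with its own compare and its own texture matrix, and the combiners merge the per-light results.

    // One texture unit per light's depthmap; each unit runs its own compare.
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, shadowMapLight0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                    GL_COMPARE_R_TO_TEXTURE_ARB);
    // ... texgen / texture matrix for light 0, as shown earlier ...

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, shadowMapLight1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                    GL_COMPARE_R_TO_TEXTURE_ARB);
    // ... texgen / texture matrix for light 1 ...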

The perspective shadow map technique is something different. It applies an additional projection to the already projected depthmap. This will increase the depthmap resolution near the viewer. As a result, the shadows get a lot sharper and better looking, even with low depthmap resolution.

/ Yann

