So, you want to add shadows into your games. Why? Maybe it’s because it looks cool, maybe you want to brag about it to your friends, or maybe you think it will add to the game atmosphere. Whatever your reason, you’ve definitely come to the right place!
Ok, ready to get started? Well, we're going to start with the very basics here. What is a shadow? Webster defines a shadow as "an area that is not or is only partially irradiated or illuminated due to blockage of light by an opaque object," or "the rough image cast by an object blocking rays of light." For our purposes, a shadow is a dark area caused by light being blocked by an object. So why exactly do we need to know this? Simple: if we know what a shadow is, it will be that much easier to create them in our games.
The most obvious solution is presented by the definition of a shadow. We could just cast a bazillion rays from the light source and determine lighting that way. While this may work, it’s computationally expensive (i.e. slow), and it isn’t very easy to implement. So, are you getting sick of definitions yet? Well, you’re in luck; you now know all you need to know to get started.
Ok, so I lied: there is one more thing you have to know before we get started. These algorithms are ordered by complexity; the easiest algorithms come first, and the more complex ones are at the end. Ok, here we go!

Fake Shadows
The simplest way to create a shadow is to add a polygon to the bottom of the object. Since this is best understood visually, you should take a look at figure 2.1 if you don’t understand the concept.
Example of a fake shadow
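To make the idea concrete, here is a minimal sketch of the extra polygon: a quad lying on the ground plane directly beneath the object. The function name and parameters are illustrative, not from any particular engine.

```python
def shadow_quad(obj_x, obj_z, size):
    """Return the four corners of a fake-shadow quad lying on the
    ground plane (y = 0), centered under the object's position."""
    half = size / 2.0
    return [
        (obj_x - half, 0.0, obj_z - half),
        (obj_x + half, 0.0, obj_z - half),
        (obj_x + half, 0.0, obj_z + half),
        (obj_x - half, 0.0, obj_z + half),
    ]

# An object hovering over (3, -2) casts a 2x2 dark quad beneath it.
quad = shadow_quad(obj_x=3.0, obj_z=-2.0, size=2.0)
```

You would draw this quad in a dark color before (or blended over) the floor; note that it never moves off y = 0, which is exactly the flat-ground restriction discussed below.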
This is a very simple way of doing things. Unfortunately, it's very limited in its applications. The ground must be flat, the object has to remain a fixed height above the ground, the object can only rotate around the y-axis, it doesn't take into account the direction the light is coming from, and it doesn't shadow anything other than the floor. But we're only getting started, and the next three algorithms are much better, if a little slower and more complex.

Vertex Projection
This method is still very simple, but it is considerably better than the last one. In this method, you project each polygon onto the ground. If there is any confusion, look at figure 3.1.
Example of vertex projection
The calculations for this method are incredibly simple, as you can see in listing 3.1:

void shadow_poly(Point3D *p, Point3D *s, Point3D l, int num_verts)
{
    for (int i = 0; i < num_verts; i++) {
        float t = l.y / (l.y - p[i].y); /* scale along the light ray until y = 0 */
        s[i].x = l.x + t * (p[i].x - l.x);
        s[i].y = 0;
        s[i].z = l.z + t * (p[i].z - l.z);
    }
}
(Where s is the shadow vertex, p is the object vertex, and l is the point the light originates from.)
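If it helps, the same projection can be written as a runnable sketch. Each shadow vertex is where the line from the light l through the vertex p hits the ground plane y = 0 (the names here are illustrative):

```python
def project_vertex(p, l):
    """Project vertex p onto the ground plane y = 0 along the
    ray that starts at the light l and passes through p."""
    t = l[1] / (l[1] - p[1])  # how far along the ray until y reaches 0
    return (l[0] + t * (p[0] - l[0]),
            0.0,
            l[2] + t * (p[2] - l[2]))

# Light straight overhead at (0, 10, 0); a vertex halfway down at
# (1, 5, 1) lands twice as far out on the ground, at (2, 0, 2).
s = project_vertex((1.0, 5.0, 1.0), (0.0, 10.0, 0.0))
```

Note that the formula blows up when a vertex is level with or above the light (l.y - p.y approaches zero); a real implementation would have to guard against that case.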
Although this is much better, this algorithm is still pretty simple and restricted in its use. Once again, it only works if the ground is flat, and it still doesn't shadow other objects. Don't worry though; there are still two more algorithms!

Shadow Z-Buffers
Ok, so we’re ready to get down to the more complex algorithms. This method is not the most complex, but it is probably the most efficient of those discussed here.
Basically, what you do is create two Z-Buffers, one from the viewpoint of the camera, and one from the viewpoint of the light. When it’s time to render, you have to go through several steps for each point. For all of the visible points, you need to map from camera space into light space. Project these light coordinates into 2D, and use x’ and y’ to index into the light-viewpoint Z-Buffer. If the light-space Z coordinate is greater than the Z-Buffer value, then the point is shadowed. See figures 4.1 and 4.2 to clear up any confusion.
Simple scene as viewed from the camera’s point of view
The same scene as viewed from the light’s point of view (notice that there are no shadows visible)
The light-viewpoint Z-Buffer would be created before you even began to render the scene from the camera viewpoint. This step is done just as if you were rendering from the camera viewpoint, the only difference being the position from which you view the scene. After this Z-Buffer is created, you go through three separate steps for each point in the camera Z-Buffer.

The first step you go through for each point is probably the hardest, and even it is not very hard. You just need to translate and rotate the point until it is in the correct position. This sounds complex, but it is actually very simple: you translate and rotate everything by the difference between the camera and the light. The pseudo-code in listing 4.1 is easier to understand, so just take a look at it to clear up any confusion.
/* translate and rotate the point from camera space into light space */
point.x += (light.x - camera.x);
point.y += (light.y - camera.y);
point.z += (light.z - camera.z);
point.rot_x += (light.rot_x - camera.rot_x);
point.rot_y += (light.rot_y - camera.rot_y);
point.rot_z += (light.rot_z - camera.rot_z);

/* project the point into 2D, then rescale from screen resolution
   to the light Z-Buffer's resolution */
point.screen_x *= LIGHT_X / SCREEN_X;
point.screen_y *= LIGHT_Y / SCREEN_Y;

/* the point is in shadow if a nearer surface blocks the light */
if (point.new_z > light_z_buffer[point.screen_x][point.screen_y])
    shadow the point;
The second step is simple: you just project from 3D down into 2D, just like you normally would. The next thing you do is change from the aspect ratio of the screen to the aspect ratio of the light Z-Buffer (probably 256x256, or some other square viewport with power-of-two dimensions).
The last thing you need to do is compare the mapped Z value to the value in the light Z-Buffer at the projected coordinates. That's it; you have just produced accurate shadows that, even if a bit blocky, look great and take very little effort.
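The three steps above boil down to a one-line comparison once the point is in light space. Here light_z_buffer is a plain 2D list standing in for the light-viewpoint Z-Buffer, and the point is assumed to be already projected to integer coordinates (sx, sy) with light-space depth `depth`. The small bias is my addition, a common trick to avoid surfaces shadowing themselves due to limited depth precision:

```python
def in_shadow(light_z_buffer, sx, sy, depth, bias=0.01):
    """True if the surface at light-space depth `depth` lies behind
    the nearest occluder recorded at (sx, sy) in the light's Z-Buffer."""
    return depth > light_z_buffer[sy][sx] + bias

# A tiny 2x2 "Z-Buffer": the nearest occluder seen by the light at
# pixel (0, 0) is at depth 5; the other pixels see nothing until depth 9.
zbuf = [[5.0, 9.0],
        [9.0, 9.0]]
in_shadow(zbuf, 0, 0, 7.0)   # behind the occluder, so shadowed
in_shadow(zbuf, 1, 0, 7.0)   # nothing nearer, so lit
```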
Although it is the most efficient algorithm, it still has several downsides. The first is that shadows can appear blocky, depending on the resolution of the light Z-Buffer. Another obvious downside is that the scene takes longer to render, because you draw it twice. The worst downside, however, is that the light has to be directional. This means that you can't have point light sources, unless you want to render the scene six times from the light viewpoint (once for each direction: up, down, left, right, forward, and backward). As you can imagine, that would make your engine very slow. Ok, so even this algorithm has some limitations. However, I'm sure it could be optimized so that you could have point light sources.

Shadow Volumes
This method is definitely the most complex of all the algorithms discussed here. It produces very nice results, but it is incredibly slow for high-detail scenes.
This method requires a little bit of explanation. First of all, what the heck is a shadow volume? A shadow volume is an infinite volume of space that light cannot reach because it is blocked by a polygon.
Making this volume is probably a lot simpler than you might think. All you have to do is make a vector from the light source to each of the vertices of a polygon (the code for this can be found in listing 5.1). After you have these vectors, extend them away from the light and, voila, they combine to form an infinite volume. Clip this infinite volume to the viewing volume, and you have your usable shadow volume. All you need to do now is check to see which polygons are within these volumes, and darken accordingly. As usual, look at figure 5.1 to clear up any confusion.

Although checking entire polygons against the volume may be the simplest solution, it may not work for all situations (e.g. large polygons that are only partially inside the volume would not be shadowed correctly, leading to bad shadows and a messy-looking game engine). However, there is hope: you can clip all of the polygons that pass the first test against the shadow volume. This test requires only a little processor power per polygon, but in scenes with high polygon counts it could seriously slow your game engine. This usually isn't an issue, however, because if you have high polygon counts, then your polygons are probably fairly small. The only exception is scenes with tons of low-detail models.
Ok, so you don't like clipping. There is another alternative that does not involve any clipping or checking against volumes. This alternative even has a name; it's called a stencil buffer. To use a stencil buffer, you check to see how many shadow-volume polygons are crossed by a ray from the viewpoint to the point you are checking. If the number of crossed polygons is odd, then the point is in shadow. A problem arises, however, when the viewpoint is itself in a shadow: shadowed objects would be lit as if they were not in shadow, and non-shadowed polygons would be shadowed. While this may produce a neat-looking effect, it is not what we want. All we need to do to alleviate this problem is to cap the shadow volume. To do this, we just check to see if the shadow volume passes through the near clipping plane. If it does, we create a polygon that covers the portion of the near clipping plane that is in shadow.
Example of a shadow volume (anything inside of it is shadowed)
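The odd/even rule can be sketched directly: cast a segment from the eye to the point being shaded, count how many shadow-volume triangles it crosses, and call the point shadowed when the count is odd. The intersection routine below is the standard Möller-Trumbore ray-triangle test; everything here is an illustrative sketch, not how a real stencil buffer is driven by the hardware.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def hits_triangle(orig, dest, tri, eps=1e-9):
    """Moller-Trumbore: does the segment orig->dest cross triangle tri?"""
    d = sub(dest, orig)
    e1, e2 = sub(tri[1], tri[0]), sub(tri[2], tri[0])
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:               # segment parallel to the triangle's plane
        return False
    f = 1.0 / a
    s = sub(orig, tri[0])
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * dot(e2, q)             # parameter along the segment
    return eps < t < 1.0

def in_shadow(eye, point, shadow_tris):
    """Odd number of shadow-volume polygons crossed -> point is in shadow."""
    crossings = sum(hits_triangle(eye, point, tri) for tri in shadow_tris)
    return crossings % 2 == 1

# Two parallel triangles between the eye and the point: the segment
# enters and exits the volume, so the point is lit; with only the
# front triangle, the count is odd and the point is shadowed.
tri1 = [(-1, -1, 1), (2, -1, 1), (0, 2, 1)]
tri2 = [(-1, -1, 2), (2, -1, 2), (0, 2, 2)]
eye, pt = (0, 0, 0), (0, 0, 5)
```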
void create_shadow_volume(Point3D *p, Point3D l, Vector3D *ray, int num_points)
{
    for (int i = 0; i < num_points; i++) {
        ray[i].x = p[i].x - l.x;
        ray[i].y = p[i].y - l.y;
        ray[i].z = p[i].z - l.z;
    }
}
Even this algorithm has its downsides. The first is that it slows down significantly with complex scenes. It is also incredibly difficult to add another light to the scene. If you have a scene with only one light and fairly simple objects, and you can't stand having a flat floor, this algorithm is probably the best choice.

Combining Algorithms
Although this article presents these algorithms separately, I thought it would be nice to let everyone know that many of them can be combined to form even better ways of doing things. I don't mean just supporting several of them and using different algorithms for different situations. What I'm talking about is combining two or more algorithms into a single algorithm. For example, the Shadow Volume algorithm and the Shadow Z-Buffer algorithm can be combined into a single algorithm. Although this has been done already, there are more combinations possible. Another possibility would be to use the vertex projection technique on uneven surfaces; it would just take a little effort and ingenuity. Seeing as how it would be difficult to imagine combining these algorithms without an example, I will present you with the combination mentioned above; the new algorithm is called Shadow Volume Reconstruction.
The basic algorithm is that you create a Z-Buffer from the point of view of the light (just like in the Shadow Z-Buffer algorithm). After that, you run the light Z-Buffer you just produced through an edge filter. The edge filter's output is used to produce a silhouette polygon, and the silhouette polygon is then used to create a shadow volume. This may seem a little excessive, but the results do look smoother than the usual Shadow Z-Buffer algorithm, and it is easier and faster than the Shadow Volume algorithm.
Ok, now that you know the basics of the method, it's time to go into a little depth. There are seven general steps to the Shadow Volume Reconstruction method. The first step is to render the shadow Z-Buffer. The next step is to render the scene from the standard camera view. The third step is to reset all of the buffers (the stencil buffer, the depth buffer, etc.). After resetting the buffers, you need to create the silhouette image. This can be done with a simple edge detection algorithm (see listing 6.1). Using the silhouette image you just created, you can now reconstruct the shadow volume. All that is left to do after creating the shadow volume is to cap it and then darken the shadowed pixels.
void edge_detect(int u[][SCREEN_HEIGHT], int v[][SCREEN_HEIGHT])
{
    int x, y;
    for (x = 0; x < SCREEN_WIDTH - 1; x++)
        for (y = 0; y < SCREEN_HEIGHT - 1; y++) {
            /* flag an edge wherever the depth jumps by more than threshold q */
            u[x][y] = (abs(z[x][y] - z[x + 1][y]) > q);
            v[x][y] = (abs(z[x][y] - z[x][y + 1]) > q);
        }
}
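A runnable version of the filter in listing 6.1 might look like this: z is the light Z-Buffer as a 2D list, and the output flags every pixel whose depth differs from its right or lower neighbor by more than a threshold q (the threshold value here is arbitrary):

```python
def edge_detect(z, q):
    """Return a 2D map flagging depth discontinuities in the shadow map z."""
    h, w = len(z), len(z[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(z[y][x] - z[y][x + 1])   # horizontal depth step
            dy = abs(z[y][x] - z[y + 1][x])   # vertical depth step
            edges[y][x] = dx > q or dy > q
    return edges

# A tiny shadow map: an occluder at depth 1 fills the left half,
# the background sits at depth 9. The edge appears at the seam.
zmap = [[1, 1, 9, 9],
        [1, 1, 9, 9],
        [1, 1, 9, 9]]
silhouette = edge_detect(zmap, q=0.5)
```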
The first step is simple; you've done it before. You just render from the light's point of view, as you did in the Shadow Z-Buffer algorithm. The second step also follows the Shadow Z-Buffer algorithm: you just create the Z-Buffer as you normally would when rendering any scene. The next step simply makes sure the data is correct. You reset the stencil buffer (this is used to determine whether an object is in shadow or not) and disable writing to the screen and Z-Buffers. The fourth step is where we start to deviate from the Shadow Z-Buffer method. What we need to do now is run the shadow map through an edge detection filter. This is fairly simple and doesn't require an overly complex filter (an example of a suitable filter is presented in listing 6.1). Now is when things start to get a little complicated. What we need to do next is create a shadow volume based on the silhouette image. This can be done in two different ways. The first approach is to generate the parts of a mesh that touch at least one silhouette edge (this sounds confusing, so take a look at figure 6.1). The second method is similar, only it uses a lookup table to determine what the mesh should look like. The lookup table would simply contain the 16 possible combinations of four adjacent pixels. This is also a confusing concept, so look at figure 6.2 to better understand what is going on.
Generating the mesh – Approach I
Generating the mesh – Approach II
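The lookup-table approach is easy to sketch in part: each 2x2 block of silhouette pixels packs into a 4-bit case index (0 through 15), and a prebuilt table keyed by that index says which mesh fragment to emit. The table's contents depend on your mesh representation, so only the indexing step is shown here; the names are illustrative.

```python
def cell_case(sil, x, y):
    """Pack the 2x2 block of silhouette pixels at (x, y) into a
    4-bit index: bit 0 = top-left, bit 1 = top-right,
    bit 2 = bottom-left, bit 3 = bottom-right."""
    return (int(sil[y][x])
            | int(sil[y][x + 1]) << 1
            | int(sil[y + 1][x]) << 2
            | int(sil[y + 1][x + 1]) << 3)

# Right column of the block is on the silhouette: bits 1 and 3 set.
sil = [[False, True],
       [False, True]]
case = cell_case(sil, 0, 0)
```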
Ok, so now that you have seen the algorithm, you might wonder what makes it better than any of the others. Well, this algorithm is a compromise: it doesn't have the speed of the Shadow Z-Buffer method, but it looks smoother; it isn't as smooth as the Shadow Volume method, but it is faster. It is an in-between, and it was intended to show that Shadow Volumes and Shadow Z-Buffers are similar and have different strengths that can be combined in a hybrid.

Conclusion
So, now that you have seen several different algorithms, and even a hybrid, you are ready to create your own shadows. These algorithms are not the only choices; the possibilities are endless. New algorithms could be created, or older algorithms could be combined. The sky is the limit.
Experiment! Only if you dare to be different and create your own methods will you make progress. Even if your method is not as fast or accurate, it is more than likely that it can be used as a stepping stone, for yourself and others, to make real-time shadows more accurate. So, don’t just sit there! Explore, create, expand, EXPERIMENT!
If you are inspired by this article and create a brand new algorithm, or just have an existing one you would like to let me know about, please send me an e-mail to let me know about it. Also, if this article was helpful to you in any way or you have any questions, suggestions, or comments, feel free to e-mail me at firstname.lastname@example.org