Real-Time Radiosity Part II

Published July 05, 2000 by Nathan Vegdahl, posted by Myopic Rhino
[size="5"]Introduction

After writing the previous Real-Time Radiosity article, I received several e-mails expressing confusion about the method. I realized rather quickly that I had spent too much time writing about the history of the method's development, and that I hadn't delved enough into the actual method. Therefore, I have decided to write an article solely for the purpose of explaining the methods of real-time radiosity, and not for any history lessons ;)

Now, I say "methods" (plural) because, since the original article's release, I have also developed two other methods that deal with the radiosity of static objects. When all three methods are combined, they create a global radiosity solution.


[size="5"]Radiosity Mapping

The theory behind radiosity mapping is that you should be able to approximate the radiosity of an entire object by pre-calculating the radiosity for a single point in space and then applying it to every other point on the object. This works because points in space that are close together all have approximately the same lighting. You can very easily see that this is true by merely looking at two points on a real-life object that are about a half inch apart. You will notice that both points have very close to the same lighting.

But what about points that are far apart from each other? Well, then you would have to calculate a different radiosity solution for each of them, because their lighting values are not similar enough. And that is one of the little quirks of this method: it can only be used on small objects, which means that walls, floors, etc. are out of the question. The only things that this particular method will be good for are objects such as people, weapons, red markers (inside joke for those of you who read my last article), etc.

Now that we have all of that sorted out, there are still two rather large questions looming over our heads: how do we calculate and store the radiosity of a single point in space, and how do we use that single point's pre-calculated radiosity to approximate the radiosity of the other points on the object?

Well, the first of those questions is rather easy to answer: a cube view. For those of you who do not know what a cube view is, I will explain. If you already know what a cube view is, then you may skip the next paragraph.

A cube view is actually a very simple concept. It is merely six renderings of the scene from a single point in space: one looking up, one looking down, and one each looking left, right, forward, and backward. When all six renderings are put together so that their edges line up, they form a cube, and that cube just happens to be a full 360-degree view of the entire scene from a single point in 3D space!

Each of the views must be rendered with a fov (field of view) of 90 degrees. If the fov is greater than 90, the views will overlap in what they render; if it is less, they will have gaps between them. To get a fov of 90 degrees, use the following formula for the perspective calculation: xo = (x / z) * one_half_screen_res, where xo is the x that is output. Use the same formula for the y value. "one_half_screen_res" is one half of a single dimension of the screen resolution. For instance, if each side of the cube view had a resolution of 256x256, then one_half_screen_res would be 128 (which is 256 / 2).

One other thing you must make certain of is that each side has equal width and height (i.e. 256 by 256 is okay, but 320 by 240 is not). If it does not, then it will not be a cube view, and the images will not "fit" correctly.
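
As a rough illustration, here is a minimal sketch of that projection in C++ (the struct and function names are purely illustrative, not part of any particular engine):

[code]
// Minimal sketch of the 90-degree projection described above. "half_res" is
// one half of the (square) face resolution, e.g. 128 for a 256x256 face.
struct Vec3 { float x, y, z; };

// Projects a camera-space point onto a cube-view face, returning pixel
// coordinates relative to the center of the face. With this formula a point
// at x == z lands exactly on the edge of the image, which is what gives the
// 90-degree field of view.
void project_to_face(const Vec3 &p, float half_res, float &xo, float &yo)
{
    xo = (p.x / p.z) * half_res;
    yo = (p.y / p.z) * half_res;
}
[/code]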

So, the way you store the radiosity information of the single point in space is to render a cube view from that point. Of course, that raises the question of which point in space we should render the cube view from. Well, that is actually quite simple: the center of the object. And to get the center of the object, all you need to do is find the average position of all of its vertices. Depending on the number of vertices in your object, it may be faster to pre-calculate that point or to calculate it on the fly. The difference is that if you pre-calculate it, you have to rotate and translate it along with the rest of the object. Considering how many polygons are typically in 3D models for games, I'd say that more often than not it would be faster to pre-calculate the point and rotate/translate it with the rest of the object.
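
For reference, here is a tiny sketch of that average, reusing the Vec3 struct from the sketch above (the function name is hypothetical):

[code]
#include <vector>

// Center of the object, taken as the average of its vertex positions.
Vec3 object_center(const std::vector<Vec3> &vertices)
{
    Vec3 c = {0.0f, 0.0f, 0.0f};
    if (vertices.empty())
        return c;
    for (const Vec3 &v : vertices) {
        c.x += v.x;
        c.y += v.y;
        c.z += v.z;
    }
    const float n = static_cast<float>(vertices.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;
}
[/code]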

One other thing that I'd hope is a given (but people often surprise me) is that you should ignore the object you're pre-calculating the radiosity for when you render the cube view. Otherwise, you will just get the inside of your object when you render the cube view.

So, now you've got this cube view for your object. But what do you do with it? Well, this is the hard part.

The basic idea behind it is that we want to treat every pixel of the cube view as if it were a light source, because the entire point of radiosity is that objects contribute light to other objects. As we know, the light contribution of a light source fades with both distance and angle. So to deal with the first one, we keep the z-values of each pixel and lower its contribution to the light based on how far away it is, right? Wrong. The fade with distance is already taken into account, because the further away an object is from the point in space, the smaller it is on the cube view (because of perspective), which means that it has less global contribution. So, guess what! We don't have to do anything more, as far as fading with distance goes.

But there is still the problem of the angle. In theory, one thing you could do would be to draw a vector to each pixel of the cube view from the center of the cube (since that's where the point in question is assumed to be). You could then use the dot product between those vectors and the surface normal to find the contribution of each pixel. However, in practice that's going to be much too slow. Even if you made all of the vectors in advance, you would still have the problem of the dot product (in fact, the dot product is the main problem).

Well, since dealing in 3D space is too slow, why don't we deal in 2D space? The only problem is: how? How could we calculate the fade from angle in 2D space? It just so happens that we have the perfect tools for this: skipping pixels, and averaging.

Now, I'm certain that all of you are thinking "He's insane," or at least, "What is he talking about?"

First, I will tell you what to do, and then I will tell you why it works.

First, what we must do is create a polar pattern where, for every distance from the center, there are four pixels selected. Of course, when I say a "polar" pattern, I don't mean that it should be stored in polar coordinates; you should still store it as pixels. I merely mean that you should make the pattern by imagining circles radiating from the center of the image.

A very basic pattern of the type I am talking about would be the following:

Image35.gif


Note that in this example, black pixels are the ones that are to be sampled, and white ones are the pixels to be ignored.

As you can see, there are four sampled pixels for each distance away from the center of the image. However, it is very localized sampling. Or rather, it only samples from the four offshoots from the center, which means that objects that lie only within the quadrants between them will not be sampled (which is not good). Therefore, we must disperse the points more evenly, so that they cover more area while still maintaining the four-pixels-per-distance rule. An example of a more evenly dispersed pattern would be this:

Image36.gif


Obviously, that is not the only dispersed pattern you could use. In fact, it's probably not even the best one, seeing as it has a consistent shape to it. It would probably be best to have the program randomly generate a pattern at startup, based on the rules I described above, and then use that pattern for the rest of the program's running time; that would create a much more well-dispersed sampling pattern.
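
For what it's worth, here is a hedged sketch of such a pattern generator (the Offset type and function name are illustrative; any dispersion scheme that keeps four samples per ring would do just as well):

[code]
#include <cmath>
#include <cstdlib>
#include <vector>

// A sample offset, measured in pixels from the center of the pattern image.
struct Offset { int dx, dy; };

// Builds a randomly dispersed pattern with four samples for every ring of
// radius 1..max_radius, as described above.
std::vector<Offset> build_sample_pattern(int max_radius)
{
    std::vector<Offset> pattern;
    for (int r = 1; r <= max_radius; ++r) {
        for (int i = 0; i < 4; ++i) {
            // A random angle for each sample keeps the samples spread across
            // all four quadrants instead of clumping along the axes.
            const float angle = (std::rand() / (float)RAND_MAX) * 6.2831853f;
            Offset o;
            o.dx = (int)std::lround(std::cos(angle) * r);
            o.dy = (int)std::lround(std::sin(angle) * r);
            pattern.push_back(o);
        }
    }
    return pattern;
}
[/code]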

Of course, this entire pattern thing brings up a rather strange question: what do we do with the pattern? Well, what we do is find the pixel of the cube view that the surface normal of the given vertex intersects. We then treat that as the center of the place where the pattern is to be mapped. Then, wherever there are black pixels mapped onto the cube view, we sample the underlying pixel of the cube view. Once we have all of those pixels, we average the colors of all of the sampled pixels together. And, guess what? We now have a base radiosity lighting value!

But how do we apply the pattern to the cube view to determine the pixels to sample? We can't just map it on, because if we do, it will have missing sections (from the cube view's corners), right? Well, yes, it would have missing sections at the corners of the cube, but that doesn't mean that we can't use it! Think about it: the ratio of four pixels per circle will only be minimally changed by the corners, because we generated a random pattern. So we can, in fact, map it onto the cube view and simply ignore the pixels of the pattern that don't actually end up on part of the cube view. Of course, you've got to figure out which way the pattern is to be mapped across the edges, which is actually quite simple: take whichever face the center of the pattern is on (i.e. the one the vertex normal intersects), and take the four sides which share edges with it. You then "set it up" in the following way:

Image37.gif


Of course, in practice you're not going to actually set them up like that on a separate image; you're merely going to treat the separate sides of the cube view as if they lined up like that.

But how does the pattern sampling approximate the fade-off with angle? Well, it's actually quite simple: if each circle has only four pixels, then all circles have equal contribution. And since the outer circles are bigger than the ones nearer to the center, they have a much smaller contribution-to-size ratio!

Also, the pattern should have twice the dimensions of any given side of the cube view. In other words, if you had a cube view where each side was a 16x16 image, then the pattern would need to be 32x32. The reason for this is that the pattern will then sample from approximately half of the cube, which amounts to the same thing as 180 degrees. That means that your surface won't be getting light from things that are supposed to be behind it! And if you were to use a smaller resolution, it wouldn't cover the full 180 degrees, so your sampling would be too limited, and it wouldn't look right.
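
Putting the last few paragraphs together, here is a rough sketch of the sampling step. It reuses the Offset type from the pattern sketch above; fetch_cross_pixel is a hypothetical helper that looks up a pixel of the unfolded cube-view cross and returns false when the offset falls off the cross (the corner gaps mentioned above), in which case the sample is simply skipped:

[code]
#include <vector>

struct Color { float r, g, b; };

// Averages the cube-view pixels selected by the pattern, centered on the
// pixel (cx, cy) that the vertex normal intersects, to produce the base
// radiosity lighting value for that vertex.
Color radiosity_for_vertex(int cx, int cy,
                           const std::vector<Offset> &pattern,
                           bool (*fetch_cross_pixel)(int x, int y, Color &out))
{
    Color sum = {0.0f, 0.0f, 0.0f};
    int count = 0;
    for (const Offset &o : pattern) {
        Color c;
        // Pattern pixels that land off the cross are ignored, as described above.
        if (fetch_cross_pixel(cx + o.dx, cy + o.dy, c)) {
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
            ++count;
        }
    }
    if (count > 0) {
        sum.r /= count; sum.g /= count; sum.b /= count;
    }
    return sum;
}
[/code]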

The final step of radiosity mapping is applying it to the 3D model. There are two different ways to go about doing this. The first is to find the radiosity base lighting value for each vertex in the model and then use Gouraud shading; the second is to use the vertex surface normal intersections with the cube as UV coordinates and map the entire cube view onto the model. I would suggest the Gouraud shading method, because doing true full texture mapping would be extremely time consuming, since you'd have to do the pattern filter for every single pixel on the screen that the object takes up.

Anyway, when you have the radiosity lighting value for a vertex, you then add that lighting value (add, not average!) to the light values obtained from the normal light-source lighting algorithms.
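
In code, that combination might look something like this (the clamp to 1.0 is an assumption rather than part of the method, but some saturation step is needed once the sum can exceed the maximum light value):

[code]
#include <algorithm>

// Reusing the Color struct from the sampling sketch above. Combines the
// direct (light-source) lighting with the radiosity value for a vertex by
// adding them, then clamping each channel.
Color combine_lighting(const Color &direct, const Color &radiosity)
{
    Color out;
    out.r = std::min(direct.r + radiosity.r, 1.0f);
    out.g = std::min(direct.g + radiosity.g, 1.0f);
    out.b = std::min(direct.b + radiosity.b, 1.0f);
    return out;
}
[/code]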

A few last words of recommendation for if/when anyone implements this. First, you do not need a high-resolution cube view. In fact, it would be just fine for each side of the cube view to be 16x16, or even 8x8. Think about it: if each side is 16x16, then you're going to have 256 pixels for each side, which is plenty! Of course, with 8x8 you're going to have a bit of degradation in the radiosity quality (only 64 pixels a side), but it probably won't make that much of a difference.

Second, you might want to check whether there would be any significant change in the object's radiosity before you decide to update it (i.e. selective updating of the cube views). That would be done by checking whether the object itself has moved any great deal, and whether any other dynamic objects or light sources in the vicinity have moved significantly.
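
A simple form of that test might look like this (the single movement threshold and the notion of "in the vicinity" are illustrative assumptions):

[code]
// Returns true when the cube view should be re-rendered: either the object
// itself has moved far enough, or some nearby dynamic object or light has.
bool cube_view_needs_update(float object_moved_distance,
                            float largest_nearby_moved_distance,
                            float movement_threshold)
{
    return object_moved_distance         > movement_threshold ||
           largest_nearby_moved_distance > movement_threshold;
}
[/code]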


[size="5"]Dynamic Light Source Radiosity

So, now we have our happy little dynamic objects running around with radiosity, and everything is great, right? Well, no. There is still another problem. That problem is dynamic light sources.

In most games, the radiosity is pre-calculated for the level (i.e. walls, floors, ceilings, staircases, anything that never moves), and that works fine as long as there are no moving light sources. But guess what: there are moving light sources! Whenever you fire a rocket, you are creating a moving light source. And whenever you shoot your gun, it creates a flash of light coming from where you are. That messes up the pre-calculated radiosity, because for it to be accurate, you have to recalculate the radiosity in one way or another.

So, now the question is: how do we re-calculate the radiosity fast enough to do in real time? And this second method of mine is the answer.

The method is actually quite simple, really. All you have to do is pre-store contribution levels (which I'll explain in a second), and use thresholds to determine whether the radiosity has to be updated.

So, let's consider a typical scene... perhaps a room. And let's say that each wall is a different color (red, green, blue, and yellow). So, with traditional methods, the radiosity would be pre-calculated, with the walls subdivided into multiple squares, each of which has a separate lighting value. Now, what happens when a dynamic light source comes waltzing in? Well, you could recalculate all of the radiosity of the scene, re-subdividing everything, and re-calculating all of the distances and such. But that would take forever!

Instead, how about we keep the subdivision information, and only recalculate the distances between them? Well, those distance calculations would be a pain also, so why not maintain all of the distances in an information tree? Well, it would still take a while to calculate how much light was radiated from each of the squares to each other one. So, why don't we just maintain an information tree of how much light every single square contributes to each other one? Then all we would have to do would be to find the regular lighting value of each square (i.e. the one before radiosity), and then pump those lighting values through a weighting algorithm, using the pre-stored contribution levels as the weights for each respective square.

But there's still something not right with this. First of all, with a tree that huge, it would take quite a bit of time to find the correct contribution information between two squares, no matter what search algorithm you used! So we have to find some way to cut this list down. There is one easy way to do this: thresholds. When you are pre-calculating the light contribution levels for each square, only record the ones that are above a certain level (i.e. the ones that actually have some significance); then any light contribution that won't make a difference anyway won't be stored in the tree!
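
As a hedged sketch of those two steps (the names, such as "Contribution", and the threshold value are illustrative): during pre-processing, only contributions above the threshold are kept, and at run time each square's radiosity is just a weighted sum of the other squares' ordinary lighting values:

[code]
#include <vector>

struct Color { float r, g, b; };

// One pre-computed entry: how much light a given source square contributes here.
struct Contribution {
    int   source;   // index of the contributing square
    float weight;   // pre-computed contribution level, used as a weight
};

// Pre-process: keep only the contributions that are actually significant.
std::vector<Contribution> cull_contributions(const std::vector<Contribution> &all,
                                             float threshold)
{
    std::vector<Contribution> kept;
    for (const Contribution &c : all)
        if (c.weight > threshold)
            kept.push_back(c);
    return kept;
}

// Run time: radiosity for one square as a weighted sum of the other squares'
// regular (pre-radiosity) lighting values.
Color square_radiosity(const std::vector<Contribution> &contribs,
                       const std::vector<Color> &regular_lighting)
{
    Color out = {0.0f, 0.0f, 0.0f};
    for (const Contribution &c : contribs) {
        const Color &src = regular_lighting[c.source];
        out.r += src.r * c.weight;
        out.g += src.g * c.weight;
        out.b += src.b * c.weight;
    }
    return out;
}
[/code]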

But even with all of this, it would still be extremely slow to do all of this dynamically. So, guess what, we use thresholds once again! Except, in this case, we use them in real-time, not just to eliminate useless information from the pre-calculated tree.

The basic idea is that we don't want all of our pre-calculated radiosity to go to waste (and by radiosity, I mean the actual lighting values, not the contribution levels). So, what we do is we, by default, render the room with the pre-calculated lighting values.

Then, when a dynamic light source comes waltzing along, you check to see how much it affects each square. Each square will have a threshold of change, and only if its lighting is changed by more than the amount allowed by the threshold from its pre-calculated lighting values will the program actually take the time to re-calculate the radiosity for the other squares it affects.
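
That run-time test might look something like this (the per-channel comparison is an assumption; any reasonable measure of change would do):

[code]
#include <cmath>

// Reusing the Color struct from the sketch above. Returns true when the
// dynamic light changes this square's lighting by more than its allowed
// threshold, i.e. when the radiosity it contributes to other squares should
// be re-calculated.
bool exceeds_threshold(const Color &precalculated,
                       const Color &with_dynamic_light,
                       float threshold)
{
    return std::fabs(with_dynamic_light.r - precalculated.r) > threshold ||
           std::fabs(with_dynamic_light.g - precalculated.g) > threshold ||
           std::fabs(with_dynamic_light.b - precalculated.b) > threshold;
}
[/code]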

Of course, there is one little detail through all of this that I have neglected to mention: the pre-calculated radiosity in games is not stored as squares, as it is in non-real-time rendering engines; it is stored as light maps (which are like texture maps, except with lighting values). So that sends our entire idea to pot, right? Wrong. What we can do is place un-rendered vertices on the surfaces of walls and such, which correspond to pixels on the light maps. Each vertex will have the contribution information and the thresholds that determine whether or not the radiosity is re-calculated. So, when you light the walls with a dynamic light source, you give the vertices those light values and proceed as I described, except that if the radiosity does need to be re-calculated, you modify the light map by interpolating the lighting values from the vertices across the corresponding pixels of the light map.
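
As one possible sketch of that interpolation (the assumption that four vertices bound a rectangular block of light-map pixels is purely for illustration):

[code]
// Reusing the Color struct from the sketches above. Bilinearly interpolates
// the lighting values at four vertices across the rectangle of light-map
// pixels they bound, writing the result into the light map.
void fill_lightmap_block(Color *lightmap, int lightmap_width,
                         int x0, int y0, int x1, int y1,      // corner pixels
                         const Color &c00, const Color &c10,  // values at (x0,y0), (x1,y0)
                         const Color &c01, const Color &c11)  // values at (x0,y1), (x1,y1)
{
    for (int y = y0; y <= y1; ++y) {
        const float v = (y1 > y0) ? (float)(y - y0) / (float)(y1 - y0) : 0.0f;
        for (int x = x0; x <= x1; ++x) {
            const float u = (x1 > x0) ? (float)(x - x0) / (float)(x1 - x0) : 0.0f;
            Color &out = lightmap[y * lightmap_width + x];
            out.r = (1-u)*(1-v)*c00.r + u*(1-v)*c10.r + (1-u)*v*c01.r + u*v*c11.r;
            out.g = (1-u)*(1-v)*c00.g + u*(1-v)*c10.g + (1-u)*v*c01.g + u*v*c11.g;
            out.b = (1-u)*(1-v)*c00.b + u*(1-v)*c10.b + (1-u)*v*c01.b + u*v*c11.b;
        }
    }
}
[/code]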

As for choosing where to place the vertices, you should probably place more vertices in areas of high radiosity (i.e. where there is a high probability that there will be significant radiosity change, such as corners, where there are two surfaces very close to each other).


[size="5"]Dynamic Object to Static Object Radiosity

We now have a method for calculating the radiosity of dynamic objects, and for calculating the radiosity change caused by dynamic light sources. So now we should be fine and dandy, right? Well, seeing as this is a new section, and I have mentioned earlier that there is a third algorithm in this paper, I'm guessing that you know that we are not fine (although, perhaps not why).

Our radiosity solution is still not complete because, although we have a method for calculating the radiosity that static objects contribute to dynamic objects, we do not have a method for calculating the radiosity that dynamic objects contribute to static objects!

Well, there is a rather simple way of doing this. Basically, when you light your dynamic object, make certain that you maintain a list of lighting values grouped by surface normal. In other words, average the lighting values of all vertices whose surface normals face approximately right, and do the same for the ones that face approximately left, back, forward, up, and down. You now have a nice little list of the lighting values for each of the main directions.
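
A rough sketch of that grouping step (the bin ordering and the dominant-axis test are illustrative choices):

[code]
#include <cmath>
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Averages the per-vertex lighting values into six bins, one for each main
// direction (+X, -X, +Y, -Y, +Z, -Z), chosen by each normal's dominant axis.
void directional_light_averages(const std::vector<Vec3> &normals,
                                const std::vector<Color> &vertex_light,
                                Color out[6])
{
    int counts[6] = {0, 0, 0, 0, 0, 0};
    for (int i = 0; i < 6; ++i) out[i] = Color{0.0f, 0.0f, 0.0f};

    const int n = static_cast<int>(normals.size());
    for (int i = 0; i < n; ++i) {
        const Vec3 &nm = normals[i];
        const float ax = std::fabs(nm.x), ay = std::fabs(nm.y), az = std::fabs(nm.z);
        int bin;
        if (ax >= ay && ax >= az) bin = (nm.x >= 0.0f) ? 0 : 1;  // +X / -X
        else if (ay >= az)        bin = (nm.y >= 0.0f) ? 2 : 3;  // +Y / -Y
        else                      bin = (nm.z >= 0.0f) ? 4 : 5;  // +Z / -Z

        out[bin].r += vertex_light[i].r;
        out[bin].g += vertex_light[i].g;
        out[bin].b += vertex_light[i].b;
        ++counts[bin];
    }
    for (int i = 0; i < 6; ++i) {
        if (counts[i] > 0) {
            out[i].r /= counts[i];
            out[i].g /= counts[i];
            out[i].b /= counts[i];
        }
    }
}
[/code]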

Now comes the part which actually approaches being complex (but doesn't quite get there). What you do now is make a light source at the center of the dynamic object (which is casting the radiosity), and for each light-to-surface vector that is calculated, you check to see how close it is to being up, down, left, right, etc. You use the values that result from that directional check as weights, and you then use those weights to determine the lighting value for that particular light-to-surface vector (based on the lighting values obtained from the grouping of surface normals).
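
One way to read that directional check, as a hedged sketch (using max(0, dot-with-axis) as the weights is an assumption, not a fixed part of the method):

[code]
#include <algorithm>

// Reusing Vec3 and Color from the sketch above. Given a normalized
// light-to-surface direction, weights the six directional colors by how
// closely the direction matches each main axis and returns the blended result.
Color radiated_color(const Vec3 &dir, const Color directional[6])
{
    // Weights for +X, -X, +Y, -Y, +Z, -Z.
    const float w[6] = {
        std::max(0.0f,  dir.x), std::max(0.0f, -dir.x),
        std::max(0.0f,  dir.y), std::max(0.0f, -dir.y),
        std::max(0.0f,  dir.z), std::max(0.0f, -dir.z)
    };
    Color out = {0.0f, 0.0f, 0.0f};
    float total = 0.0f;
    for (int i = 0; i < 6; ++i) {
        out.r += directional[i].r * w[i];
        out.g += directional[i].g * w[i];
        out.b += directional[i].b * w[i];
        total += w[i];
    }
    if (total > 0.0f) {
        out.r /= total; out.g /= total; out.b /= total;
    }
    return out;
}
[/code]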

The reason you don't just find one lighting value for the light source is that there are going to be different intensities and colors of light reflected in different directions by the object.


[size="5"]Conclusion

So, we now have three methods (two of which are not very well documented here) which, when combined, produce a full scene dynamic radiosity solution. I truly hope to see these methods implemented in a graphics engine soon, and I would be happy to help clear any confusion that may have been raised from reading this article.

Also, I realize that I was very quick and dirty in explaining the second two methods. There are a couple of reasons for that. The first is that I have not fully developed them yet, even if I do have the basic ideas worked out. The second is that I felt I needed to get the article out, since I had already been working on it for about two months. I might write another article or two going more in-depth into those last two methods, but I'm not promising anything. And even if I do write another article on the subject, it will be quite a while from now.

However, I would be delighted to see someone else write an article explaining the implementation of the methods, or even just recapping what I have written in a way that can actually be understood ;) (particularly the last two methods). I would be happy to help explain the methods through discussion (not in one big e-mail, since then I might as well just write another article) to anyone who would like to write a better article on the last two methods (or perhaps even the first).

Send questions/comments to: [email="nathan_vegdahl@yahoo.com"]nathan_vegdahl@yahoo.com[/email]


[size="5"]Thank yous

I received an onslaught of e-mails from the previous article... and I loved it! Thanks to each and every one of you who e-mailed me about the article, for whatever reason!

And a special thanks to all of you who pointed out errors in the radiosity mapping method of the previous article, and possible optimizations to it.

All of you know who you are, and the only reason I don't list you is because that would require pages of credits!
