A radiometry question for those of you who own Real-Time Rendering, 3rd Edition


Thanks, I think I'll need to read over that more than once before I properly get it. I haven't gotten as far as area lights yet, so I have no concept of what they are. I also need to revise integration, which I will do in the next day or so.

Radiance is sort of an abstract quantity, but I think it's not so bad if you think about it in terms of its dimensions. When rendering, we like to think of light in terms of geometrical optics. So, instead of treating light as waves with a continuous spread of energy, we treat it as discrete rays that shoot straight from the surface you're rendering to the pixel element on the screen. This takes some doing, however, because in reality, light is a continuous wave (classically speaking -- no intention of modeling things at the quantum level here).

So how do you turn a continuous quantity like an EM wave into a discrete ray? By analogy, consider mass. As a human, you have a certain mass. However, that mass is not distributed evenly in your body. Some tissue is denser than other tissue; bone, for instance, is denser than muscle. Let's say you knew the mass density function for your body. That is, if someone gives you a coordinate (x,y,z) that is inside your body, you can plug it into the function and the result is the mass density at that coordinate. How would you calculate the total mass of your body with this function? Well, you would split the volume into a bunch of tiny cubes, sample the density function at the center (say) of each cube, multiply that density by the cube's volume to get the mass of that cube, and then add up the masses of all the tiny cubes. The tinier the cubes, the more of them you'll have to use, but the more accurate your mass calculation will be. Where integral calculus comes into play is that it tells you the mass you get in the limiting case where the cubes are infinitely tiny and there are infinitely many of them. In my opinion, it's easier to reason about it as "a zillion tiny cubes" and just remember that the only difference with integral calculus is that you get an exact answer rather than an approximation.
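As a rough sketch of that idea (the density function here is completely made up, just to make the "zillion tiny cubes" picture concrete), this is what the summation looks like in code; the integral is simply the limit of this as the cubes shrink:

```python
import numpy as np

def density(x, y, z):
    # Hypothetical mass density function in kg/m^3; stands in for
    # "the density at any point (x, y, z) inside the body".
    return 1000.0 + 50.0 * np.sin(x) * np.cos(y) * np.sin(z)

def total_mass(size=1.0, n=100):
    # Split a size x size x size block into n^3 tiny cubes, sample the
    # density at each cube's center, and sum density * cube volume.
    h = size / n                                  # edge length of one tiny cube
    centers = (np.arange(n) + 0.5) * h            # cube-center coordinates along one axis
    x, y, z = np.meshgrid(centers, centers, centers, indexing="ij")
    return np.sum(density(x, y, z)) * h ** 3      # sum of (density * tiny volume), in kg

# The larger n is, the closer this gets to the exact integral.
print(total_mass(n=50))
```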

So consider a surface that you're rendering. It is reflecting a certain amount of light that it has received from a light source. We want to think of the light in terms of energy, unlike the mass example. The surface as a whole is reflecting a certain amount of energy every second, which we call the energy flux (measured in Watts, also known as Joules/sec). However, we don't really care what the entire surface is doing. We just want the energy density along a specific ray. So, let's break the surface down into tiny little area elements (squares) and figure out how much flux is coming from each tiny area element. We only care about the area element that is under our pixel. That gives us a flux density per unit area, which is called Irradiance (or Exitance, depending on the situation). So now we know the energy flux density being emitted from the area under our pixel. But wait! Not all of that energy is moving toward our pixel. That little surface element is emitting energy in all directions. We only want to know how much energy is moving in the specific direction of our pixel. So, we need to further break down that Irradiance according to direction, to find out how much of that Irradiance is being emitted along each direction (a.k.a. infinitesimal solid angle). This gives us an energy density with respect to time, area, and solid angle, known as Radiance.
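To put symbols on that chain of densities (these are just the standard radiometric definitions, nothing specific to the book): flux is energy per unit time, irradiance/exitance is flux per unit area, and radiance is flux per unit projected area per unit solid angle. The cos θ is the projected-area factor, which the paragraph above glosses over.

```latex
\Phi = \frac{\mathrm{d}Q}{\mathrm{d}t}\ [\mathrm{W}], \qquad
E = \frac{\mathrm{d}\Phi}{\mathrm{d}A}\ \left[\mathrm{W\,m^{-2}}\right], \qquad
L = \frac{\mathrm{d}^{2}\Phi}{\mathrm{d}A\,\cos\theta\,\mathrm{d}\omega}\ \left[\mathrm{W\,m^{-2}\,sr^{-1}}\right]
```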

Great explanation, thanks! I have hit my "Calculus for Dummies" book to relearn differentiation and integration, as they're both subjects I haven't looked at since high school, and after 15+ years, they've faded significantly.

I think one of the reasons that radiance is confusing is that it is used in a few different situations, interchangeably, and people don't often specify when they're switching from one to the other. I think the important thing to remember is that we want to quantify the "amount of light" that is contained in a single ray. That's all. The ray could be drawn from a light source to a surface, or from a surface to the camera pixel, or whatever. So, you'll see both in various diagrams. The information I posted concerns what is meant by "amount of light" -- if you look at the physics of it, and do the dimensional analysis, you see that the quantity of light present in a ray has to be an energy flux density of some kind. In this case, a flux density with respect to both area and solid angle.

Concerning the calculus, I do think it's a good idea to brush up, but I don't think that the actual methods of integration and differentiation will be as important as just understanding conceptually what integration and differentiation are, and that they are inverses of one another much in the same way that division is the inverse of multiplication. It's not like we're doing the full calculus anyway. We're not actually performing a triple integral (over the area of the pixel, the hemisphere of incoming rays at each point, and the exposure time) in order to get the actual photon energy hitting the pixel. We're simply assuming that the radiance doesn't change much within those limits, and thus the radiance is assumed to be directly proportional to the energy collected by the pixel while the "shutter" was open. The tone mapping step decides the constant of proportionality.

By analogy, it's as if you made the assumption that the density of a ball is uniform throughout, and so instead of integrating over the ball's density function over the volume of the ball, you just sample the density function in the middle of the ball and multiply that by the ball's volume. It's an approximation, but if the ball's density is pretty much uniform throughout, then it's a very good approximation.

The calculus that you see in these books and papers is there for the sake of being precise while explaining the theory, but the stuff that is actually coded is only an approximation of the theory.
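To make that concrete, here is roughly what the "skip the triple integral and assume radiance is constant" step from a couple of paragraphs back looks like; every name and number here is made up for illustration, and a real renderer folds the constants into exposure and tone mapping:

```python
def pixel_energy(radiance, pixel_area, solid_angle, exposure_time):
    # Instead of integrating radiance over the pixel's area, the incoming
    # solid angle, and the exposure time, assume it is constant over all
    # three and just multiply them together: energy in Joules.
    return radiance * pixel_area * solid_angle * exposure_time

# Hypothetical values, purely to show that the collected energy is
# directly proportional to the radiance along the ray:
energy = pixel_energy(radiance=5.0,          # W / (m^2 * sr)
                      pixel_area=1.0e-10,    # m^2
                      solid_angle=1.0e-4,    # sr
                      exposure_time=1.0/60)  # s
print(energy)
```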

By the time I'm done with this phase of my learning (I know it'll take a while to cram it all into my head), I'd like to have been able to implement a number of good quality lighting algorithms, global illumination included, along with reasonably nice ambient occlusion and dynamic shadows, and have an understanding of depth of field and so forth. I'd like to look at what I've rendered and feel that it's looking really nice just based on the lighting alone, even if all of the polygons are simply vertex coloured.

For me to achieve these goals, what would you say is required in terms of advanced calculus and other related fields? How deep down the math hole should I go? I'll do whatever it takes, but guidance is always appreciated to save me possibly wasting my time, or using it inefficiently.

I'm not that far ahead of you, so someone else might have a better idea than I do. I work with realtime graphics (OpenGL) but not games per se, and my company's graphics needs are very modest compared to that of AAA game companies. I have been trying to push things like HDR and PBR, but that would require an R&D outlay that my company is not prepared to spend, and so I find myself trying to fit those things in during my spare time. Unfortunately, I am also in school full-time as a physics student, and so my time for side projects is nil. </life story>

My best guess is that it would be good to practice your calculus so that you can understand the theory a little better. I still think that understanding what an integral does is more important than understanding how to actually do one. If you're reading a paper on radiometry and you see an integral, you want to be able to reason about what quantity the integral will produce. You'd be surprised how much you can learn and understand simply by thinking about the units. "Aah. This is a mass density function, so if I integrate it over this volume, I'll get the mass of that volume." Being able to actually do the integral is less important. There may even be areas in graphics that involve differential equations, but once again, I think that understanding why you need to solve the equation, and what you get out of it, is more important than actually being able to solve the equation itself.

I would also guess that Linear Algebra is a lot more directly applicable. It's not just the fact that you're multiplying matrices and vectors; a lot of concepts from Linear Algebra (vector spaces, function spaces, inner products, orthogonality) carry over into other things, like Fourier analysis (which you might run into if you want to do ocean rendering, edge detection, or any graphics technique that requires a low-pass or high-pass filter or a convolution).
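If you want a taste of where that shows up, here is a minimal hand-rolled 1D convolution (a box blur); blur, bloom, and edge-detection passes are the 2D version of the same weighted-sum idea. The signal and kernel are made up for illustration:

```python
import numpy as np

def convolve_1d(signal, kernel):
    # Slide the kernel across the signal and take a weighted sum at each
    # position: the discrete counterpart of the convolution integral.
    # (For a symmetric kernel like a box filter, correlation and
    # convolution are the same, so we skip flipping the kernel.)
    n, k = len(signal), len(kernel)
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")   # repeat edge samples at the borders
    return np.array([np.dot(padded[i:i + k], kernel) for i in range(n)])

signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])  # a hard edge
box_blur = np.ones(3) / 3.0                        # simple low-pass (blur) kernel
print(convolve_1d(signal, box_blur))               # the edge comes out softened
```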

Good luck!
