
A radiometry question for those of you who own Real Time Rendering, 3rd Edition

15 replies to this topic

#1 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 12:47 AM

I'm a bit confused by the following diagram (7.4) on page 207 of Real-Time Rendering, 3rd Edition:

 

2014-10-26_1634.png

 

I think the text before and after makes a few assumptions and doesn't quite explain what I'm looking at well enough. Here's what I understand:

  • dw is the differential of the solid angle w, measured in steradians
  • dA is the area irradiated by the light ray with respect to dw
  • n is obviously the surface normal
  • The orange line is clearly a light ray

Why am I confused? The light ray is apparently hitting a surface (hence the surface normal pointing away from it), and the angle depicted is the angle between the incoming ray and the surface normal. But what is the circle at dw indicating? My understanding was that a solid angle is easily visualised as a cone with its apex at the light source, with the circle representing a patch on the surface of a sphere enclosing the light source. Why, then, is that circle in the diagram apparently situated at the light source? It seems to me that the cone in the diagram should be facing the other way.

 

p.s. I hope the authors don't mind me displaying this diagram. I will remove it and replace it with my own hand-drawn version if requested to do so.


Edited by axefrog, 26 October 2014 - 12:50 AM.

I'm blogging about my journey to learn 3D graphics and game programming: http://nathanridley.com



#2 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 01:06 AM

Actually, looking at it a bit more, I'm wondering if the small orange circle indicates that the light source is on the other side (i.e. not shown, much further to the top left of the diagram), and that in effect we're looking at the hypothetical patch surface hit by the light ray, which is then projected onto a vertical surface. In other words, the diagram would be better drawn with the orange circle touching the blue oval. I could be completely wrong, but that's how I'm thinking it's supposed to be interpreted. Similarly, I'm wondering if the cone appears to be backwards because it's just trying to show the dispersion of the light ray (or w) with respect to the ray projecting onto the blue surface?




#3 Álvaro   Crossbones+   -  Reputation: 13368


Posted 26 October 2014 - 05:31 AM

I don't have the book, but I think I might be able to help.

Why then is that circle in the diagram apparently situated at the light source?


You need to imagine multiple light sources, or light sources that are not points. You are interested in summing the contributions from light sources in the circle.

If that's not clear enough, ask again.

#4 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 06:57 AM


You need to imagine multiple light sources, or light sources that are not points. You are interested in summing the contributions from light sources in the circle.

 

I'm not sure that's correct, is it...? The red circle is labelled dw (where w is omega) indicating that it represents a specific solid angle (or change therein), which is measured in steradians. So if what you said is correct, what's the significance of dw and why would the writer use a tiny circle to indicate some arbitrary group of light rays from just that circular area, of which the orange line is one?


Edited by axefrog, 26 October 2014 - 06:58 AM.



#5 Álvaro   Crossbones+   -  Reputation: 13368


Posted 26 October 2014 - 07:18 AM

What's the significance of dw? We are integrating some quantity (radiance) and the dw indicates what we are integrating over. People often think of dw as representing a tiny solid angle, which the author drew as a little circle, but this is mostly just a way of developing intuition for it.

If you are having a hard time understanding surface integrals, you are not alone. There is a subject called Differential Geometry that deals with all these things rigorously, but it's not for the faint of heart.

#6 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 02:27 PM

Alvaro, I appreciate your answers; I just have a feeling they're lacking the context of the preceding pages. The way the book is structured, each chunk of content builds on what comes before it, starting off simply enough that the book is very approachable, which is one of the reasons it's so good. The diagrams and illustrations in the book are no different (so far) in this regard, which is why the question was targeted at people who have a copy of the book. I'm honestly not disagreeing with your answer; I was just hoping for one that included some context from what comes before the diagram. The mathematics isn't really a problem, and while my knowledge of calculus is patchy, I have been switching out and learning the topics I need, as needed. If there's something new to learn here in that regard, I have no problem doing that.




#7 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 04:51 PM

Hmm... I'm thinking reading ahead may have been a good idea. The red circle may actually represent a patch surface on the surface of the light source.




#8 Álvaro   Crossbones+   -  Reputation: 13368


Posted 26 October 2014 - 07:20 PM

You need to imagine multiple light sources, or light sources that are not points. You are interested in summing the contributions from light sources in the circle.



#9 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 07:43 PM

Ah, right, my bad, thanks.




#10 Bacterius   Crossbones+   -  Reputation: 8945


Posted 26 October 2014 - 07:50 PM

Technically you really should be considering area lights. Point lights are an unphysical hack that doesn't sit well with radiometry and needs special treatment. If you consider area lights, then it makes perfect sense: you are integrating over the projected surface area of the light (over the solid angle subtended by that light source from your point of view), and hence calculating the total amount of radiation the light is emitting towards you. The dw is a differential solid angle and does not represent a surface area; rather, dw is an infinitesimal area of the unit sphere (it has units of steradians) and is usually defined as dw = sin(theta) dtheta dphi, via the following parameterization of the unit sphere (or hemisphere in the illustration below):

 

9gxCkK9.png

As Alvaro said, people sometimes think of it as a cone (or occasionally a direction) because that's more intuitive. If you are still confused: you don't integrate over the light's surface area directly, but over its surface area projected onto your field of view (the hemisphere above), since that is the definition of radiance: the amount of light falling onto some surface of interest through some differential solid angle. And that's really all you are interested in.
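A quick numerical sanity check of that parameterization (a Python sketch; the function name and sample counts are my own, not from the thread): summing dw = sin(theta) dtheta dphi over the whole hemisphere should give its total solid angle, 2*pi steradians.

```python
import math

def hemisphere_solid_angle(n_theta=400, n_phi=400):
    """Midpoint-rule sum of dw = sin(theta) dtheta dphi over
    theta in [0, pi/2] and phi in [0, 2*pi]."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta  # sample at the midpoint of each strip
        for _ in range(n_phi):
            total += math.sin(theta) * d_theta * d_phi
    return total

print(hemisphere_solid_angle())  # close to 2*pi, i.e. ~6.2832
```

Swapping the integration bounds for the full sphere (theta up to pi) would give 4*pi, the solid angle of the whole unit sphere.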


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#11 axefrog   Members   -  Reputation: 414


Posted 26 October 2014 - 07:58 PM

Thanks, I think I'll need to read over that more than once before I properly get it. I haven't gotten as far as area lights yet, so I have no concept of what they are. I also need to revise integration, which I will do in the next day or so.




#12 CDProp   Members   -  Reputation: 990


Posted 29 October 2014 - 11:01 AM

Radiance is sort of an abstract quantity, but I think it's not so bad if you think about it in terms of its dimensions. When rendering, we like to think of light in terms of geometrical optics. So, instead of light being waves with a continuous spread of energy, it is modeled as discrete rays that shoot straight from the surface you're rendering to the pixel element on the screen. This takes some doing, however, because in reality light is a continuous wave (classically speaking; no intention of modeling things at the quantum level here).

So how do you turn a continuous quantity like an EM wave into a discrete ray? By analogy, consider mass. As a human, you have a certain mass. However, that mass is not distributed evenly in your body; some tissue is denser than other tissue. For instance, bone is denser than muscle. Let's say you knew the mass density function for your body. That is, if someone gives you a coordinate (x,y,z) inside your body, you can plug it into the function and get the mass density at that coordinate. How would you calculate the total mass of your body with this function? Well, you would split the volume up into a bunch of tiny cubes, sample the density function at the center (say) of each cube, multiply that density by the volume of the cube to get the mass of that cube, and then add up the masses of all the tiny cubes. The tinier the cubes, the more of them you'll have to use, but the more accurate your mass calculation will be.

Where integral calculus comes into play is that it tells you the mass you get in the limiting case where the cubes are infinitely tiny and there are infinitely many of them. In my opinion, it's easier to reason about it as "a zillion tiny cubes" and just remember that the only difference with integral calculus is that you get an exact answer rather than an approximation.
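The tiny-cubes idea above can be sketched in a few lines of Python. The density function rho(x, y, z) = 1 + z is a made-up example (not from the thread), chosen so the exact integral over the unit cube is known to be 1.5.

```python
def approximate_mass(n=40):
    """Sum density * volume over n^3 tiny cubes filling the unit cube,
    with a made-up density rho(x, y, z) = 1 + z. Exact answer: 1.5."""
    h = 1.0 / n            # edge length of each tiny cube
    cell_volume = h ** 3
    mass = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                z = (k + 0.5) * h          # sample rho at the cube's center
                mass += (1.0 + z) * cell_volume
    return mass

print(approximate_mass())  # ~1.5 (center sampling is exact for a linear rho)
```

Making `n` larger is exactly the "tinier cubes" refinement described above; the integral is the limit as `n` goes to infinity.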

So consider a surface that you're rendering. It is reflecting a certain amount of light that it has received from a light source. We want to think of the light in terms of energy, unlike the mass example. The surface as a whole is reflecting a certain amount of energy every second, which we call the energy flux (measured in Watts, also known as Joules/sec). However, we don't really care what the entire surface is doing. We just want the energy density along a specific ray. So, let's break the surface down into tiny little area elements (squares) and figure out how much flux is coming from each tiny area element. We only care about the area element that is under our pixel. That gives us a flux density per unit area, which is called Irradiance (or Exitance, depending on the situation). So now we know the energy flux density being emitted from the area under our pixel. But wait! Not all of that energy is moving toward our pixel. That little surface element is emitting energy in all directions. We only want to know how much energy is moving in the specific direction of our pixel. So, we need to further break down that Irradiance according to direction, to find out how much of that Irradiance is being emitted along each direction (a.k.a. infinitesimal solid angle). This gives us an energy density with respect to time, area, and solid angle, known as Radiance.
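That chain (flux → irradiance → radiance) can also be run in reverse numerically. As a sketch (function name and sample counts are my own), for the textbook special case of constant radiance L arriving over the whole hemisphere, integrating L * cos(theta) against the differential solid angle sin(theta) dtheta dphi recovers the irradiance, which analytically is pi * L.

```python
import math

def irradiance_from_uniform_radiance(L=1.0, n_theta=400, n_phi=400):
    """Integrate L * cos(theta) over the hemisphere of incoming
    directions, with dw = sin(theta) dtheta dphi. For constant L the
    analytic answer is pi * L."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    E = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta  # midpoint sample in each strip
        for _ in range(n_phi):
            E += L * math.cos(theta) * math.sin(theta) * d_theta * d_phi
    return E

print(irradiance_from_uniform_radiance())  # close to pi, i.e. ~3.1416
```

The cos(theta) factor is the same one in the diagram that prompted this thread: light arriving at a grazing angle spreads over more surface area and so contributes less per unit area.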

Edited by CDProp, 29 October 2014 - 12:59 PM.


#13 axefrog   Members   -  Reputation: 414


Posted 29 October 2014 - 01:38 PM

Great explanation, thanks! I have hit my "Calculus for Dummies" book to relearn differentiation and integration, as they're both subjects I haven't looked at since high school, and after 15+ years they've faded significantly.




#14 CDProp   Members   -  Reputation: 990


Posted Yesterday, 09:54 AM

I think one of the reasons radiance is confusing is that it is used in a few different situations, interchangeably, and people don't often specify when they're switching from one to the other. The important thing to remember is that we want to quantify the "amount of light" contained in a single ray. That's all. The ray could be drawn from a light source to a surface, or from a surface to the camera pixel, or whatever, so you'll see both in various diagrams. The information I posted concerns what is meant by "amount of light": if you look at the physics of it and do the dimensional analysis, you see that the quantity of light present in a ray has to be an energy flux density of some kind; in this case, a flux density with respect to both area and solid angle.

 

Concerning the calculus, I do think it's a good idea to brush up, but I don't think that the actual methods of integration and differentiation will be as important as just understanding conceptually what integration and differentiation are, and that they are inverses of one another much in the same way that division is the inverse of multiplication. It's not like we're doing the full calculus anyway. We're not actually performing a triple integral (over the area of the pixel, the hemisphere of incoming rays at each point, and the exposure time) in order to get the actual photon energy hitting the pixel. We're simply assuming that the radiance doesn't change much within those limits, and thus the radiance is assumed to be directly proportional to the energy collected by the pixel while the "shutter" was open. The tone mapping step decides the constant of proportionality.

 

By analogy, it's as if you made the assumption that the density of a ball is uniform throughout, and so instead of integrating over the ball's density function over the volume of the ball, you just sample the density function in the middle of the ball and multiply that by the ball's volume. It's an approximation, but if the ball's density is pretty much uniform throughout, then it's a very good approximation.
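That ball analogy in numbers (a Python sketch; the density model rho(r) = 1 + 0.01*r is hypothetical, chosen to be nearly uniform): integrating over thin spherical shells versus sampling once at the center and multiplying by the volume agree to within about one percent.

```python
import math

def mass_by_shell_integration(n=2000):
    """Integrate the hypothetical density rho(r) = 1 + 0.01*r over a
    unit ball using thin spherical shells:
    m = integral of rho(r) * 4*pi*r^2 dr for r in [0, 1]."""
    dr = 1.0 / n
    m = 0.0
    for i in range(n):
        r = (i + 0.5) * dr  # midpoint of each shell
        m += (1.0 + 0.01 * r) * 4.0 * math.pi * r * r * dr
    return m

def mass_by_center_sample():
    """Sample the density once, at the center, and multiply by volume."""
    rho_center = 1.0                 # rho(0) = 1 + 0.01 * 0
    volume = 4.0 / 3.0 * math.pi     # volume of the unit ball
    return rho_center * volume

print(mass_by_shell_integration())  # ~4.2202
print(mass_by_center_sample())      # ~4.1888, off by under 1%
```

This is the same trade the renderer makes: the full integral is the "correct" answer, but when the integrand barely varies across the domain, one sample times the domain size is plenty.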

 

The calculus that you see in these books and papers is there for the sake of being precise while explaining the theory, but the stuff that is actually coded is only an approximation of the theory.



#15 axefrog   Members   -  Reputation: 414


Posted Yesterday, 03:04 PM

By the time I'm done with this phase of my learning (I know it'll take a while to cram it all into my head), I'd like to have implemented a number of good-quality lighting algorithms, global illumination included, along with reasonably nice ambient occlusion and dynamic shadows, and to have an understanding of depth of field and so forth. I'd like to look at what I've rendered and feel that it looks really nice based on the lighting alone, even if all the polygons are simply vertex-coloured.

 

For me to achieve these goals, what would you say is required in terms of advanced calculus and other related fields? How deep down the math hole should I go? I'll do whatever it takes, but guidance is always appreciated to save me possibly wasting my time, or using it inefficiently.




#16 CDProp   Members   -  Reputation: 990


Posted Today, 08:30 AM

I'm not that far ahead of you, so someone else might have a better idea than I do. I work with realtime graphics (OpenGL), but not games per se, and my company's graphics needs are very modest compared to those of AAA game companies. I have been trying to push things like HDR and PBR, but that would require an R&D outlay my company is not prepared to make, so I find myself trying to fit those things in during my spare time. Unfortunately, I am also in school full-time as a physics student, so my time for side projects is nil. </life story>

 

My best guess is that it would be good to practice your calculus so that you can understand the theory a little better. I still think that understanding what an integral does is more important than understanding how to actually do one. If you're reading a paper on radiometry and you see an integral, you want to be able to reason about what quantity the integral will produce. You'd be surprised how much you can learn and understand simply by thinking about the units: "Aah, this is a mass density function, so if I integrate it over this volume, I'll get the mass of that volume." Being able to actually do the integral is less important. There may even be areas in graphics that involve differential equations, but once again, I think that understanding why you need to solve the equation, and what you get out of it, is more important than actually being able to solve the equation itself.

 

I would also guess that Linear Algebra is a lot more directly applicable. It's not just that you're multiplying matrices and vectors; a lot of concepts from Linear Algebra (vector spaces, function spaces, inner products, orthogonality) carry over into other things, like Fourier Analysis (which you might run into if you want to do ocean rendering, edge detection, or any graphics technique that requires a filtering pass or convolution).

 

Good luck!






