Radiometry: Why are solid angles used?

Hi! [smile] I originally posted this in the "For Beginners" forum, since i thought it was too basic a question to post here, but was told i'd have much better luck in this forum. I'm trying to learn more about lighting in graphics, and it seems that radiometry is the vocabulary used for advanced stuff. I'm having a hard time understanding many of the very basic radiometry concepts, though - some of it is very confusing to me and has been bugging me for a very long time. This is basically my question:
  • Why do we need to use a solid angle in the various radiometry equations, instead of directions? Let's say i want to know the light (power) which hits a point x on a surface from a certain direction d (i.e. i want to know the incident radiance). The way i would do it, would be to just look at the incoming light from the given direction d, but in radiometry they look at the incoming light from a solid angle instead. It seems to me that a solid angle is not just one direction, but many directions (the directions along all the vectors from x to every point of the solid angle's area on the hemisphere). So when using a solid angle, i'm not just getting the light from direction d, but also from a lot of other directions, which i wasn't interested in...
My question rephrased as 3 questions: (basically the same question 3 times, but they may give you an idea of where i stand)
  • Why are solid angles needed instead of normal directions (like, vectors or spherical coords)?
  • If it is true that a solid angle represents several directions, why is it used instead of a single direction?
  • Normally in computer graphics, we use rays for simulating how light travels (afaik), but in radiometry they use solid angles. Why? (isn't it extremely costly to calculate this??)
I would be very grateful for any help or pointers! [smile] I haven't found anywhere that explains basic stuff like this - all the places i've seen skip over why it is done the way it is done and jump directly into how it's done. :(
Quote:Original post by ZaiPpA
I'm trying to learn more about lighting in graphics, and it seems that radiometry is the vocabulary used for advanced stuff.

'Radiometric' only means that you are dealing with actual physical quantities instead of the generic colours of standard computer graphics. So instead of saying "this point has a colour of [0,255,0]", you'd say "this point has an exitant radiance of 100 W/m²/sr at a wavelength of 520 nm". It doesn't really mean anything more than that.

In computer graphics, you are rarely interested in radiometry, since this covers the entire electromagnetic spectrum, from radio waves, through infrared, visible light, UV and X-rays, and eventually up to gamma rays. What you are actually interested in is photometry. This is the same as radiometry, but weighted by the visual response curve of the human eye. Essentially, it deals with light that is actually visible to us.

Besides this distinction, radiometry and photometry are very similar in concept. Be aware though, that the units and terminology are very different: Watts become Lumens, W/m² becomes Lux, Radiance becomes Luminance, etc.
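
To make the weighting idea concrete, here is a minimal sketch (my own example; the 683 lm/W peak and the V(lambda) samples are standard CIE photopic values, quoted from memory and rounded) of how a radiometric quantity at a single wavelength becomes a photometric one:

```python
# Minimal sketch: radiometric -> photometric weighting for monochromatic light.
# Treat the exact numbers as approximate.

PEAK_LUMINOUS_EFFICACY = 683.0  # lm/W at 555 nm (photopic vision)

# Tiny excerpt of the CIE photopic luminosity function V(lambda)
V = {
    450: 0.038,
    520: 0.710,
    555: 1.000,
    650: 0.107,
}

def luminous_flux(radiant_flux_watts, wavelength_nm):
    """Convert radiant flux (W) at one wavelength into luminous flux (lm)."""
    return radiant_flux_watts * PEAK_LUMINOUS_EFFICACY * V[wavelength_nm]

# 100 W of pure 520 nm light is roughly 48,500 lumens to the eye,
# while the same 100 W at 650 nm (deep red) is only ~7,300 lumens.
print(luminous_flux(100.0, 520))
print(luminous_flux(100.0, 650))
```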

Quote:Original post by ZaiPpA
Let's say i want to know the light (power) which hits a point x on a surface from a certain direction d (i.e. i want to know the incident radiance).

The way i would do it, would be to just look at the incoming light from the given direction d, but in radiometry they look at the incoming light from a solid angle instead.

Looking at a single exact direction doesn't make any sense from a physical point of view. A direction is just a one-dimensional line, without any 'thickness', so it subtends no area at all. As such, it cannot transport any measurable amount of photons, which are needed to illuminate your point. What illuminates your (differential) surface area is a cone of light. Imagine such a cone, with its apex sitting on your infinitely small point, and its base spreading outwards. All the light coming in over that cone has to be accounted for. The maximum amount of light you can get is when your solid angle subtends the entire hemisphere, ie. a solid angle of 2pi steradians.
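
A quick way to see why a solid angle (rather than a single direction) is the right tool: the solid angle of a cone shrinks to zero as its half-angle shrinks to zero, so a lone direction carries exactly zero power. A small sketch (my own illustration, not from the post above):

```python
# Solid angle subtended by a circular cone with the given half-angle.
import math

def cone_solid_angle(half_angle_rad):
    """Solid angle (in steradians) of a cone with the given half-angle."""
    return 2.0 * math.pi * (1.0 - math.cos(half_angle_rad))

print(cone_solid_angle(math.radians(1)))    # a narrow 'pencil' of directions: ~0.00096 sr
print(cone_solid_angle(math.radians(90)))   # the whole hemisphere: 2*pi ~ 6.283 sr
print(cone_solid_angle(math.radians(180)))  # the whole sphere: 4*pi ~ 12.566 sr
```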

Quote:Original post by ZaiPpA
It seems to me that a solid angle is not just one direction, but many directions (the directions along all the vectors from x to every point of the solid angle's area on the hemisphere).

That's right. Any given (non-zero) solid angle covers an infinite number of directions.

Quote:Original post by ZaiPpA
So when using a solid angle, i'm not just getting the light from direction d, but also from a lot of other directions, which i wasn't interested in...

Oh, but you are interested in those 'other directions'! Keep in mind that light hitting a surface in reality doesn't only come along the normal. It comes in from all around the hemisphere, from an infinite number of directions simultaneously. This light is integrated over the receiving surface, which is conceptually divided into infinitely small patches.

Quote:Original post by ZaiPpA
  • Why are solid angles needed instead of normal directions (like, vectors or spherical coords)?


As said above, because you have to account for all the light coming in from all directions, not only along an imaginary line (which doesn't exist in reality).

Quote:Original post by ZaiPpA
  • If it is true that a solid angle represents several directions, why is it used instead of a single direction?


  • A solid angle corresponds to a patch of area on a sphere, and thus to an infinite number of individual directions.

    Quote:Original post by ZaiPpA
  • Normally in computer graphics, we use rays for simulating how light travels (afaik), but in radiometry they use solid angles. Why? (isn't it extremely costly to calculate this??)


  • This doesn't really have anything to do with radiometry. What you are thinking about is 'global illumination'.

    The solid angle, and more importantly the projected area, are just mathematical concepts used to represent the incoming light over the hemisphere (or parts thereof) and how it affects the underlying surface. Many different ways to calculate this light exist. It all boils down to calculating the integral over this hemisphere. You can use individual rays for this, for example through Monte Carlo integration. Here, the (projected) solid angle is taken into account by varying the sample probabilities appropriately.

    And yes, you even use the projected area (and thus the solid angle) when doing your good old dot-product lighting (L dot N)! This is actually where the cosine term in the simple Lambertian lighting model comes from.
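
    To make that a bit more concrete, here is a minimal sketch (my own toy example under simplified assumptions, not code from any particular renderer) of what 'integrating radiance over the hemisphere' looks like with Monte Carlo sampling. The cosine factor in the integrand is exactly the L dot N term mentioned above; the constant sky radiance is just a made-up placeholder.

    ```python
    # Estimate the irradiance E at a patch by Monte Carlo integration of
    # incoming radiance over the hemisphere: E = integral of L_i(w)*cos(theta) dw.
    # With a hypothetical uniform sky of 1 W/m^2/sr the exact answer is pi.
    import math, random

    def incoming_radiance(theta, phi):
        return 1.0  # placeholder: plug in any L_i(direction) here

    def estimate_irradiance(num_samples=100000):
        total = 0.0
        for _ in range(num_samples):
            # Uniformly sample a direction on the hemisphere (solid-angle measure).
            u1, u2 = random.random(), random.random()
            theta = math.acos(u1)          # cos(theta) uniform in [0,1]
            phi = 2.0 * math.pi * u2
            pdf = 1.0 / (2.0 * math.pi)    # pdf of uniform hemisphere sampling, per steradian
            total += incoming_radiance(theta, phi) * math.cos(theta) / pdf
        return total / num_samples

    print(estimate_irradiance())  # ~3.14159 = pi
    ```
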
    I have a question on this topic too, since I've had some problems getting the whole solid-angle thing. It's especially the term radiance that confuses me.

    Okay, so I have flux, and then I have irradiance, which is the watts per square metre. But radiance, is that the same thing as irradiance, just multiplied by some factor? To get it per solid angle (area on the hemisphere)? I don't really get the term radiance (I know it's W/m^2/sr, but that doesn't say anything to me). Would be nice with an example (if I have a point emitting some light in a direction and so on..)
    Quote:Original post by zurekx
    I have a question on this topic too, since I've had some problems getting the whole solid-angle thing. It's especially the term radiance that confuses me.

    Okay, so I have flux, and then I have irradiance, which is the watts per square metre. But radiance, is that the same thing as irradiance, just multiplied by some factor? To get it per solid angle (area on the hemisphere)? I don't really get the term radiance (I know it's W/m^2/sr, but that doesn't say anything to me). Would be nice with an example (if I have a point emitting some light in a direction and so on..)


    Radiance is outgoing energy. Irradiance is incoming energy. So the Sun radiates photons, and they irradiate the Earth. They are simply the inverse of each other.

    The formula for the surface area of a sphere is 4*pi*r^2. It's no coincidence that the total solid angle is 4*pi.

    To help visualize solid angle, it might be useful to think of the Earth not as a sphere, but as a disc that faces the Sun. Now, visualize a cone using this disc as its base, with the tip of the cone at the centre of the Sun. As you move the disc further and further away from the Sun, the cone narrows, and the portion of the Sun's surface that it intersects gets smaller and smaller (remember, solid angle is analogous to surface area where spheres are concerned). As you move away, the solid angle drops off as 1/r^2. This is the root meaning behind the inverse-square law of electromagnetism and Newtonian gravitation.

    According to the Stefan-Boltzmann law, the energy radiated per second per square metre by an ideal blackbody is given by the equation E = sigma*T^4. Here, sigma is the Stefan-Boltzmann constant (5.67051e-8), and T is temperature in units Kelvin. Using an estimated temperature for the Sun's surface of 5786 Kelvin, the energy is:

    E_unit = 5.67051e-8 * 5786^4 = 63509044 Joules (per square metre per second)

    Using an estimated radius for the Sun of 6.96e8 metres, its surface area is:

    A = 4*pi*r^2 = 6.08735e18 square metres

    So, the total energy emitted by the Sun per second from its entire surface is:

    E_total = E_unit * A = 3.86601e26 Joules (per second)

    Using Mars as an example, we calculate the solid angle it occupies from the perspective of the Sun by constructing the aforementioned "disc", then dividing the disc's total area by its distance from the Sun squared (r^2). The estimated radius of Mars (r_m) is 3389950 metres, and its average distance from the Sun is 227936637500 metres. The solid angle is:

    omega = (pi*r_m^2) / r^2 = 6.94877e-10 (steradians)

    So, the aforementioned "cone" intersects omega / (4*pi) of the Sun's total surface area:

    x = omega/(4*pi) = 5.52966e-11 (a dimensionless measure)

    The total amount of energy received by Mars is then:

    E_in = E_total * x = 2.13778e16 Joules (per second)

    By reversing the Stefan-Boltzmann equation, we can estimate the average temperature of Mars by assuming that the total input energy is equal to total output energy (this is another property of the ideal blackbody).

    T_m = (E_in / (4*pi*r_m^2) / sigma)^(1/4) = 226 Kelvin

    This is very close to the observed average of around 227 Kelvin (-46 Celsius... a very cold winter day in Regina, brrrr, no fun!).

    P.S. (n)^(1/4) signifies the 4th root of n.
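
    In case it helps to see it all in one place, here is the same back-of-the-envelope calculation as a short Python script (my own transcription of the numbers above; all figures are the rough estimates used in the text, not precise data):

    ```python
    import math

    sigma = 5.67051e-8          # Stefan-Boltzmann constant, W/m^2/K^4
    T_sun = 5786.0              # estimated surface temperature of the Sun, K
    r_sun = 6.96e8              # estimated radius of the Sun, m
    r_mars = 3389950.0          # estimated radius of Mars, m
    d_mars = 227936637500.0     # average Sun-Mars distance, m

    E_unit = sigma * T_sun**4                    # ~6.35e7 W per m^2 of solar surface
    E_total = E_unit * 4.0 * math.pi * r_sun**2  # ~3.87e26 W from the whole Sun

    omega = math.pi * r_mars**2 / d_mars**2      # solid angle of the Mars 'disc', ~6.95e-10 sr
    fraction = omega / (4.0 * math.pi)           # ~5.53e-11 of the Sun's output
    E_in = E_total * fraction                    # ~2.14e16 W intercepted by Mars

    # Assume Mars re-radiates E_in from its whole surface as a blackbody.
    T_mars = (E_in / (4.0 * math.pi * r_mars**2) / sigma) ** 0.25
    print(T_mars)  # ~226 K
    ```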

    [Edited by - taby on March 25, 2008 11:03:02 PM]
    Thank you very much for your answer, Yann L!

    Sorry for the late reply, i was away for Easter and spent 2 days thinking hard about it, rereading articles and taking notes. You made some good points! I think you have helped me gain a much better understanding, so thank you very much!!

    I see that the reason radiometry is the way it is, is physics - which is good, since that is what we're trying to simulate. [smile] I will try to briefly state my current understanding of it - i would really appreciate it if someone would correct me or say that i'm right, so that i can be sure that i have understood it correctly - i don't know much about physics, so this is a bit of guess-work for me.


    Points vs. area: (patches)

    Rather than points being lit up (which have no area), areas are being lit up in radiometry/physics (there is no such thing as a point in reality which can be lit up (a point is smaller than an atom! even a quark!)).

    The way you can talk about points anyway is to choose a very small (infinitesimal) surface area dA(x) centered around the point x (a "patch"). Then you have an area.

    Light is transferred between patches (or larger areas), not between points!

    Is this correct?

    direction vs. solid angle:
    Quote:Original post by Yann L
    Looking at a certain direction doesn't make any sense from a physical point of view. A direction is just a two dimensional line, without any 'thickness'. It can't even exist in 3-dimensional space. Such, it cannot transport any photons, which are needed to illuminate your point.
    Ok, i understand what you're saying. Since a line has no thickness, photons are 'too big' to travel through it. But this still doesn't mean that a group of photons couldn't travel in the same direction (inside a very thin cylinder), but then you say...
    Quote:Original post by Yann L
    What illuminates your (differential) surface area is a cone of light.
    So it sounds like photons always have slightly different directions (even in a laser?). So i guess it is extremely unlikely (impossible?) that two photons have the exact same direction in physics? (if so, i understand... mathematically "exact" is after all exact to 'an infinite number of decimals'...)

    So..:

    So if we want to know what light from point x1 hits point x2, we're not looking at the points x1 and x2, but at the patches (small areas) dA(x1) and dA(x2) around x1 and x2.

    And then we could for example use the projected area (solid angle) of dA(x2) on the hemisphere at x1, to get the amount of power the patch dA(x1) at x1 transmits to the patch dA(x2) at x2. (the outgoing radiance Lo(x1, x1->x2))

    cone:

    One thing i'm a bit uncertain of, though.. For the light to travel along a cone, the projected area (solid angle) of the patch dA projected onto the hemisphere, should be 'round' and thus dA should be round as well.

    Hmm... Oh, wait... By "light traveling in a cone", you just mean that the photons 'spread out from each other the further distance they have traveled' right? (which follows from the fact that a solid angle represents all the directions from x to every point on the solidangle-area). So no matter if the solid angle is round or not, you can always choose a cone which contains the solid-angle (the intersection between the cone and the hemisphere doesn't have to be equal to the solid area), and the light will stay within this cone during its travel. Right?

    to conclude:
    • So when people are talking about "points" in the computer graphics articles which use radiometry, they actually don't mean points but small patches dA around the points?

    • When the articles are talking about light travelling along a line, they actually mean light travelling along a cone. In other words, when they talk about "directions" they are talking about the infinite number of directions represented by the solid angle. (approximated by the direction between the two points of the two patches)

    • Also, when we (and the articles) normally talk about light traveling in rays, it is wrong. It travels along a cone. So i guess rays are just an approximation to cones - which are good approximations if the cones are very 'thin'. Right?
    Right?

    (for example, the articles i've read use the following parameters to the radiance function L: L(x, k), where x is a point and k is a direction. I guess this should be understood to mean that x implicitly has a patch dA(x) associated with it, and the direction implicitly has a solid angle dw (of some other patch projected onto the hemisphere of x) associated with it? (This will also make the equations make sense...))

    Other:
    Quote:Original post by taby
    Radiance is outgoing energy. Irradiance is incoming energy.
    Is it? I thought, loosely speaking, that radiance could be either incoming or outgoing (depending on what you look at - Li or Lo) power in a certain direction, while irradiance is the incoming power from all directions. (and radiant exitance is outgoing power to all directions) (by direction i guess i mean solid angle now... hehe :)) Maybe i'm wrong?

    Anyway, thanx for your example (and thanx to zurekx for the question). I now see the power of radiometry - it works not only on the microscopic scale but also on the macroscopic scale. Pretty cool!


    Thanx for any help! [smile]

    [Edited by - ZaiPpA on March 25, 2008 9:27:19 PM]
    Quote:Original post by ZaiPpA
    Quote:Original post by taby
    Radiance is outgoing energy. Irradiance is incoming energy.
    Is it?


    Yes indeed:

    http://dictionary.reference.com/browse/irradiance

    irradiance: "incident flux of radiant energy per unit area."

    http://dictionary.reference.com/browse/incident

    incident (adjective): "falling or striking on something, as light rays."

    Omnidirectional radiance is also known as isotropic radiation. Anything less than omnidirectional is anisotropic.

    Because we are fully enveloped in gas (and clothes, etc), we are radiated upon from every direction (the full 4pi steradians). According to the WMAP study of the cosmic background radiation, the temperature of the universe is roughly isotropic, with anisotropies at various scales. Nearby radiating masses (atoms, molecules, etc) represent further anisotropy.

    Even if we were not enveloped in gas, we would still be enveloped in the universe's inherent radiation.

    A lightbulb (omnidirectional) and a laser (anisotropic) are radiators with vastly different beam divergences. This is related to your "equality of direction" question. If the direction of two emitted photons were to be exactly equal, then their divergence would be exactly 0... but that is a practical impossibility, as you've pointed out in your conclusions (which, by the way, all look to be correct). The statistical nature of the direction in which photons are emitted (via braking radiation, i.e. the acceleration of a charged particle) is always present to some degree. On the other hand, laser ranging of the Moon has been commonplace for many decades now. That's pretty darn precise work, in my opinion. The stuff they do today blows my mind.

    [Edited by - taby on March 25, 2008 11:27:06 PM]
    Quote:Original post by ZaiPpA
    I see that the reasons for radiometry being as it is, is because of physics - which is good since that is what we're trying to simulate. [smile]

    Good. But right from the beginning, let's just get rid of one more terminology confusion: radiometry/photometry versus global illumination. As I pointed out in my last post, radiometry and photometry are just ways to represent electromagnetic radiation (for the former) or visible light (for the latter). They don't dictate the use of eg. differential area instead of points, or surface interreflection. You could very well write a completely old-school Gouraud renderer, without solid angles, differential area, or cones of light - but still use radiometric or photometric units.

    What you are talking about is called global illumination (or GI for short). Good GI systems are usually implemented using photometry (radiometry is rare), because this is required to make them more physically correct. However, you could also write a GI solver without using radio/photometry at all.

    Quote:Original post by ZaiPpA
    Rather than points being lit up (which have no area), areas are being lit up in radiometry/physics (there is no such thing as a point in reality which can be lit up (a point is smaller than an atom! even a quark!)).

    The way you can talk about points anyway is to choose a very small (infinitesimal) surface area dA(x) centered around the point x (a "patch"). Then you have an area.

    Light is transferred between patches (or larger areas), not between points!

    Is this correct?

    Absolutely correct.

    Quote:
    So it sounds like photons always have slightly different directions (even in a laser?). So i guess it is extremely unlikely (impossible?) that two photons have the exact same direction in physics? (if so, i understand... mathematically "exact" is after all exact to 'an infinite number of decimals'...)

    Well, it depends. Theoretically, ie. in an ideal world without participating media and gravity, photons can have perfectly synchronous and parallel paths. In our world, they can't. But that's not really the point here.

    GI is a mathematical model to compute the approximate diffuse interaction between surface patches. The emphasis lies on diffuse. When light (any light, from the sun, through the sky, to an extremely collimated laser beam) hits a perfectly diffuse reflector, then this light will be equally spread over the entire hemisphere when reflected back. Such ideal diffuse reflectors are impossible in reality, as you will always have a certain amount of specularity. Better GI models support surfaces with non-isotropic reflection patterns, by the use of BRDFs (bidirectional reflectance distribution functions). But even though it is not perfect, the model is still pretty good at producing convincing images.

    Now, the whole differential area model, being a diffuse light model, cannot be used on eg. a collimated laser beam. As you pointed out, laser light hitting a surface will not radiate from the entire hemisphere. Its incidence cone will be very small (although not a line, since perfectly parallel rays cannot exist). So in these situations, the mathematical approximation breaks down. However, it still holds up on the reflected laser light from the (diffuse) surface.

    Usually, a modern GI solver will include several energy transfer models, which are applied in different passes. You will, for example, start by distributing energy from point lights (which have no surface area) by using standard raytracing or shadow mapping. You can distribute laser light in a similar way. Then you would continue with sunlight (which is more or less directional) and maybe have a separate pass for skylight (while being emitted from a virtual hemisphere, it can be highly optimized by using other lighting models).

    Then, you can add the energy from area (surface) lights, which follow the standard differential area + form factor energy transfer model (ie. the radiosity equations).

    And finally, once you have added all direct light as described above, you start the indirect lighting passes: the diffuse reflections between surface patches. This will be the computationally most expensive part (but also the most important for realism).
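
    If it helps to see the patch-to-patch transfer written out, here is a minimal sketch of the differential form-factor term used in such radiosity-style passes (my own illustration under idealized assumptions: tiny patches, full mutual visibility, no occlusion):

    ```python
    # Differential form factor between two small patches,
    # F_12 ~ cos(theta1) * cos(theta2) / (pi * r^2) * A2.
    import math

    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def length(a): return math.sqrt(dot(a, a))
    def normalize(a):
        l = length(a)
        return (a[0]/l, a[1]/l, a[2]/l)

    def form_factor(p1, n1, p2, n2, area2):
        """Approximate form factor from a small patch at p1 (normal n1)
        to a small patch at p2 (normal n2) with area area2."""
        d = sub(p2, p1)
        r = length(d)
        w = normalize(d)                                  # direction from patch 1 to patch 2
        cos1 = max(0.0, dot(n1, w))                       # cosine at the sender
        cos2 = max(0.0, dot(n2, (-w[0], -w[1], -w[2])))   # cosine at the receiver
        return cos1 * cos2 * area2 / (math.pi * r * r)

    # Two 1 cm^2 patches facing each other, 1 m apart:
    print(form_factor((0,0,0), (0,0,1), (0,0,1), (0,0,-1), 1e-4))  # ~3.18e-5
    ```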

    Quote:
    One thing i'm a bit uncertain of, though.. For the light to travel along a cone, the projected area (solid angle) of the patch dA projected onto the hemisphere, should be 'round' and thus dA should be round as well.

    It is round. Again, let's set the terminology straight:

    * something projected onto the hemisphere over a patch (another patch, or light cones) -> solid angle.
    * This solid angle projected down onto the differential area patch -> projected area.
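
    To put rough numbers on those two terms, here is a tiny sketch (my own illustration, valid only when the patch is small compared to its distance):

    ```python
    import math

    def patch_solid_angle(area, distance, cos_sender):
        """Solid angle subtended by a small patch of the given area, seen from
        'distance' away, tilted so its normal makes an angle with the line of
        sight whose cosine is cos_sender."""
        return area * cos_sender / (distance * distance)

    def projected_solid_angle(solid_angle, cos_receiver):
        """Projected solid angle: the solid angle weighted by the cosine at the
        receiving patch (this is the cosine in L dot N)."""
        return solid_angle * cos_receiver

    omega = patch_solid_angle(area=0.01, distance=2.0, cos_sender=1.0)             # 0.0025 sr
    print(omega)
    print(projected_solid_angle(omega, cos_receiver=math.cos(math.radians(60))))   # 0.00125
    ```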

    Quote:
    Hmm... Oh, wait... By "light traveling in a cone", you just mean that the photons 'spread out from each other the further distance they have traveled' right? (which follows from the fact that a solid angle represents all the directions from x to every point on the solidangle-area).

    Yes. That's why you have the inverse-square rule for light intensity over distance. The cross-section of the light cone becomes larger with distance, so the same energy is spread over a larger area the farther away you are (and, seen from the source, your patch subtends a smaller and smaller solid angle).

    Quote:
    So no matter if the solid angle is round or not, you can always choose a cone which contains the solid-angle (the intersection between the cone and the hemisphere doesn't have to be equal to the solid area), and the light will stay within this cone during its travel. Right?

    Cones are just convenient approximations. The solid angle can have any shape, depending on the patches. Some people use little pyramids instead of cones, or prisms (very good with triangular icospheres), or whatever else fits your implementation. The only important thing is that all possible solid angle entities added together over the hemisphere must always equal 2pi.
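
    As a quick numerical sanity check of that last statement (a throwaway script of mine, nothing more): chop the hemisphere into small cells in spherical coordinates, sum their solid angles d_omega = sin(theta) d_theta d_phi, and the total approaches 2*pi regardless of how fine the partition is.

    ```python
    import math

    def hemisphere_solid_angle(n_theta=256, n_phi=512):
        d_theta = (math.pi / 2.0) / n_theta
        d_phi = (2.0 * math.pi) / n_phi
        total = 0.0
        for i in range(n_theta):
            theta = (i + 0.5) * d_theta                    # cell centre in elevation
            total += math.sin(theta) * d_theta * d_phi * n_phi
        return total

    print(hemisphere_solid_angle())   # ~6.28319 = 2*pi
    ```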

    Quote:
    to conclude:
    • So when people are talking about "points" in the computer graphics articles which use radiometry, they actually don't mean points but small patches dA around the points?


    Usually, yes.

    Quote:
  • When the articles are talking about light travelling along a line, they actually mean light travelling along a cone. In other words, when they talk about "directions" they are talking about the infinite number of directions represented by the solid angle. (approximated by the direction between the two points of the two patches)


  • Well, they're talking about light 'paths'. The cones are only used to represent incoming or exitant energy at the patch. Light travel through a participating medium (such as air) is more complex than just cones (see path tracing, or Metropolis light transport).

    Quote:
  • Also, when we (and the articles) normally talk about light traveling in rays, it is wrong. It travels along a cone. So i guess rays are just an approximation to cones - which are good approximations if the cones are very 'thin'. Right?
  • Right?

    Again, when talking about 'cones', we're talking about light incident to a diffuse differential area patch, or light leaving that patch. The 'cone model' doesn't really apply to any other situation (and certainly not to specular reflectors).
    I still have problems understanding the rendering equation (I mean, I get the general concept, but I'm not clear on the different measurements involved). You said that radiance is emitted energy. But in the rendering equation, the same name is used for the incoming and outgoing energy. I understand that we can't say that light is leaving the point x in a certain direction, but rather in an (infinite) number of directions (the cone). But I don't understand the incoming light. How does that cone look? (I'm talking about the part of the integral that is the incoming radiance in the current direction.) I understand that light leaves a point under a solid angle, but I don't understand the incoming light (the geometric picture of it). Because a point is only a point, but if I let some point send light towards a point (as a cone), that light will cover an area. But in the rendering equation, there is no "area" of points, right? Sorry if my explanation is a little confused. The main thing is: when I consider the incoming light in a certain angle, should that be a cone with the apex at the point, or the base at the point?
