
Radiometry and BRDFs



Greetings. I've been reading a lot about radiometry lately, and I was wondering if any of you would be willing to look this over and see if I have this right. A lot of the materials I've been reading have explained things in a way that is a little difficult for me to understand, and so I've tried to reformulate the explanations in terms that are a little easier for me to comprehend. I'd like to know if this is a valid way of thinking about it.

 

So, radiant energy is simply the total energy carried by the photons involved: the sum, over all the photons, of each photon's frequency times Planck's constant. SI Units: Joules.

 

Radiant flux is a rate. It's the amount of radiant energy per unit of time. SI Units: Joules per Second, a.k.a. Watts. If you have a function that represents the radiant flux coming out of a light source (which may vary, like a variable star), and you integrate it with respect to time, you'll get the total radiant energy emitted over that time. 
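To put that last statement in symbols (standard notation, nothing specific to this thread): if $\Phi(t)$ is the radiant flux as a function of time, the radiant energy emitted between $t_0$ and $t_1$ is

$$Q = \int_{t_0}^{t_1} \Phi(t)\,dt.$$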

 

The next three quantities are densities. A brief aside about densities. Let's think about mass density, which is commonly talked about in multivariable calculus courses, as well as in many freshman calculus-based physics courses. You have a block of some solid substance. Let's say that the substance is heterogeneous in the sense that its density varies, spatially. One might be tempted to ask, "What is the mass of the block at the point (x,y,z)?" However, this question would be nonsensical, because a point has no volume, and therefore can have no mass. One can answer the question, "What is the mass density at this point?" and get a meaningful answer. Then, if you wanted to know the mass of some volume around that point, you could multiply the density times the volume (if the volume is some dV, small enough that the density doesn't change throughout it), or else integrate the density function over the volume that you care about.
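In symbols, the mass example works out to: if $\rho(x, y, z)$ is the mass density, then the mass contained in a region $V$ is

$$m = \iiint_V \rho(x, y, z)\,dV.$$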

 

So, in terms of radiometry, the three density quantities commonly spoken-of are irradiance, radiant intensity, and radiance.

 

Irradiance is the power density with respect to area. The SI units are W·m⁻². So, if you have some 2-dimensional surface that is receiving light from a light source, the irradiance would be a 2-dimensional function that maps the two degrees of freedom on that surface (x,y) to the density of radiant flux received at that point on the surface. Exitance is similar to irradiance, with the same exact units, but describes how much light is leaving a surface (either because the surface emits light, or because it reflects it). As with all densities, it doesn't make sense to ask, "How much power is being emitted from point (x,y) on this surface?" However, you can ask, "What is the power density at this point?", and if you want to know how much power is emitted from some area around that point, you have to multiply by some dA (or integrate, if necessary).
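In the same notation, irradiance is the area density of flux, and you recover flux by integrating it over a finite patch $A$:

$$E = \frac{d\Phi}{dA}, \qquad \Phi = \int_A E\,dA.$$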

 

Radiant intensity is power density with respect to solid angle. The SI units are W·sr⁻¹. Unlike irradiance, which gives you a density of power received at a certain point, radiant intensity tells you the power density being emitted in a certain direction. So, a point light (for example) typically emits light evenly in all directions. If the point light emits a radiant flux of 100 W, then its radiant intensity in every direction is about 8 W·sr⁻¹. If it's not an ideal point light, then its radiant intensity might vary with direction. However, if you integrate the radiant intensity over the entire sphere, you will get back the original radiant flux of 100 W. Again, it doesn't make sense to ask, "How much power is being emitted in this direction?", but you can ask, "What is the power density in this direction?", and if you want to know how much power is being emitted in a small range of directions (solid angle) around that direction, you can integrate the radiant intensity function over that solid angle.
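As a quick sanity check on that "about 8" figure: for an ideal isotropic point source the intensity is the flux spread over the full sphere of $4\pi$ steradians,

$$I = \frac{\Phi}{4\pi\,\mathrm{sr}} = \frac{100\ \mathrm{W}}{4\pi\,\mathrm{sr}} \approx 7.96\ \mathrm{W \cdot sr^{-1}},$$

and integrating that constant intensity back over the sphere returns the 100 W.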

 

Radiance is the power density with respect to both area and solid angle. The SI units are W·m⁻²·sr⁻¹. The reason you need radiance is for the following situation. Suppose you have an area light source. The exitance of this light source may vary spatially. Also, the light source may scatter light in all directions, but it might not do so evenly, so it varies directionally as well. So, if you want to know the power density of the light being emitted from point (x,y) on the surface of the area light, specifically in the direction (θ,φ), then you need a density function that takes all four variables into account. The end result is a density function that varies along (x, y, θ, φ). These four coordinates define a ray starting at (x,y) and pointing in the direction (θ,φ). Along this ray, the radiance does not change. So, it's the power (flux) density of not just a point, and not just a direction, but a ray (both a point and a direction). Just as with the other densities, it makes no sense to ask, "What is the power (flux) being emitted along this ray?" It only makes sense to ask, "What is the power density of this ray?" And since a ray has both a location and a direction, the density we care about is radiance.
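For completeness, the usual textbook definition also includes a projected-area cosine, which I glossed over above:

$$L = \frac{d^2\Phi}{dA\,\cos\theta\,d\omega},$$

i.e. flux per unit projected area per unit solid angle.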

 

The directional and point lights that we are used to using are physically-impossible lights for which it is sometimes difficult to discuss some of these quantities.

 

For a point light, it's meaningless to speak of exitance, because a point light has no area. Or perhaps it's more correct to think of the exitance function of a point light as a sort of Dirac delta function, with a value of infinity at the position of the light and zero everywhere else, but which integrates to a finite non-zero value (whatever the radiant flux is) over ℝ³. In this sense, you could calculate the radiance of some ray emanating from the point light, but I'm thinking it's more useful to just calculate the radiant intensity of the light in the direction that you care about, and be done with it.

 

For a directional light, it almost seems like an inverse situation. It's awkward to talk about radiant intensity because it would essentially be like a delta function, which is infinite in one direction, and zero everywhere else, but which integrates to some finite non-zero value, the radiant flux. Even the concept of radiant flux seems iffy, though, because how much power does a directional light emit? It's essentially constant over infinite space. It's easier to talk about the exitance of a directional light, though.

 

In any case, even with these non-realistic lights, it's easy to talk about the irradiance, intensity, and radiance of surfaces that receive light from these sources, which is what we typically care about.

 

How did I do?

Edited by CDProp


Ugh, so I forgot to ask my BRDF-related questions.

 

What I'm really trying to figure out here is how I would create a BRDF that represents a perfectly reflective surface, i.e. a surface where there is zero microgeometry, zero absorption, zero Fresnel, and 100% reflectance, such that each ray of light is reflected perfectly at the same angle as the incident angle. 

 

[Image: zrZyyNY.png — a blue mirror reflecting a ray from an orange point light toward a green pixel/eye point]

Here is a situation where the perfect mirror (blue) is reflecting light from a point light (orange) into a pixel (green segment, with the eye point being the green dot). Because we're dealing with perspective projection, which attempts to simulate a sort of lens, we only care about light coming in along the view vector. The orange ray is therefore the only ray we care about. I'm beginning to think, as I type this, that my difficulty in grasping this problem has something to do with the unrealistic nature of point lights that I mentioned earlier, and perhaps also the unrealistic nature of a perfect reflector. But I digress.

 

The problem I'm trying to solve is that I have this E_L·cos θ term, which is the irradiance at the point on the surface where the ray bounces, and this makes perfect sense to me. However, now I need to create a BRDF that will reflect all of that light in one direction, and return zero in all other directions. However, I know that this function can't return 1 in the required direction, because the units would be wrong. The E_L·cos θ term is in W·m⁻², and the BRDF needs to return something in units of sr⁻¹. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?

 

Edit: I also misspelled BDRF multiple times. =P I can't fix it in the title, I don't think.

Edit2: No I didn't.

Edited by CDProp


I'm not sure (so I'm following the thread!), but when dealing with perfectly thin rays, I think you'll end up dividing by zero steradians and calculating infinite radiance at some point. So you might have to avoid approximations (like the point lights, etc).

The pixel that you're trying to compute also has an area / subtends a solid angle from the surface's point of view. So if you've got an array of pixels on the left, which you're calculating the reflection for (without approximation), I think you'd have to use some non-zero solid angles like:

[Image: aL2t2ol.png — an array of pixels on the left, each subtending a non-zero solid angle at the reflecting surface]


Thanks, Hodgman. I have some thoughts on that idea, although I may have it wrong. I agree that the pixel will subtend a solid angle from the point of view of the surface point that I'm shading. However, I am not certain that it matters in this case. Because we are rendering a perfectly focused image, I believe that each point on our pixel "retina" can only receive light from a certain incoming direction. Here's what I mean (I apologize if a lot of this is rudimentary and/or wrong, but it helps me to go through it all).

 

If you have a "retina" of pixels, but no focusing device, then a light will hit every point on the retina and so each pixel will be brightened:

 

[Image: LWDzWzd.png — a retina with no focusing device; light from the source reaches every pixel]

If you add a focusing device, like a pinhole lens, then you block all rays except those that can make it through the aperture:

 

[Image: v4VE4Nc.png — a pinhole aperture blocking all rays except those passing through it]

So now, only one pixel sees the light, and so the light shows up as it should: as a point. We now have a focused image, albeit an inverted one. If you widen the aperture and put a lens in there, you'll catch more rays, but they'll all be focused back on that same pixel:

 

[Image: 15gIKLe.png — a wider aperture with a lens focusing the extra rays onto the same pixel]

And so I might as well return to the pinhole case, since it is simpler to diagram. I believe that having a wider aperture/lens setup adds some depth of field complications to the focus, but for all intents and purposes here, it can be said (I think) that a focusing device has the effect of making it so that each pixel (and indeed, each sub-pixel point) on the retina can only receive light from one direction:

 

[Image: GYCUk9x.png — with a focusing device, each point on the retina receives light from only one direction]

The orange ray shows what direction the pixel in question is able to "see", and any surface that intersects this ray will be in the pixel's "line of sight." Each pixel has its own such line of sight:

 

[Image: XxnK0V6.png — each pixel's individual line of sight through the pinhole]

With rasterization, we have things sort of flipped around. The aperture is behind the retina, but the effect is more or less the same. If I put the retina on the other side of the aperture, at an equal distance, I get this:

 

[Image: 16oL9IU.png — the retina placed on the other side of the aperture, as in rasterization; view vectors pass through the eye position]

Now we can see the aperture as the "eye position", the retina as the near plane, etc. The orange rays are now just view vectors, and they are the only directions we care about for each pixel. The resulting image is the same as before, except it has the added bonus of not being inverted (like what a real lens would do).

 

So with that said, here is what happens if I redraw your diagram, with 5 sub-pixel view vectors going through a single pixel:

 

[Image: WdSzccc.png — five sub-pixel view vectors through a single pixel covering the light blue patch, plus a green point light]

So, the single pixel ends up covering the entire light blue surface. You can see that view vectors form a sort of frustum, when confined to that pixel.

 

I've also added a green point light, with 5 rays indicating the range of rays that will hit that light blue patch. All 5 of those green rays will end up hitting the retina somewhere, but only one of those rays comes in co-linearly with one of the orange "valid directions". 

Edited by CDProp

I suppose this is another area where approximations are tripping me up, as there is no pinhole that will only let one ray through, nor any perfect lens that can ensure that every point on the retina receives light from just one direction. In reality, there is always a tiny solid angle around that ideal direction from which the point on the retina receives light.


However, I know that this function can't return 1 in the required direction, because the units would be wrong. The E_L·cos θ term is in W·m⁻², and the BRDF needs to return something in units of sr⁻¹. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?

 

The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where

$$\theta_o = \theta_i \quad \text{and} \quad \phi_o = \phi_i \pm \pi,$$

and the second condition, to conserve energy, will require that

$$\int_\Omega f(\omega_i, \omega_o)\,\cos\theta_o\;d\omega_o = \rho \le 1,$$

thus f must equal

$$f(\omega_i, \omega_o) = \rho\,\frac{\delta(\cos\theta_i - \cos\theta_o)\,\delta(\phi_o - (\phi_i \pm \pi))}{\cos\theta_i},$$

and the third condition (reciprocity) is easily verified.
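In practice (a rough sketch; the type and function names here are hypothetical), a renderer never evaluates those deltas numerically. It generates the mirror direction analytically, and the delta and cosine terms cancel, leaving only the reflectance rho as a throughput weight:

```cpp
// Rough sketch, not production code: hypothetical names throughout.
struct Vec3 {
    float x, y, z;
};

static Vec3 scale(const Vec3& v, float s)      { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 sub(const Vec3& a, const Vec3& b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror the outgoing (view) direction wo about the shading normal n:
// wr = 2 (n . wo) n - wo.  Both wo and n point away from the surface.
static Vec3 reflectAboutNormal(const Vec3& wo, const Vec3& n) {
    return sub(scale(n, 2.0f * dot(n, wo)), wo);
}

struct MirrorSample {
    Vec3  wi;     // the single incident direction with non-zero contribution
    float weight; // throughput weight: just rho, since the delta and cosine cancel
};

// "Sampling" the mirror BRDF: return the mirror direction directly.
// Evaluating f(wi, wo) for two arbitrary directions would be zero almost
// everywhere, which is why delta BRDFs are sampled rather than evaluated.
static MirrorSample sampleMirrorBRDF(const Vec3& wo, const Vec3& n, float rho) {
    return { reflectAboutNormal(wo, n), rho };
}
```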
 
P.S. Sorry, I tried to use LaTeX but the forum blocked me from using LaTeX images, and the equation tags wouldn't parse, so the equations above are just written out as plain text.


I suppose this is another area where approximations are tripping me up, as there is no pinhole that will only let one ray through, nor any perfect lens that can ensure that every point on the retina receives light from just one direction. In reality, there is always a tiny solid angle around that ideal direction from which the point on the retina receives light.

 

The fundamental radiation transfer equation deals with integrating out all of the differential terms to get the flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane, it will have sensors with finite area that will detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene.

 

In your diagrams you are simply point sampling a continuous signal of that flux.


Thanks, David. This is tremendously helpful; I just have a few things that I'm still confused about.

 

In the energy conservation equation, what does rho represent? Wikipedia says that this equation should be <= 1. Is rho some sort of reflection percentage?

 

In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.

 

Also, I'm somewhat curious why we store radiance values in our frame buffers (well, luminance, I guess). To my naive mind, it seems like the wrong quantity. When I think of how a film camera works, you have a photographic film that gets darker when photons hit it, thus making the final image brighter (because the film is a negative). The effect is permanent, so it accumulates over time. So, what the film seems to be recording is energy, not some flux density. I understand why calculating the radiance is necessary as an intermediate step -- we have a specific ray that we're concerned about (the view vector) and we need to know the flux density along that ray. But shouldn't we be converting that to energy by multiplying it by some dA·dω·dt, where dt is the exposure time?


But shouldn't we be converting that to energy by multiplying it by some dA·dω·dt, where dt is the exposure time?

Yeah, I think this is one of the approximations that exists in a common renderer. If you assume that dA·dω·dt is constant for every pixel, then this is basically the same as just scaling either your final rendered image (or scaling all of your source light values) by that constant number. You could also say that we're assuming that dA·dω·dt = 1.

 

In traditional renderers, the "radiance" frame buffer is directly displayed to the screen, but in HDR renderers, you could say that the tone-mapping step is responsible for transferring that radiance onto a film/sensor.
 

Many HDR renderers do multiply the final HDR frame-buffer with a constant number, often called "exposure", which you could say is dt.

The solid angle formed by your "pinhole" / aperture depends on the F-stop of your camera. Lower F-stop = larger aperture = larger dω.

Usually the shutter-speed and F-stop values are simulated via this single "exposure" variable -- so it represents dω·dt.

In some HDR tone-mappers, vignetting is simulated by multiplying the HDR frame-buffer with a spatially varying factor -- the factor is 1 in the centre of the screen, but is reduced at the edges. You could say that this "vignetting factor" is dω -- pixels at the edge of the sensor are shadowed such that, from their point of view, the aperture is smaller.

So in most HDR renderers, I think dω·dt is kind of used per pixel, but with arbitrary / fudged values, rather than physical values.

So that just leaves dA, which is constant for every pixel, so we just approximate it away by pretending it equals 1 -- to be physically correct, we could pretend our sensor has a 'gain' or amplification factor of 1/dA, which would cancel it out.
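As a rough sketch of what that ends up looking like in code (all names hypothetical, and the tone-map curve is just a placeholder): the per-pixel HDR radiance is scaled by a single "exposure" constant standing in for dω·dt, optionally by a vignette factor, and then tone-mapped.

```cpp
// Rough sketch, hypothetical names: exposure stands in for d-omega * dt.
struct RGB { float r, g, b; };

static RGB applyExposureAndToneMap(RGB hdrRadiance,
                                   float exposure,       // stands in for d-omega * dt
                                   float vignetteFactor) // 1 at the screen centre, < 1 at the edges
{
    const float scale = exposure * vignetteFactor;
    const RGB scaled { hdrRadiance.r * scale, hdrRadiance.g * scale, hdrRadiance.b * scale };

    // Placeholder tone-map operator (simple Reinhard curve); a real renderer
    // would use whatever operator it actually implements.
    auto reinhard = [](float c) { return c / (1.0f + c); };
    return { reinhard(scaled.r), reinhard(scaled.g), reinhard(scaled.b) };
}
```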

Edited by Hodgman


Oh right, that makes sense. I guess with most Tone Mapping operators that have some sort of exposure control, it's really only relative values that make a difference anyway. 

Edited by CDProp


Sorry for the late reply; I've been busy.

 

In the energy conservation equation, what does rho represent? Wikipedia says that this equation should be <= 1. Is rho some sort of reflection percentage?

 

$$\rho = \int_\Omega f(\omega_i, \omega_o)\,\cos\theta_o\;d\omega_o \le 1$$

 

The above should be a little more clear: ρ is the fraction of the incoming energy that gets reflected. We want to develop a BRDF such that the light coming in from one direction, L_i, is what comes back out in the direction of reflection (scaled by the reflectance ρ).

 

In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.

 

Yeah, sorry; you want to set up the Dirac delta functions so that the angle away from the normal axis is the same, and the azimuthal angle is the same but rotated 180 degrees.

 

When I think of how a film camera works, you have a photographic film that gets darker when photons hit it, thus making the final image brighter (because the film is a negative)

 

Right, as you point out, you will want to integrate over the visible hemisphere (aperture), sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure will give you phenomena such as motion blur and depth of field. If instead of integrating RGB you additionally integrate over wavelengths, then you can get other phenomena such as chromatic aberration.

 

-= Dave

 


Let's see if I got this right:

The messy thing about the perfect pinhole camera model we commonly use in realtime graphics is that, in order to compensate for the fact that only an infinitesimal amount of flux can pass through the pinhole, we have to assume that the sensors are infinitely sensitive, which for instance gives us infinitely high measurement results in the case of perfect specular reflection. This also amounts to sampling the outgoing radiance density in a single direction from a differential area and directly assuming it to be radiance, instead of integrating the outgoing radiance density over the solid angle that a lens or a sensor area (seen through a pinhole of finite size) would project onto the hemisphere of the observed differential area. (In the case of a perfect pinhole this solid angle is infinitesimal, resulting in an infinitesimal amount of flux reaching each sensor except in the case of a delta function, which leads back to the first statement.)

 

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

 

Is that how it is?


Thanks for clarifying that for me, Dave. I appreciate you taking the time to reply.

 

For what my input is worth, Bummel, that seems to be pretty much the case. In the real world, instantaneous densities make sense, but only as part of a continuous density function. These approximations (point lights that have no area, directional lights that have no solid angle, perfectly reflective surfaces, perfectly focused pinhole lenses, etc.) give rise to density functions that have some impulse in one location and/or direction, but are zero everywhere else. In reality you always have some finite area (however small) and some solid angle (however small), and your surfaces are never perfectly reflective (even if the surface imperfections are much smaller than the wavelengths you care about, you still have diffraction and absorption), so it is more sensible to use continuous BRDFs that model these features than it is to try to use BRDFs that are true to our unrealistic approximations.

 

The only part of your reply that I'm unsure about is the part about the sensor that is infinitely sensitive. In the diagrams and equations above, the sensitivity of the sensor hasn't yet entered the picture. The BRDF with the delta functions does have an infinite value in the direction (θ, φ+π), and so does the radiance along that direction. However, if you were to look at where that ray intersects the sensor and integrate over the hemisphere at that point, you'd get a finite irradiance. If you then integrated all of the finite irradiances over the surface area of the pixel, you'd end up with a finite flux. Integrating over time would then get you a finite amount of energy.

Edited by CDProp


In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

 
They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully, no infinities creep into our answer; otherwise we would get flux of infinite value, and that's not very useful.
 
Our end goal is to obtain  plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations. We know that with our simplified models we get results that are very close to the ground truth.
 
An example is to look at the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to say that we can approximate the result as two point sources and ask how much energy is transferred between two point sources. These approximations will be valid and come to the right solution, for our purposes, if the two points are far enough away and the surfaces are small enough.
Since the point-to-point radiation transfer equation gave us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity, and solid angle, so it's mathematically sound with its feet firmly planted in the ground of physical plausibility.
 
In the same vein, it's OK to use a pinhole model, and if you do it correctly and you make some assumptions about your scene, light, and camera, then the result should be very similar to what you would get if you had an aperture and integrated over sensor area, over time, and over all wavelengths.
 
For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.
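A bare-bones sketch of that idea (hypothetical names; traceRay here is just a stand-in for whatever the renderer's ray evaluation would be): average many rays per pixel, jittered over the pixel's area on the film and over a small but finite aperture.

```cpp
// Rough sketch with hypothetical names, not a complete raytracer.
#include <random>

struct Vec3 { float x, y, z; };
struct RGB  { float r, g, b; };

// Stand-in for the renderer's actual ray evaluation: returns the radiance
// arriving along the ray from filmSample through apertureSample.
static RGB traceRay(const Vec3& apertureSample, const Vec3& filmSample) {
    (void)apertureSample; (void)filmSample;
    return { 0.5f, 0.5f, 0.5f }; // placeholder constant radiance
}

static RGB renderPixel(int px, int py, int numSamples, float apertureRadius, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    RGB sum { 0.0f, 0.0f, 0.0f };
    for (int s = 0; s < numSamples; ++s) {
        // Jitter within the pixel's footprint on the film plane.
        const Vec3 filmSample { px + uni(rng), py + uni(rng), 0.0f };
        // Jitter over a small square aperture (a stand-in for a proper disk sample).
        const Vec3 apertureSample { (uni(rng) - 0.5f) * 2.0f * apertureRadius,
                                    (uni(rng) - 0.5f) * 2.0f * apertureRadius, 0.0f };
        const RGB L = traceRay(apertureSample, filmSample);
        sum.r += L.r; sum.g += L.g; sum.b += L.b;
    }
    // Monte Carlo estimate: the average of the samples. Constant factors such
    // as pixel area and aperture area can be folded into an overall exposure.
    return { sum.r / numSamples, sum.g / numSamples, sum.b / numSamples };
}
```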
 
Hope that makes sense.
 
-= Dave

The BRDF with the delta functions does have an infinite value in the direction (θ, φ+π), and so does the radiance along that direction.

Hmmm, from my understanding it's actually the differential radiance which is infinite in this direction, which implies that we have to integrate over a set of directions to arrive at radiance itself in the first place. In the case of a pinhole camera this set of directions would be infinitesimal. As a result of this we get a finite amount of outgoing radiance for a delta function and an infinitesimal amount for the usual cases where the differential radiance is finite along all directions. That was the point where I thought that we implicitly assume infinitely high sensor sensitivity to compensate for that, which again would give us infinite measurement results for perfect reflections, which are modeled using delta functions in the BRDF, since these are the only cases which give us finite/non-infinitesimal radiance arriving at the sensor in the first place. :\

Edited by Bummel



Hmmm, from my understanding it's actually the differential radiance which is infinite in this direction, which implies that we have to integrate over a set of directions to arrive at radiance itself in the first place.

 

I think this is where we disagree. The function we're dealing with is:

 

L = f(v, l) · E_L · cos θ

 

The f(v, l) function is the BRDF and the E_L·cos θ term is the irradiance. The BRDF has units of sr⁻¹, and so L will be in units of radiance, not differential radiance. Radiance is already a differential of flux with respect to both area and solid angle. If you integrate radiance over a set of directions, you'll end up with irradiance. I agree that this does present problems when you try to integrate back to get the total energy that has hit the pixel. With the delta function, you'll get some finite energy, but with a non-delta function you'll get zero. However, in any real camera, a given point on the "retina" would receive light from a tiny solid angle of directions, and so it's not a problem.
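Just to spell out the unit check:

$$[f] = \mathrm{sr^{-1}}, \quad [E_L \cos\theta] = \mathrm{W\,m^{-2}} \;\Rightarrow\; [L] = \mathrm{W\,m^{-2}\,sr^{-1}},$$

which is exactly the unit of radiance.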


I think another way to put it is this. If you have a velocity function (say, v(t) = at), then v(1) would give you the velocity at t=1. You wouldn't call this a differential velocity. You might call v(1)dt a differential displacement, though. Since dt is infinitesimal, the displacement is infinitesimal as well, and so you only get a displacement that makes sense if you integrate over a range of time.

 

By that same token, the lighting equation gives you a radiance value. You wouldn't call it a differential radiance. You might call L(A, ω, t)·dA·dω·dt a differential energy, and I think it would be correct to say that for any particular choice of position and direction, this would give you an infinitesimal amount of energy. You'd have to integrate over a finite area and solid angle and time in order to get a finite energy that makes sense.
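Written out (and including the projected-area cosine that the shorthand above leaves out), the energy collected by a sensor patch would be

$$Q = \int_{t}\int_{A}\int_{\Omega} L(x, \omega, t)\,\cos\theta\;d\omega\,dA\,dt.$$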

 

In the case where L is some delta function, this would give you a finite energy.

In the case where L is some piece-wise function that is finite in one direction, and zero everywhere else, you'd get an infinitesimal energy.

In the case where L is some continuous function, you'd get a finite energy.

 

The second case is probably what you have in mind when you talk about a pinhole camera looking at a typical, non-perfectly-reflective surface. In that case, I would agree that it doesn't make sense for a sensor of finite sensitivity to register that it has picked up any energy. However, since these so-called ideal pinholes, point lights, reflective surfaces, etc., don't exist, the third case is more realistic anyway.


If you integrate radiance over a set of directions, you'll end up with irradiance.

You are right. Although in this particular situation it's probably better to speak of radiant exitance instead of irradiance. :)

I have to overthink the whole thing again. Thank you guys for your efforts.
