Radiometry and BRDFs

Sorry for the late reply; I've been busy.

In the energy conservation equation, what does rho represent? Wikipedia says that this should be <= 1. Is rho some sort of reflection percentage?
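
For reference, the usual statement of BRDF energy conservation looks like this (standard notation rather than the article's exact equation):

\[
\rho(\mathbf{l}) \;=\; \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \cos\theta_v \; d\omega_v \;\le\; 1
\]

Read that way, rho(l) is the fraction of the energy arriving from direction l that gets reflected anywhere into the hemisphere, so it behaves like a reflection percentage and has to stay at or below 1.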

The above should be a little clearer. We want to develop a BRDF where the intensity of the light coming in from one direction, L_i, is equal to the light going out in the direction of reflection.

In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.

Yeah, sorry: you want to set up the Dirac delta functions so that the angle away from the normal axis is the same, and the azimuthal angle is the same but rotated 180 degrees.
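
Written out, a perfect-mirror BRDF along those lines is usually given as (standard form in spherical coordinates, not quoted from the article):

\[
f(\theta_i, \phi_i; \theta_o, \phi_o) \;=\; \frac{\delta(\cos\theta_o - \cos\theta_i)\,\delta\!\left(\phi_o - (\phi_i + \pi)\right)}{\cos\theta_i}
\]

The second delta pins the outgoing azimuth to the incoming azimuth rotated by 180 degrees. Since azimuth is periodic in 2*pi, phi_i + pi and phi_i - pi pick out the same direction, which is why the sign of pi doesn't matter.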

Then I think of how a film camera works: you have photographic film that gets darker when photons hit it, which makes the final image brighter (because the film is a negative).

Right, as you point out, you will want to integrate over the visible hemisphere (aperture), the sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure gives you phenomena such as motion blur and depth of field. If instead of integrating RGB you additionally integrate over wavelengths, then you can get other phenomena such as chromatic aberration.
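
Spelled out, that per-pixel measurement has the following shape (the usual measurement equation, with symbols of my choosing rather than Dave's):

\[
Q_{\text{pixel}} \;=\; \int_{t_0}^{t_1} \int_{\Lambda} \int_{A_{\text{pixel}}} \int_{\Omega_{\text{lens}}} L(\mathbf{x}, \omega, \lambda, t) \cos\theta \; d\omega \, dA \, d\lambda \, dt
\]

Collapsing the lens solid angle to a single direction removes depth of field, collapsing the shutter interval removes motion blur, and collapsing the wavelength integral to three RGB samples removes spectral effects such as chromatic aberration.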

-= Dave

Graphics Programmer - Ready At Dawn Studios

Let's see if I got this right:

The messy thing about the perfect pinhole camera model we commonly use in realtime graphics is that, in order to compensate for the fact that only an infinitesimal amount of flux can pass through the pinhole, we have to assume that the sensors are infinitely sensitive, which for instance gives us infinitely high measurement results in the case of perfect specular reflection. This is equivalent to sampling the outgoing radiance density in a single direction from a differential area and directly treating it as radiance, instead of integrating the outgoing radiance density over the solid angle that a lens or a sensor area (seen through a pinhole of finite size) would project onto the hemisphere of the observed differential area. In the case of a perfect pinhole this solid angle is infinitesimal, so an infinitesimal amount of flux reaches each sensor except in the case of a delta function, which leads back to the first statement.

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with even more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

Is that how it is?

Thanks for clarifying that for me, Dave. I appreciate you taking the time to reply.

For what my input is worth, Bummel, that seems to be pretty much the case. In the real world, instantaneous densities make sense, but only as part of a continuous density function. These approximations (point lights that have no area, directional lights that have no solid angle, perfectly reflective surfaces, a perfectly focused pinhole lens, etc.) give rise to density functions that have some impulse in one location and/or direction, but are zero everywhere else. In reality you always have some finite area (however small) and some solid angle (however small), and your surfaces are never perfectly reflective (even if the surface imperfections are much smaller than the wavelengths you care about, you still have diffraction and absorption), so it is more sensible to use continuous BRDFs that model these features than to use BRDFs that are true to our unrealistic approximations.

The only part of your reply that I'm unsure about is the part about the sensor that is infinitely sensitive. In the diagrams and equations above, the sensitivity of the sensor hasn't yet entered the picture. The BRDF with the delta functions does have an infinite value in the direction (θ, φ+π), and so does the radiance along that direction. However, if you were to look at where that ray intersects the sensor and integrate over the hemisphere at that point, you'd get a finite irradiance. If you then integrated all of the finite irradiances over the surface area of the pixel, you'd end up with a finite flux. Integrating over time would then get you a finite amount of energy.
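
For the delta case that works because integrating across the delta removes the infinity. With a mirror BRDF of the form given above, the reflection integral collapses to a single finite term (standard derivation, not quoted from the thread):

\[
L_o(\theta_o, \phi_o) \;=\; \int_{\Omega} \frac{\delta(\cos\theta_i - \cos\theta_o)\,\delta\!\left(\phi_i - (\phi_o + \pi)\right)}{\cos\theta_i}\; L_i(\theta_i, \phi_i) \cos\theta_i \; d\omega_i \;=\; L_i(\theta_o, \phi_o + \pi)
\]

so the outgoing radiance is just the incoming radiance from the mirrored direction, and everything downstream of it stays finite.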

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with even more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully no infinities creep into our answer, otherwise we would get fluxes of infinite value, and that's not very useful.
Our end goal is to obtain plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations, and we know that with those simplified models we still get results that are very close to the ground truth.
An example is to look at the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires a double integral over both surfaces, which is computationally expensive. Sometimes it's good enough to approximate the two surfaces as point sources and ask how much energy is transferred between two points. That approximation is valid, and comes to the right solution for our purposes, if the two points are far enough apart and the surfaces are small enough.
Since the point-to-point radiation transfer equation gives us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity and solid angle, so it's mathematically sound with its feet firmly planted in the ground of physical plausibility.
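For concreteness, the two forms being compared look roughly like this (textbook radiometry, not Dave's exact notation):
\[
\Phi_{1 \to 2} \;=\; \int_{A_1}\!\int_{A_2} L \, \frac{\cos\theta_1 \cos\theta_2}{r^2} \; dA_2 \, dA_1
\;\;\approx\;\;
L \, A_1 A_2 \, \frac{\cos\theta_1 \cos\theta_2}{r^2}
\]
The point-to-point form on the right drops the double integral by assuming the radiance, the two angles and the distance r are effectively constant across both patches, which is exactly the "far enough apart and small enough" condition.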
In the same vein, it's OK to use a pinhole model, and if you do it correctly and make some assumptions about your scene, lights and camera, then the result should be very similar to what you would get if you had an aperture and integrated over sensor area, time and all wavelengths.
For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel, with a very small aperture and one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.
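If you want to see that convergence in miniature, here is a small Python sketch of my own (not code from the thread; the radiance, radius and distance are arbitrary numbers) that Monte Carlo integrates the irradiance a surface point receives from a small, distant spherical light and compares it against the point-light approximation:

# Minimal sketch: irradiance at a shading point from a small spherical area
# light, estimated by Monte Carlo, versus the point-light approximation.
# The scene numbers (radiance, radius, distance) are arbitrary.
import math
import random

L_e = 5.0      # radiance of the uniformly emitting sphere light
R   = 0.05     # light radius (small compared to the distance)
d   = 10.0     # distance from the shading point to the light centre
N   = 200000   # number of Monte Carlo samples

# Shading point at the origin with normal (0, 0, 1); the light centre sits
# straight up the normal at (0, 0, d), so the cone of directions towards the
# light is centred on the normal.
cos_theta_max = math.sqrt(1.0 - (R / d) ** 2)   # cosine of the cone half-angle
solid_angle   = 2.0 * math.pi * (1.0 - cos_theta_max)

total = 0.0
for _ in range(N):
    # Uniform sampling inside the cone makes cos(theta) uniform on
    # [cos_theta_max, 1]. The azimuth would also be sampled for a full
    # direction, but only the z component matters for the cosine term here.
    cos_t = 1.0 - random.random() * (1.0 - cos_theta_max)
    total += L_e * cos_t

# Estimator: (1/N) * sum(L * cos(theta) / pdf), with pdf = 1 / solid_angle.
E_area = solid_angle * total / N

# Point-light approximation: a Lambertian sphere of radius R and radiance L
# has intensity I = L * pi * R^2, and the light sits on the normal, so
# E = I / d^2.
E_point = L_e * math.pi * R * R / (d * d)

print("area light, Monte Carlo estimate:", E_area)
print("point light approximation:      ", E_point)

With the light small and far away the two printed numbers agree to within the Monte Carlo noise, which is the point: under those assumptions the simplified point-light model reproduces the more honest integral.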
Hope that makes sense.
-= Dave
Graphics Programmer - Ready At Dawn Studios
The BRDF with the delta functions does have an infinite value in the direction (θ, φ+π), and so does the radiance along that direction.

Hmmm, from my understanding it's actually the differential radiance which is infinite in this direction, which implies that we have to integrate over a set of directions to arrive at radiance itself in the first place. In the case of a pinhole camera this set of directions would be infinitesimal. As a result we get a finite amount of outgoing radiance for a delta function and an infinitesimal amount for the usual cases where the differential radiance is finite along all directions. That was the point where I thought we implicitly assume infinitely high sensor sensitivity to compensate for that, which again would give us infinite measurement results for perfect reflections modeled with delta functions in the BRDF, since those are the only cases that give us finite/non-infinitesimal radiance arriving at the sensor in the first place. :\


Hmmm, from my understanding it's actually the differential radiance which is infinite in this direction, which implies that we have to integrate over a set of directions to arrive at radiance itself in the first place.

I think this is where we disagree. The function we're dealing with is:

\[ L = f(\mathbf{v}, \mathbf{l}) \, E_L \cos\theta_i \]

The f(v, l) function is the BRDF and the E_L cos θ_i term is the irradiance. The BRDF has units of sr^-1, and so L will be in units of radiance, not differential radiance. Radiance is already a differential, of flux with respect to both area and solid angle. If you integrate radiance over a set of directions, you'll end up with irradiance. I agree that this does present problems when you try to integrate back to get the total energy that has hit the pixel. With the delta function you'll get some finite energy, but with a non-delta function you'll get zero. However, in any real camera, a given point on the "retina" receives light from a tiny solid angle of directions, and so it's not a problem.
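
A quick unit check makes the same point (standard SI radiometric units, nothing from the post itself):

\[
[f] = \mathrm{sr}^{-1}, \qquad [E_L \cos\theta_i] = \mathrm{W\,m^{-2}}, \qquad [L] = \mathrm{W\,m^{-2}\,sr^{-1}}
\]

W m^-2 sr^-1 is exactly the unit of radiance, so the lighting equation hands you radiance directly; there is no leftover per-steradian differential waiting to be integrated away.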

I think another way to put it is this. If you have a velocity function (say, v(t) = at), then v(1) would give you the velocity at t=1. You wouldn't call this a differential velocity. You might call v(1)dt a differential displacement, though. Since dt is infinitesimal, the displacement is infinitesimal as well, and so you only get a displacement that makes sense if you integrate over a range of time.

By that same token, the lighting equation gives you a radiance value. You wouldn't call it a differential radiance. You might call L(A, ω, t) dA dω dt a differential energy, and I think it would be correct to say that for any particular choice of position and direction, this would give you an infinitesimal amount of energy. You'd have to integrate over a finite area and solid angle and time in order to get a finite energy that makes sense.

In the case where L is some delta function, this would give you a finite energy.

In the case where L is some piece-wise function that is finite in one direction, and zero everywhere else, you'd get an infinitesimal energy.

In the case where L is some continuous function, you'd get a finite energy.

The second case is probably what you have in mind when you talk about a pinhole camera looking at a typical, non-perfectly-reflective surface. In that case, I would agree that it doesn't make sense for a sensor of finite sensitivity to register that it has picked up any energy. However, since these so-called ideal pinholes, point lights, reflective surfaces, etc. don't exist, the third case is more realistic anyway.

If you integrate radiance over a set of directions, you'll end up with irradiance.

You are right, although in this particular situation it's probably better to speak of radiant exitance instead of irradiance. :)

I have to think the whole thing over again. Thank you guys for your efforts.
