Quote:Original post by ZaiPpA
I see that the reasons for radiometry being as it is, is because of physics - which is good since that is what we're trying to simulate. [smile]
Good. But right from the beginning, let's clear up one more terminology confusion: radiometry/photometry versus global illumination. As I pointed out in my last post, radiometry and photometry are just ways to represent electromagnetic radiation (the former) or visible light (the latter). They don't dictate the use of, e.g., differential areas instead of points, or surface interreflection. You could very well write a completely old-school Gouraud renderer, without solid angles, differential area, or cones of light, but still use radiometric or photometric units.
What you are talking about is called
global illumination (or GI for short). Good GI systems are usually implemented using photometry (radiometry is rare), because this is required to make them more physically correct. However, you could also write a GI solver without using radiometry or photometry at all.
Quote:Original post by ZaiPpA
Rather than points being lit up (which have no area), areas are being lit up in radiometry/physics (there is no such thing as a point in reality which can be lit up; a point is smaller than an atom, even a quark!).
The way you can talk about points anyway, is to choose a very small (infinitesimal) surface-area dA(x) centered around the point x. (a "patch"). Then you have an area.
Light is transferred between patches (or larger areas), not between points!
Is this correct?
Absolutely correct.
Quote:
So it sounds like photons always have slightly different directions (even in a laser?). So I guess it is extremely unlikely (impossible?) that two photons have the exact same direction in physics? (if so, I understand... mathematically "exact" is after all exact to 'an infinite number of decimals'...)
Well, it depends. Theoretically, i.e. in an ideal world without participating media and gravity, photons can have perfectly synchronous and parallel paths. In our world, they can't. But that's not really the point here.
GI is a mathematical model to compute the approximate
diffuse interaction between surface patches. The emphasis lies on diffuse. When light (any light, from the sun, over the sky, to an extremely collimated laser beam) hits a perfectly diffuse reflector, this light will be spread equally over the entire hemisphere when reflected back. Such ideal diffuse reflectors are impossible in reality, as you will always have a certain amount of specularity. Better GI models support surfaces with anisotropic reflection patterns through the use of BRDFs (bidirectional reflectance distribution functions). But even though not perfect, the model is still pretty good at producing convincing images.
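To make the "perfectly diffuse reflector" concrete, here is a minimal sketch (my own throwaway code, not from any particular renderer): an ideal diffuse reflector has a constant Lambertian BRDF of albedo/pi, where the 1/pi factor keeps the patch from reflecting more energy than it receives.

```python
import math

def lambertian_brdf(albedo):
    """Ideal diffuse (Lambertian) BRDF: a constant albedo / pi,
    independent of incoming and outgoing directions. The 1/pi factor
    keeps the patch energy-conserving, since the cosine-weighted
    integral over the hemisphere equals pi."""
    return albedo / math.pi

def reflected_radiance(albedo, irradiance):
    """A Lambertian patch reflects the same radiance in every direction
    of the hemisphere: L_out = (albedo / pi) * E."""
    return lambertian_brdf(albedo) * irradiance
```

Note that `reflected_radiance` takes no direction argument at all: that is exactly the "equally spread over the entire hemisphere" property.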
Now, the whole differential area model, being a diffuse light model,
cannot be used on, e.g., a collimated laser beam. As you pointed out, laser light hitting a surface will not radiate from the entire hemisphere. Its incidence cone will be very small (although not a line, since perfectly parallel rays cannot exist). So in these situations, the mathematical approximation breaks down. However, it still holds up on the
reflected laser light from the (diffuse) surface.
Usually, a modern GI solver will include several energy transfer models, which are applied in different passes. You will, for example, start by distributing energy from point lights (which have no surface area) by using standard raytracing or shadow mapping. You can distribute laser light in a similar way. Then you would continue with sunlight (which is more or less directional) and maybe have a separate pass for skylight (while being emitted from a virtual hemisphere, it can be highly optimized by using other lighting models).
Then, you can add the energy from area (surface) lights, which follow the standard differential area + form factor energy transfer model (ie. the radiosity equations).
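That differential area + form factor term can be written out in a few lines. A hedged sketch (the function name and argument layout are my own, not from any particular solver): the differential form factor from patch 1 to patch 2 is dF = cos(theta1) * cos(theta2) / (pi * r^2) * dA2.

```python
import math

def diff_form_factor(p1, n1, p2, n2, dA2):
    """Differential form factor from patch 1 to patch 2:
        dF = cos(theta1) * cos(theta2) / (pi * r^2) * dA2
    where theta1/theta2 are the angles between each patch normal and
    the line connecting the two patches."""
    d = [b - a for a, b in zip(p1, p2)]       # vector from patch 1 to patch 2
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    w = [c / r for c in d]                    # unit direction, patch 1 -> patch 2
    cos1 = sum(a * b for a, b in zip(n1, w))
    cos2 = -sum(a * b for a, b in zip(n2, w))
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0                            # patches face away from each other
    return cos1 * cos2 / (math.pi * r2) * dA2
```

For two parallel patches facing each other head-on, both cosines are 1 and the form factor reduces to dA2 / (pi * r^2).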
And finally, once you have added all direct light as described above, you start the indirect lighting passes: the diffuse reflections between surface patches. This will be the computationally most expensive part (but also the most important for realism).
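To give a feel for what such an indirect pass does, here is a minimal, hypothetical Jacobi-style gathering loop for the radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j (all names invented for illustration; a real solver would use shooting, hierarchies, or other accelerations):

```python
def radiosity_gather(emission, reflectance, F, n_passes=50):
    """Iteratively gather indirect light: each pass redistributes the
    current radiosity B between all patches via form factors F[i][j]."""
    n = len(emission)
    B = emission[:]                  # start with direct emission only
    for _ in range(n_passes):
        B = [emission[i] + reflectance[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B
```

With two 50%-reflective patches facing each other (mutual form factor 0.5) and only the first one emitting, the iteration converges to B = (16/15, 4/15): the second patch is lit purely by interreflection.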
Quote:
One thing i'm a bit uncertain of, though.. For the light to travel along a cone, the projected area (solid angle) of the patch dA projected onto the hemisphere, should be 'round' and thus dA should be round as well.
It is round. Again, let's set the terminology straight:
* Something projected onto the hemisphere over a patch (another patch, or a light cone) -> solid angle.
* That solid angle projected down onto the differential area patch -> projected area.
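Both projections are one-liners. A sketch with invented function names, following the terminology above: a small patch of area dA at distance r subtends the solid angle d_omega = dA * cos(theta) / r^2, and projecting that down onto the receiver multiplies by the receiver's cosine.

```python
def solid_angle_of_patch(dA, cos_theta_patch, r):
    """Solid angle subtended at a point by a small patch of area dA
    at distance r, tilted by theta_patch from the line of sight:
        d_omega = dA * cos(theta_patch) / r^2"""
    return dA * cos_theta_patch / (r * r)

def projected_area(d_omega, cos_theta_receiver):
    """Project a solid angle down onto the receiving differential
    area patch: dA_proj = d_omega * cos(theta_receiver)."""
    return d_omega * cos_theta_receiver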
Quote:
Hmm... Oh, wait... By "light traveling in a cone", you just mean that the photons 'spread out from each other the further distance they have traveled' right? (which follows from the fact that a solid angle represents all the directions from x to every point on the solidangle-area).
Yes. That's why you have the inverse square rule of light intensity over distance. The cross-section of the light cone becomes larger with distance, so the same energy is spread over a larger area the farther away you are.
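The inverse square rule itself is one line (a sketch; the intensity unit is left abstract):

```python
def irradiance_at_distance(intensity, r):
    """Inverse square law: the same emitted energy is spread over a
    sphere of area 4*pi*r^2, so the received power per unit area
    falls off as 1/r^2."""
    return intensity / (r * r)
```

Doubling the distance quarters the received energy per unit area.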
Quote:
So no matter if the solid angle is round or not, you can always choose a cone which contains the solid-angle (the intersection between the cone and the hemisphere doesn't have to be equal to the solid area), and the light will stay within this cone during its travel. Right?
Cones are just convenient approximations. The solid angle can have any shape, depending on the patches. Some people use little pyramids instead of cones, or prisms (very good with triangulated icospheres), or whatever else fits your implementation. The only important thing is that all solid angle entities added together over the hemisphere must always equal 2*pi steradians.
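That 2*pi total is easy to verify numerically by summing d_omega = sin(theta) * dtheta * dphi over the hemisphere (my own throwaway midpoint-rule code, just to show where the number comes from):

```python
import math

def hemisphere_solid_angle(n_theta=1000):
    """Sum d_omega = sin(theta) * dtheta * dphi over the hemisphere;
    the total must come out to 2*pi steradians."""
    dtheta = (math.pi / 2.0) / n_theta
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta                         # midpoint of the theta band
        total += math.sin(theta) * dtheta * 2.0 * math.pi  # whole phi ring at once
    return total
```

However you carve the hemisphere up (cones, pyramids, prisms), the pieces must sum to this same total.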
Quote:
to conclude:- So when people are talking about "points" in the computer graphics articles which use radiometry, they actually don't mean points but small patches dA around the points?
Usually, yes.
Quote:
When the articles are talking about light travelling along a line, they actually mean light travelling along a cone. In other words, when they talk about "directions" they are talking about the infinite number of directions represented by the solid angle. (approximated by the direction between the two points of the two patches)
Well, they're talking about light 'paths'. The cones are only used to represent incoming or exitant energy at the patch. Light travel through a participating medium (such as air) is more complex than just cones (see
path tracing, or
Metropolis light transport).
Quote:
Also, when we (and the articles) normally talk about light traveling in rays, it is wrong. It travels along a cone. So i guess rays are just an approximation to cones - which are good approximations if the cones are very 'thin'. Right?
Again, when talking about 'cones', we're talking about light incident to a
diffuse differential area patch, or light leaving that patch. The 'cone model' doesn't really apply to any other situation (and certainly not to specular reflectors).