Reflecting colours

When dealing with light, is it true that a surface will only reflect the colour that the light emits? E.g. if you have a red light but a violet surface, then the amount of light reflected from the surface is how much blue is in the violet, and all colours other than blue do not get reflected.

When working with RGB(A) and blending two colours (e.g. a light's diffuse colour with the material's diffuse colour), it is generally accepted that this can be done by multiplying the colours channel by channel. For example:

Red = light.R * diffuse.R;
Green = light.G * diffuse.G;
Blue = light.B * diffuse.B;

This works fine if the light source is red, green, blue or white. For example, if a red light is shining on a yellow surface, all channels other than red are multiplied by 0, so only the intensity of the red within the yellow is reflected; the light effectively filters out all colours but red. But if the situation is reversed and the light is of a colour that must be represented by multiple colour channels (e.g. yellow), then both the red and green components will be reflected, when I thought all colours except yellow were supposed to be filtered out (thus only reflecting the magnitude of yellow in the surface colour).

I'll try to explain the formula I'm thinking of mathematically. Let's map the colour (the R and G channels in our example of yellow light) onto a spatial vector (XY). The colour red can be represented on this scale as {1, 0} (X being red, Y being green), and yellow can be represented as {0.7, 0.7}. To find the amount of yellowness in the red, we rotate both vectors 45° clockwise so that yellow is now {1, 0} and red is {0.7, -0.7}; X now represents an imaginary yellow channel. When we multiply these two vectors together, the output is {0.7, 0.0}, so the red colour has a magnitude of 0.7 in the direction of the yellow channel. Multiplying this magnitude by the RGB of the yellow colour (0.7 * {0.7, 0.7, 0}) gives the final output colour of {0.49, 0.49, 0}.

The problem I see with this method of calculating light is that only one colour (in various shades) is actually reflected, so a white light would reflect everything in greyscale. I'm thinking the method I'm suggesting won't work with RGB colour; instead the colour could be converted to HSL (but representing the hue as a normalised RGB colour), with the saturation representing how much of the other colour channels will be filtered out. Thoughts?
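To make the comparison concrete, here is a minimal C++ sketch of both models, restricted to the R and G channels as in the example above. It is purely illustrative; the struct and function names are mine, not from any engine:

#include <cstdio>
#include <cmath>

struct RG { float r, g; };

// The generally accepted model: filter each channel independently.
RG componentwise(RG light, RG surface) {
    return { light.r * surface.r, light.g * surface.g };
}

// The rotation idea from the post: project the surface colour onto the
// light's hue axis (the dot product gives the rotated X coordinate),
// then reflect only that magnitude of the light's own colour.
RG projectOntoHue(RG light, RG surface) {
    float len = std::sqrt(light.r * light.r + light.g * light.g);
    float ur = light.r / len, ug = light.g / len;   // unit hue axis
    float mag = surface.r * ur + surface.g * ug;    // "yellowness" of the surface
    return { mag * light.r, mag * light.g };
}

int main() {
    RG yellowLight = { 0.7f, 0.7f };
    RG redSurface  = { 1.0f, 0.0f };
    RG a = componentwise(yellowLight, redSurface);
    RG b = projectOntoHue(yellowLight, redSurface);
    std::printf("componentwise: {%.2f, %.2f}\n", a.r, a.g); // {0.70, 0.00}
    std::printf("projection:    {%.2f, %.2f}\n", b.r, b.g); // {0.49, 0.49}
}

Note how the projection variant can only ever return a multiple of the light's own colour, which is exactly the everything-in-greyscale-under-white-light problem described above.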
I'm not an expert, but I think you are right: RGB is often used for lighting calculations, but HSL is preferred. HSL is good enough to represent colors, but it is not perfect for operating on them (i.e. combining them), because real spectra can have shapes that, when combined with each other, give different results than HSL would predict. High-quality renderers often use other representations, like a vector of frequencies, which provide better results.
If red, blue and green were single frequencies, the "generally accepted" model would be correct. The problem is that there is a continuum of frequencies, each of which has contributions to R, G and B (some of which may be effectively 0), and the three numbers you end up with are only integrated values which do not contain all the information.

For instance, you could pick two frequencies that are perceived as red only and as green only. Consider a yellow light source Y1 which has a spectrum where only those two frequencies are present. If you consider all the frequencies in between those two, you'll find one where the contribution to red and green is equal. Consider a yellow light source Y2 which has a spectrum where only that one frequency is present. If you now illuminate a red object and a green object with Y1 you could get to see both colors just fine, but it could be the case that when you use Y2 on those two objects you actually see them both black. Or they could both look yellow when illuminated with Y2. The result depends on what the reflectance spectrum of the objects looks like.

Modeling this accurately doesn't seem feasible in the context of game graphics. The generally accepted "there are only three frequencies" model is probably good enough.
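The Y1/Y2 example above can be written down in a few lines. This is a toy sketch only (three sample wavelengths, invented names), not how a real spectral renderer stores spectra:

#include <cstdio>

const int N = 3; // sample wavelengths: "red line", "in-between line", "green line"

// Reflected power = light power * surface reflectance, per wavelength.
void reflect(const float light[], const float surf[], float out[]) {
    for (int i = 0; i < N; ++i) out[i] = light[i] * surf[i];
}

int main() {
    float Y1[N] = { 1, 0, 1 };       // yellow built from a pure red and a pure green line
    float Y2[N] = { 0, 1, 0 };       // yellow built from a single in-between line
    float redObj[N]   = { 1, 0, 0 }; // reflects only the red line
    float greenObj[N] = { 0, 0, 1 }; // reflects only the green line

    float out[N];
    reflect(Y1, redObj, out);   // {1, 0, 0}: still looks red under Y1
    std::printf("Y1 on red object:   {%g, %g, %g}\n", out[0], out[1], out[2]);
    reflect(Y2, redObj, out);   // {0, 0, 0}: black under Y2
    std::printf("Y2 on red object:   {%g, %g, %g}\n", out[0], out[1], out[2]);
    reflect(Y2, greenObj, out); // {0, 0, 0}: black as well
    std::printf("Y2 on green object: {%g, %g, %g}\n", out[0], out[1], out[2]);
}

Both lights are "yellow" to the eye, yet the reflected results differ completely; collapsing each spectrum to three RGB numbers before the multiplication throws that distinction away.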
Your initial formula is correct (r = light.r * diffuse.r, etc.). The RGB model is only a rough approximation of color, but, given its minimalism, a remarkably good one. It is correct that white light on a red surface reflects red light, just as red light on a white surface will yield red, too.

When modeling the real spectrum, for each color there exist infinitely many spectra that yield it, so a white surface under a red light does not necessarily yield red light.

That dissimilarity is often the reason for hacks in photorealistic renderers (and actually, game graphics, at least the rasterized kind, are all in all one huge hack).

Have a look at the rendering equation. In my ray tracer picogen I provide a path tracer that approximates the rendering equation with RGB colors. It is actually a really good approximation (look at the title image, especially the caustics), but it misses many effects of a spectral ray tracer, e.g. diffraction. For the latter, see LuxRender's gallery.
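For reference, the rendering equation in one of its usual forms is

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_\Omega f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where L_o is outgoing radiance, L_e is emitted radiance, f_r is the BRDF, L_i is incoming radiance and n is the surface normal. A path tracer estimates the integral with random samples; an RGB path tracer simply carries three such values per ray instead of a sampled spectrum.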

If you are interested in more discussion of color models, I am sure the folks at ompf (actually the forum on ray tracing, even cited in RTNews and some research papers) will have an ear for you.
What I wanted to add:

Colors can be extremely bright. In fact, if you do realistic image synthesis, have no tonemapping routine, and display an image, most lighting situations won't fit into the limited abilities of a PC screen: often big parts of the screen end up (near) white and other parts (near) black. (A PC/TV screen will never blind you, which is actually a good thing, otherwise you would be blind after some action movies [smile].)

In hardware-accelerated graphics you don't have such problems, because most of the physics involved are hacks, not really based on physics, or, if they are, they are abstracted so far away that the result is not really realistic.

As light can be nearly infinitely bright (I say "nearly" because the amount of energy in the known universe is limited), a bounded model with a lower 0.0 (or 0) and an upper 1.0 (or 255) is simply wrong. It may be a good approximation for many situations, but it is still wrong. The reason they teach you in school that black and white are not colors is that what you see as white could, for example, be a very bright red, and an occurrence of black could actually be an unlit green. In such situations, black and white more or less just represent the lower and upper ends of the perceivable range, and that range clearly varies with the observer's current situation (how good are his eyes? has he just opened them after waking up? you know the effect if you have ever driven through a tunnel).

HDR models work around those wrong limitations by allowing approximately arbitrarily bright or dark colors, with a much higher resolution than most integer-based RGB models. But, as said, HDR images often don't fit onto a screen with respect to brightness, hence, when using HDR models, you need tone mapping (or you let e.g. GIMP or other dedicated software do the tonemapping for you). Tonemapping simulates the behaviour of the eye/brain in that it adapts the "sensor" to the observed scene, to try to capture the large range of brightnesses.
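To illustrate, here is a minimal sketch of one well-known tonemapping curve, the simple global Reinhard operator (the thread doesn't name a specific operator; this is just a common choice). It squashes unbounded HDR luminance into the displayable range [0, 1):

#include <cstdio>
#include <initializer_list>

// Simple Reinhard operator: 0 -> 0, 1 -> 0.5, very bright -> near 1.
float reinhard(float hdrLuminance) {
    return hdrLuminance / (1.0f + hdrLuminance);
}

int main() {
    for (float L : { 0.0f, 0.5f, 1.0f, 10.0f, 1000.0f })
        std::printf("HDR luminance %8.1f -> display %.3f\n", L, reinhard(L));
}

No input is ever clipped to pure white, no matter how bright, which loosely mimics the eye's adaptation described above.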

Lastly, just a reminder: every color you see in the world is just light that has been reflected from an object. No light source, no colors. Plants are green because green is the only color they reflect; all other colors are absorbed at the surface. Or, to come back to your initial examples in RGB space: a red cube is red because it absorbs the other components (if you shine green light on a red cube, the cube ends up black).
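Worked through with the formula from the first post: green light {0, 1, 0} times red cube {1, 0, 0} gives {0*1, 1*0, 0*0} = {0, 0, 0}, i.e. black.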
