Spectral Path Tracing
Spectral path tracing is essentially the same as normal path tracing, except that instead of working with RGB colors, you work with wavelengths in the visible spectrum (380nm to 780nm). By doing this, you obtain a spectral power distribution for each pixel, which describes the amount of incoming radiance per wavelength. To obtain a color from this, you first convert it to a CIE XYZ color - this is done by integrating the distribution against the CIE color-matching functions (these curves vary among people and species; an averaged "standard observer" set is available for free from the CIE website at 1nm intervals). I am currently sampling wavelengths at 5nm intervals, which seems good enough.
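The integration step can be sketched as a simple Riemann sum over the sampled wavelengths. A minimal Python sketch follows; note that the color-matching functions here are crude Gaussian approximations of the CIE 1931 curves, for illustration only - a real renderer should use the tabulated CIE data mentioned above:

```python
import math

def gauss(lam, mu, sigma):
    """Unnormalized Gaussian lobe."""
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def cmf(lam):
    """Very rough Gaussian fits to the CIE 1931 color-matching
    functions (illustrative only; use the real tabulated data)."""
    x = (1.056 * gauss(lam, 599.8, 37.9)
         + 0.362 * gauss(lam, 442.0, 16.0)
         - 0.065 * gauss(lam, 501.1, 20.4))
    y = 0.821 * gauss(lam, 568.8, 46.9)
    z = 1.217 * gauss(lam, 437.0, 31.8)
    return x, y, z

def spd_to_xyz(spd, lo=380, hi=780, step=5):
    """Integrate a spectral power distribution (a function of
    wavelength in nm) against the CMFs at `step` nm intervals."""
    X = Y = Z = 0.0
    lam = lo
    while lam <= hi:
        x, y, z = cmf(lam)
        p = spd(lam)
        X += p * x * step
        Y += p * y * step
        Z += p * z * step
        lam += step
    return X, Y, Z
```

A flat distribution fed through `spd_to_xyz` yields the XYZ coordinates of an equal-energy white.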
CIE XYZ is a "perceptual" color space, meaning that it is independent of the device used to display it. As such, you need to convert it to a device color before you can get actual colors out of it. There are many ways to do this; the simplest is to assume a given "color system", which encodes a particular RGB gamut (the range of colors and intensities that can be displayed). The simplest color system is the CIE one, but there are others, like the HDTV, NTSC and PAL color systems.
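In practice, each color system boils down to a 3x3 matrix. As an example, here is the conversion for the sRGB/HDTV primaries with a D65 white point; other color systems just swap in a different matrix:

```python
# Linear-light XYZ -> RGB matrix for the sRGB / Rec. 709 primaries
# with a D65 white point. NTSC, PAL, etc. use different matrices.
SRGB_MATRIX = (
    ( 3.2406, -1.5372, -0.4986),
    (-0.9689,  1.8758,  0.0415),
    ( 0.0557, -0.2040,  1.0570),
)

def xyz_to_rgb(xyz, matrix=SRGB_MATRIX):
    x, y, z = xyz
    rgb = [m[0] * x + m[1] * y + m[2] * z for m in matrix]
    # Out-of-gamut colors come out negative; clamping to zero is the
    # crudest possible gamut mapping, but fine for a first renderer.
    return [max(0.0, c) for c in rgb]
```

Feeding in the D65 white point (X, Y, Z roughly 0.9505, 1.0, 1.089) should return approximately (1, 1, 1).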
Remember that normalized CIE XYZ colors do not encode any absolute brightness information, but this can be recovered rather easily: you can obtain the pixel's total radiance by summing the per-wavelength radiance over all sampled wavelengths, and multiplying your resulting RGB color by that. This will yield a large luminance range, so at this point you will probably need to tone-map it, as even in simple scenes the per-pixel brightness can vary enormously. Then you can gamma-correct the render (using the gamma-correction settings of your chosen color system), and you are finished!
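The last two steps can be sketched as below. The Reinhard operator and the gamma value of 2.2 are assumptions for illustration - the post doesn't prescribe a particular tone-mapping operator, and the gamma should come from whichever color system you picked:

```python
def tonemap_and_gamma(rgb, exposure=1.0, gamma=2.2):
    """Simple Reinhard-style tone mapping followed by gamma
    correction. The operator and the 2.2 gamma are placeholder
    choices; use your color system's gamma settings."""
    out = []
    for c in rgb:
        c *= exposure
        c = c / (1.0 + c)          # Reinhard: maps [0, inf) into [0, 1)
        out.append(c ** (1.0 / gamma))
    return out
```

However bright the input radiance, the output always lands in [0, 1), ready to be quantized to 8-bit channels.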
There are no real advantages to using spectral rendering when your materials do not depend on wavelength beyond what can be emulated by standard RGB rendering (the only difference being slightly more realistic color interaction), so the attached renders are more of a proof of concept than anything else. However, for materials that change depending on the wavelength, spectral rendering is an absolute must - this includes dispersion, fluorescence, iridescence, etc... It also tends to simplify material tweaking, since you no longer have to rely on "magic numbers" for your light brightnesses, for instance - just use a physically based emission spectrum, such as a black-body curve.
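A black-body emission spectrum is just Planck's law evaluated per wavelength - a sketch:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def blackbody(lam_nm, temp_k):
    """Planck's law: spectral radiance (W / (sr m^3)) of a black
    body at temperature temp_k, for a wavelength given in nm."""
    lam = lam_nm * 1e-9
    return (2.0 * H * C * C) / (
        lam ** 5 * (math.exp(H * C / (lam * KB * temp_k)) - 1.0))
```

At 3500K the curve peaks in the infrared and slopes downward through the visible range, so red wavelengths dominate over blue - which is why such a light looks yellowish; at very high temperatures the balance flips toward blue.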
The light source used in the middle render was a black-body radiator with a temperature of 3500K (so it appears yellow-ish; if the temperature increased, it would tend to white, then to light blue). The left and right renders had a light source with a temperature of 5500K. The white materials were just a flat spectral power distribution - which results in white under the CIE color system - and the other materials were colored using an empirical distribution following a bell curve with the following equation:
R(λ) = k e^(-((λ - λ₀) / 22.63)²)

Where λ₀ is the dominant wavelength in nanometers (so setting it to 650 would give a red object), R(λ) is the fraction of incoming radiance reflected at wavelength λ (as you can see, it never exceeds 1), k is some global albedo constant between 0 and 1, and 22.63 is an empirical constant of mine. Choosing different constants would either make the distribution too spiky, which is no good as the color-matching curve has a limited precision, or would smear the peak across far too many wavelengths, resulting in bizarre spectral interaction. Of course, you can combine more than one peak by averaging multiple such equations, and you can even work out better, possibly physically based, distributions.
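The bell-curve distribution is straightforward to implement; a sketch, taking the 22.63 constant as a Gaussian width (an assumption about the exact functional form, since only the constants and the bell shape are given):

```python
import math

def reflectance(lam, dominant, albedo, width=22.63):
    """Gaussian reflectance spectrum peaking at `dominant` nm,
    scaled by `albedo` in [0, 1]. The 22.63 width constant is
    the empirical value from the text; the exact Gaussian form
    is an assumption."""
    return albedo * math.exp(-((lam - dominant) / width) ** 2)

def combined(lam, peaks, albedo):
    """Average several single-peak spectra into a multi-lobed one."""
    return sum(reflectance(lam, p, albedo) for p in peaks) / len(peaks)
```

Since the exponential factor never exceeds 1, the reflected fraction stays bounded by the albedo, keeping the material energy-conserving.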
The left and right renders have a glass refractive material (with Sellmeier-based refractive indices) and a Cook-Torrance specular material (I am not sure I got Cook-Torrance quite right, especially regarding energy conservation). Dispersion is not visible here because, with area lights, the dispersed spectra tend to sum up to white, with only barely visible, very thin blue/red fringes at the edges. A point light would work significantly better, but cannot be rendered efficiently just yet - I need to implement direct light sampling.
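For reference, a Sellmeier-based index looks like this - a sketch using the published coefficients for BK7 crown glass (an assumption; the post doesn't say which glass was used):

```python
import math

# Sellmeier coefficients for BK7 crown glass (Schott catalog values).
# B_i are dimensionless; C_i are in micrometers squared.
BK7 = ((1.03961212, 0.00600069867),
       (0.23179234, 0.02001791),
       (1.01046945, 103.560653))

def sellmeier(lam_nm, coeffs=BK7):
    """Wavelength-dependent refractive index n(lambda),
    with the wavelength given in nanometers."""
    lam2 = (lam_nm * 1e-3) ** 2   # nm -> um, squared
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in coeffs)
    return math.sqrt(n2)
```

Blue wavelengths get a higher index than red ones, so they refract more strongly - this per-wavelength spread is exactly the dispersion discussed above.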