# How to: Ray tracing the light spectrum

## Recommended Posts

##### Share on other sites
Hey slayemin,
I was thinking of a similar idea when I made a raytracer some time ago. I have a few questions and some suggestions.

The questions:
1. Are you using point lights or light volumes?
2. Since you need to chop the spectrum into discrete chunks while computing, how big are the steps between these chunks?
3. Does it suffice to save the point and color in forward casting? AFAIK you need to save at least the direction of the ray too for local lighting models like Phong.

And now some suggestions:
It is a very good idea to support light intensities, as light energy gets lost when travelling far...

Finally, a word on forward and backward casting: you could avoid the approach of forward casting. Let's look at an example. You get an intersection of a backward-cast ray with an object that has a diffuse surface, and now your scene has some refracting objects in it. So for every refracting object, you locate it and send rays from the intersection you found to the object. Doing the refraction backwards is no problem at all, and maybe you will save some rays compared to forward casting, but the problem is that you finally have to match your refracted rays to a light source that might have emitted them. At this point light volumes might help.

##### Share on other sites
Quote:
 Original post by Spynacker: 1. Are you using point lights or light volumes?

I am using a point light with a position, direction, and defined wavelengths. The XML I posted has all of the lighting information I needed to cast the color spectrum.

Quote:
 2. Since you need to chop the spectrum into discrete chunks while computing, how big are the steps between these chunks?

If I understand the question correctly, you're asking about how I break a light beam into its diffracted components & colors. The light beam is a ray of light which contains pairs of wavelengths which specify the range. I follow the light ray from the original light source position in its direction until it intersects with a geometry. If the light ray hits a geometry, I get an intersection record which contains the hit position, the geometry ID, the material of the geometry, etc. I find the closest intersection and then look at the transparency of that object. If there is no transparency, then I don't need to worry about refraction and instead compute the color for that point given the range of wavelengths present in the light beam.
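The "follow the ray to the closest intersection" step described above can be sketched in Python. This is a minimal illustration, not the author's code: the `Sphere` class and its `intersect` method are hypothetical stand-ins for whatever geometry interface the tracer actually uses, and the returned dictionary plays the role of the intersection record mentioned in the post.

```python
import math

class Sphere:
    """Hypothetical test geometry: a sphere carrying a material tag."""
    def __init__(self, center, radius, material):
        self.center, self.radius, self.material = center, radius, material

    def intersect(self, origin, direction):
        # Solve |origin + t*direction - center|^2 = radius^2 for the
        # smallest positive t; direction is assumed normalized (a = 1).
        oc = tuple(o - c for o, c in zip(origin, self.center))
        b = 2.0 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

def closest_intersection(origin, direction, geometries):
    """Return an intersection record for the nearest hit, or None on a miss."""
    best_t, best = math.inf, None
    for geom in geometries:
        t = geom.intersect(origin, direction)
        if t is not None and 1e-6 < t < best_t:  # epsilon avoids self-hits
            best_t = t
            pos = tuple(o + t * d for o, d in zip(origin, direction))
            best = {"t": t, "position": pos, "material": geom.material}
    return best
```

The record would also carry the geometry ID and transparency in a real tracer; only the fields needed for the sketch are shown.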

Otherwise, I need to calculate refraction. I know the position of my closest intersection and the refractive index of my material. I then need to look at my light beam and figure out which wavelengths are present. I create a for loop which goes through all of the wavelengths and creates new light rays. Using white light, I'd create 850 - 380 = 470 new light beams. The new light ray's position is the hit position, its direction is determined by the refractive index via Sellmeier's equation (which takes the wavelength as input), and it carries a single wavelength. I then repeat this procedure until each light ray intersects with a geometry which has no transparency (this is a nice function to do recursively). So, the "chunk size" is one new ray for every wavelength present in the light beam.
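The per-wavelength split could look something like this. A sketch under stated assumptions, not the author's implementation: the Sellmeier coefficients are a commonly published set for BK7 glass (the actual material would use its own), and the refraction step is the standard vector form of Snell's law.

```python
import math

# Published Sellmeier coefficients for BK7 glass; wavelengths in micrometres.
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)

def sellmeier_index(wavelength_nm):
    """Refractive index n(lambda) from the Sellmeier equation."""
    lam2 = (wavelength_nm / 1000.0) ** 2  # nm -> um, then squared
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

def refract(direction, normal, n1, n2):
    """Vector-form Snell's law; returns None on total internal reflection."""
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0:
        return None
    return tuple(eta * d + (eta * cos_i - math.sqrt(k)) * n
                 for d, n in zip(direction, normal))

def split_beam(hit_pos, direction, normal, lo_nm, hi_nm):
    """One new single-wavelength ray per nanometre across the beam's range."""
    rays = []
    for lam in range(lo_nm, hi_nm + 1):
        d = refract(direction, normal, 1.0, sellmeier_index(lam))
        if d is not None:
            rays.append((hit_pos, d, lam))
    return rays
```

Because the index grows toward shorter wavelengths, the blue end of the split fan bends more than the red end, which is what spreads the beam into a spectrum.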

Quote:
 3. Does it suffice to save the point and color in forward casting? AFAIK you need to save at least the direction of the ray too for local lighting models like Phong.

Only if you're trying to calculate specular highlights. Since the spectrum is being projected onto a geometry from the light source, we don't need to worry about calculating diffuse lighting. The projected colors will always be on a surface whose normal is within 90 degrees of the light. I haven't tested it yet, but I wonder if the forward-cast light ray intensities would naturally follow the cosine curve in Phong shading...

Quote:
 And now some suggestions: It is a very good idea to support light intensities, as light energy gets lost when travelling far...

Yeah, that's light attenuation, which follows the inverse square law. What I was talking about is the intensity of a specific band of color. Currently, I'm not modeling it and just assume that all bands have the same intensity. That's actually a bad assumption. Take a look at this picture of hydrogen light:

Notice that the red band is many times brighter than the other bands. This has a significant impact on the overall color of the hydrogen light, making it appear reddish purple.
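The effect of unequal band intensities on the overall color can be sketched with a simple weighted additive mix. This is an illustration only: the per-band RGB values and intensity weights below are made-up stand-ins (a real renderer would derive each band's RGB from its wavelength via the CIE color-matching functions).

```python
def mix_bands(bands):
    """Additively mix spectral bands, each given as (rgb, intensity).

    Per-band RGB values are assumed supplied by the caller; the result
    is normalized so its largest channel is 1.
    """
    r = sum(rgb[0] * w for rgb, w in bands)
    g = sum(rgb[1] * w for rgb, w in bands)
    b = sum(rgb[2] * w for rgb, w in bands)
    peak = max(r, g, b, 1e-9)  # guard against an all-zero mix
    return (r / peak, g / peak, b / peak)
```

With a strong red line and weaker cyan/blue/violet lines, the mix comes out red-dominated with more blue than green, i.e. the reddish-purple hue described above.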

Quote:
 Finally, a word on forward and backward casting: you could avoid the approach of forward casting. Let's look at an example. You get an intersection of a backward-cast ray with an object that has a diffuse surface, and now your scene has some refracting objects in it. So for every refracting object, you locate it and send rays from the intersection you found to the object. Doing the refraction backwards is no problem at all, and maybe you will save some rays compared to forward casting, but the problem is that you finally have to match your refracted rays to a light source that might have emitted them. At this point light volumes might help. I'd like to hear your opinion about that.

I think forward casting would be more efficient than backward casting for this scenario; backward casting would end up being more computationally expensive and more mathematically complicated. If you tried to do backward casting, you'd have to look at every visible point in your scene, figure out whether it can see a point light, and then figure out what color that point should be based on refraction. The problem with my method is that if I create a scene which has a large number of color points, I have to do a distance calculation for each one. This works, but the run time is somewhere around O(num_light_points * num_visible_points), and it also does a square root calculation for each distance. My partner on the project had a workable solution where he projects the colors into the geometry's texture instead of onto the geometry. This also works, but only if the geometry has a texture. I didn't like that caveat, so I came up with the brute-force projection. Maybe the better solution is a hybrid of the two: project a light ray into the geometry's texture when a texture is available, and otherwise store it in a sorted look-up table.
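One easy win for the brute-force distance search described above: the square root can be dropped entirely, because comparing squared distances gives the same ordering as comparing distances. A minimal sketch (the `(x, y, z, color)` tuple layout is a hypothetical stand-in for however the color points are actually stored):

```python
def nearest_color_point(p, color_points):
    """Find the stored light point nearest to a visible surface point.

    Squared distances preserve the nearest-neighbour ordering, so no
    sqrt is needed per pair; only the comparisons matter.
    """
    best_d2, best = float("inf"), None
    for (x, y, z, color) in color_points:
        d2 = (p[0] - x) ** 2 + (p[1] - y) ** 2 + (p[2] - z) ** 2
        if d2 < best_d2:
            best_d2, best = d2, color
    return best
```

This is still linear per query; a spatial index such as a k-d tree (e.g. `scipy.spatial.cKDTree`) would cut each lookup to roughly logarithmic time, which matters when num_light_points is large.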

##### Share on other sites
Have you thought of incorporating things like phase and polarity? If done properly, this would allow computing interference at different frequencies, which is something I've never seen in computer graphics. Here are two examples:
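To give a sense of what tracking phase buys you: the classic two-beam case is thin-film interference, where the reflections off the film's front and back surfaces pick up a path-dependent phase difference. A sketch using the standard textbook result (this assumes a film in a less dense medium on both sides, e.g. a soap film in air, so exactly one reflection gets the extra pi shift):

```python
import math

def thin_film_phase(n_film, thickness_nm, wavelength_nm, cos_theta_t=1.0):
    """Phase difference between the two reflected beams of a thin film.

    Standard result: delta = 4*pi*n*d*cos(theta_t) / lambda, plus a pi
    shift because only the front-surface reflection is off a denser medium.
    """
    delta = 4.0 * math.pi * n_film * thickness_nm * cos_theta_t / wavelength_nm
    return delta + math.pi

def reflected_fraction(n_film, thickness_nm, wavelength_nm):
    """Two-beam interference intensity factor in [0, 1]: cos^2(delta / 2)."""
    return math.cos(thin_film_phase(n_film, thickness_nm, wavelength_nm) / 2.0) ** 2
```

Because delta depends on wavelength, a film of fixed thickness reinforces some wavelengths and cancels others, which is exactly the frequency-dependent interference being asked about, and ties into the thin-film question further down the thread.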

##### Share on other sites
Really cool stuff (:

Thanks for sharing

##### Share on other sites
Maybe you could do some deferred rendering. It would take some restructuring of the code, but you could first render backwards from your camera and save the intersection point and material for each pixel. Then you start forward tracing, taking into account only those rays that end near your saved intersections. This probably needs a large amount of memory, but it has an advantage: you can save the data of your first pass and then change the light scene as needed without rendering from the camera again. This was an annoying problem when I made some scenes and had to change things in the lighting.
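The second half of that two-pass idea can be sketched as follows. This is an illustration under assumptions, not a worked design: the G-buffer is modelled as a plain `{pixel: (position, material)}` dict from the camera pass, the forward-traced light hits as `(position, color)` pairs, and the "near" test as a fixed-radius match.

```python
def deferred_light_pass(gbuffer, light_hits, radius=0.05):
    """Pass 2: keep only forward-traced light hits that land within
    `radius` of a cached first-pass intersection.

    Because the camera pass is cached in `gbuffer`, the lights can be
    changed and this pass re-run without re-tracing any camera rays.
    """
    r2 = radius * radius  # compare squared distances; no sqrt needed
    lit = {}
    for pixel, (pos, _material) in gbuffer.items():
        for hit_pos, color in light_hits:
            d2 = sum((a - b) ** 2 for a, b in zip(pos, hit_pos))
            if d2 <= r2:
                lit.setdefault(pixel, []).append(color)
    return lit
```

As written this is a brute-force pairing, so the memory/time trade-off mentioned above still applies; in practice the cached intersections would go into a spatial structure so each light hit only checks nearby pixels.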

##### Share on other sites
@Alvaro: aren't those cases exactly what thin films were designed for in the first place? (I'm not sure, though.)
