About color, and atmospheric scattering

Started by
4 comments, last by playmesumch00ns 18 years, 2 months ago
I'm writing a paper on atmospheric scattering, something I've always wanted to study. However, I'm not very experienced with implementing anything like this, even something that deals with luminosity or radiance. I've been studying tons of papers, and though it's a bit overwhelming, I can usually follow along and keep up with the math. It all seems rather high level, though, and when it comes down to it I have no idea where to start with an implementation. I even have some source code dealing with it, but it's pretty cryptic because of optimizations and limitations in my understanding.

I only partially understand representing color with the CIE XYZ components, and Yxy. What are the X and Z components? All I get from [link] is "The spectral weighting curves of X and Z have been standardized by the CIE based on statistics from experiments involving human observers." And what's the purpose of the Yxy representation? What form of color do you usually work with when working with the sky? Luminance? Spectral radiance? What's the difference? Understanding this will help me start out.

I'm also trying to figure out how to actually implement the integration over the view ray to compute the total scattering term. If somebody wants to explain different ways of doing that, that'd be great. Any input is appreciated!

Edit: I suppose you could start at the viewpoint, take several samples along the view ray, and sum them up. However, is there a faster way to do this, perhaps hardware accelerated?
About the approximation of the scattering integral: the method you described should work, i.e. break the scattering path into smaller segments and sum up their contributions. It can be hardware accelerated, as shown for example in the paper by Harris.

It's a paper about cloud shading, but the integral approximation is covered there: they turn it into recurrence relations that can be evaluated with graphics hardware.
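The recurrence idea can be sketched in a few lines: marching front to back, each slice attenuates everything behind it and adds its own in-scattering. This is only an illustrative sketch of the general slice-compositing recurrence, not code from Harris's paper; the names (`extinction`, `inscatter`, `step`) and the uniform-slice assumption are mine.

```python
import math

def composite_slices(extinction, inscatter, step):
    """Accumulate radiance L and transmittance T slice by slice.

    extinction[k] -- extinction coefficient of slice k
    inscatter[k]  -- in-scattered radiance of slice k
    step          -- slice thickness
    """
    L = 0.0   # radiance accumulated toward the eye
    T = 1.0   # transmittance from the eye to the current slice
    for sigma_t, S in zip(extinction, inscatter):
        t_slice = math.exp(-sigma_t * step)   # transmittance of this one slice
        L += T * S * (1.0 - t_slice)          # slice's scattered contribution
        T *= t_slice                          # recurrence: T_k = T_{k-1} * t_slice
    return L, T
```

Because each iteration only needs the running (L, T) pair and the current slice, this maps naturally onto rendering the slices in order with blending, which is what makes it hardware friendly.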

Depending on what you can assume, other methods are possible. For example, Hoffman's paper describes evaluating the integral in a vertex shader, but assumes a constant-density atmosphere. There's also a demo on ATI's website.
Here is the paper from ATI

and the demo
Thanks, the first paper you mentioned is one that I skipped over. I have seen the ATI paper and several others including:

Nishita, “Display of the earth taking into account atmospheric scattering”
Klassen, “Modeling the effect of the atmosphere on light”
Preetham, “A practical analytic model for daylight”
Sloup, “A survey of the modelling and rendering of the earth's atmosphere”
Nielsen, “Real Time Rendering of Atmospheric Scattering Effects for Flight Simulators”


I was looking for the demo too, thanks for that. I'll see if the source code makes things any clearer.

Can anyone clear up the CIE XYZ and Yxy components for colors? Y is the luminance; what are X and Z? In general, do you work with the luminance of the sky at several points (depending on the resolution you want)?
The most straightforward approach is a Monte Carlo ray marcher with single scattering. This is the first thing you want to do. Then you can add multiple scattering quite straightforwardly using photon mapping. This is the brute-force method, but it's simple to code and understand. Later you might consider diffusion approximations. Is this real time or not?

The reason you use a photon map for the multiple-scatter term is that Monte Carlo simulations, while possible, are extremely inefficient. They're bad enough in surface problems (path tracing); they're quite intractable for volumetric problems like the scattering equations. What diffusion approximations do is basically evaluate irradiance only at certain locations, then use diffusion interpolation for the remaining terms. The evaluation positions are usually computed with Monte Carlo/photon mapping like I said above, so implementing that is a necessary prerequisite. As far as approximations in shaders go, I can't say I have much experience with them.

So all this stuff is non-real-time, but you can certainly benefit from implementing it, because it's physically correct and thus a good criterion for judging real-time approximations.

To answer your question: luminance is the photometric equivalent of radiance (and illuminance the equivalent of irradiance). Don't get too worked up about the color models until you understand the basic radiometric/photometric stuff. The real power in the choice of representation (to my understanding), i.e. RGB/CIE/whatever, is purely a real-time issue: it basically boils down to which system works best under a high degree of compression of dynamic range.
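The radiometric-to-photometric step is just a weighted integral: spectral radiance weighted by the eye's response V(lambda) and scaled by 683 lm/W. The Gaussian below is only a crude stand-in I'm using so the sketch is self-contained; real code should use the published CIE photopic table.

```python
import math

def v_lambda(lam_nm):
    """Crude Gaussian stand-in for the CIE photopic curve V(lambda), peak 555 nm."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def luminance(radiance, wavelengths):
    """Luminance (cd/m^2) from spectral radiance samples.

    radiance[i] is spectral radiance in W/(m^2 sr nm) at wavelengths[i];
    assumes uniform wavelength spacing.
    """
    d_lam = wavelengths[1] - wavelengths[0]
    return 683.0 * sum(L * v_lambda(w)
                       for L, w in zip(radiance, wavelengths)) * d_lam
```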

Tim
The reason to use the CIE XYZ colour model is that it is a standard based on science (i.e. measured experiment) rather than on the design constraints of a computer monitor (which is essentially what the RGB model is).

The XYZ colour model is based on experiments to determine the response of the human eye to different colours.
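As for the Yxy form asked about above: it just factors a colour into its luminance Y plus normalized chromaticity coordinates (x, y), which is convenient when you want to tone-map luminance without shifting hue. The conversions are standard; a minimal sketch:

```python
def xyz_to_yxy(X, Y, Z):
    s = X + Y + Z
    if s == 0.0:
        return 0.0, 0.0, 0.0          # black: chromaticity is undefined
    return Y, X / s, Y / s            # Y unchanged; x, y are normalized shares

def yxy_to_xyz(Y, x, y):
    if y == 0.0:
        return 0.0, 0.0, 0.0
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y         # z = 1 - x - y, since x + y + z = 1
    return X, Y, Z
```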

You can calculate XYZ from spectral radiance (i.e. how much light of each wavelength hits a pixel) by weighting your SPD*** function using the XYZ coefficients published by CIE.
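That weighting step looks like the sketch below. The Gaussian lobes are only rough stand-ins I've made up to keep the example self-contained; in practice you would read the tabulated CIE x-bar, y-bar, z-bar curves from the published tables. The shape of the computation, one weighted sum per component, is the point.

```python
import math

def gauss(lam, mu, sigma):
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Crude stand-ins for the CIE 1931 colour matching functions.
def xbar(lam):
    return 1.06 * gauss(lam, 600.0, 38.0) + 0.36 * gauss(lam, 446.0, 19.0)

def ybar(lam):
    return 1.01 * gauss(lam, 555.0, 47.0)

def zbar(lam):
    return 1.78 * gauss(lam, 449.0, 23.0)

def spd_to_xyz(spd, wavelengths):
    """XYZ from an SPD sampled at uniformly spaced wavelengths (nm)."""
    d_lam = wavelengths[1] - wavelengths[0]
    X = sum(p * xbar(w) for p, w in zip(spd, wavelengths)) * d_lam
    Y = sum(p * ybar(w) for p, w in zip(spd, wavelengths)) * d_lam
    Z = sum(p * zbar(w) for p, w in zip(spd, wavelengths)) * d_lam
    return X, Y, Z
```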

From there, you need to convert to RGB, normally using the sRGB colour matrix. Exactly how you do this properly depends on how you're going to display the image, but sRGB will suffice for most cases.
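A minimal version of that last step, using the standard sRGB (D65) matrix and transfer curve. Note this assumes values have already been brought into [0, 1]; for an HDR sky you'd apply exposure/tone mapping first, which is omitted here.

```python
def xyz_to_srgb(X, Y, Z):
    # Linear-light RGB via the standard XYZ -> sRGB matrix.
    r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z

    def encode(c):
        c = min(max(c, 0.0), 1.0)                 # clamp out-of-gamut values
        if c <= 0.0031308:
            return 12.92 * c                      # linear toe segment
        return 1.055 * c ** (1.0 / 2.4) - 0.055   # sRGB gamma curve

    return encode(r), encode(g), encode(b)
```

Feeding in the D65 white point should give (1, 1, 1), which is a handy sanity check for the matrix.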

For the maths, there's a great reference page here:

http://www.brucelindbloom.com/index.html



*** The Spectral Power Distribution function is a curve of power against wavelength. (You can see the SPDs of different standard illuminants on the link I posted above.) This is properly a continuous curve, but for rendering it's stored in most cases as a table of values over a limited part of the visible spectrum, say 300-700nm, with values spaced at 5 or 10nm intervals. Storing colour in this way is expensive, but much more accurate than using RGB.

