Sure. Let K be your extinction coefficient, S your scattering coefficient, and P(theta) your scattering phase function. Then the probability that a light ray travelling a distance d through an isotropic, homogeneous medium interacts with (is absorbed or scattered by) the medium is:

1 - e^(-Kd)
And when it is scattered, the ray scatters with intensity (S / K) P(theta) in direction theta with respect to the incident direction (the ratio S / K is the single-scattering albedo). You can use any normalized phase function for P.
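One common choice of normalized phase function (not named in the post, so take it as an example, not a requirement) is the Henyey-Greenstein function, which has a single anisotropy parameter g. Here is a minimal sketch; the function name `hg_phase` and the numerical normalization check are mine:

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the unit sphere.
    g in (-1, 1): g > 0 biases forward scattering, g = 0 is isotropic."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def check_normalization(g, n=100000):
    """Midpoint-rule integral of the phase function over solid angle;
    should come out to ~1 for any valid g."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += hg_phase(math.cos(theta), g) * 2.0 * math.pi * math.sin(theta) * h
    return total
```

For outdoor haze you would typically pick g fairly close to 1, since (as noted at the end of the post) atmospheric scattering is predominantly forward scattering.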
You can also invert the corresponding probability density function to sample the distance travelled by the ray before an interaction. To do this, first obtain the CDF from the PDF:
PDF = K e^(-Kd)
CDF = integral of PDF = 1 - e^(-Kd)
-Kd = log(1 - CDF)
d = -log(1 - CDF) / K
(this is called inverse transform sampling)
Now draw a uniform random number between 0 and 1 for your CDF variable and compute d, the distance, and you're done. Depending on your extinction coefficient (the higher, the thicker the fog), most light rays will get scattered after travelling only a few metres in the medium; try graphing the last formula against the CDF variable for various values of K and see at what distance most light rays get scattered. Note that K = 0 corresponds to a non-absorbing, non-scattering medium and is, in fact, a special case: the math sort of works out, with the distance travelled always being infinite, even though the PDF and CDF are invalid, but you'll want to handle it explicitly in your code, obviously.
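The derivation above translates almost directly into code. A minimal sketch (the function name `sample_distance` is mine; the K = 0 guard handles the special case mentioned above):

```python
import math
import random

def sample_distance(K, u=None):
    """Sample a free-flight distance from the PDF K * exp(-K * d)
    via inverse transform sampling: d = -log(1 - u) / K.
    K = 0 (vacuum) is the special case: the ray never interacts."""
    if K <= 0.0:
        return math.inf
    if u is None:
        u = random.random()  # uniform in [0, 1)
    return -math.log(1.0 - u) / K
```

The mean of the sampled distances is 1/K (the mean free path), which is a handy sanity check: doubling the extinction coefficient halves the average distance a ray travels before interacting.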
After the ray is scattered, you need to recurse (follow the scattered ray and repeat the whole process) until the ray is absorbed or otherwise terminated if you want a correct solution. For extremely thick fog, this will take forever.
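The recursion can be sketched as a simple random walk: at each interaction, the ray survives (scatters) with probability S / K and is absorbed otherwise. This sketch only counts scattering events; actual position updates and phase-function direction sampling are omitted, and the names `trace_path` and `max_bounces` are mine:

```python
import math
import random

def trace_path(K, S, max_bounces=64):
    """Follow one ray through a homogeneous medium with extinction K and
    scattering coefficient S. At each interaction the ray scatters with
    probability S / K (the single-scattering albedo) or is absorbed.
    Returns the number of scattering events before absorption/cutoff."""
    bounces = 0
    while bounces < max_bounces:
        # Free-flight distance to the next interaction; in a full tracer
        # this would advance the ray's position (omitted in this sketch).
        d = -math.log(1.0 - random.random()) / K
        if random.random() >= S / K:
            break  # absorbed
        bounces += 1
    return bounces
```

The `max_bounces` cutoff is exactly the "this will take forever" problem: for thick, highly scattering fog (S close to K) the expected number of bounces, S/K divided by (1 - S/K), blows up, which is why production renderers bound the depth or use Russian roulette.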
All that said, good luck implementing a truly physically based global scattering method in a realtime application. You may prefer to consider cheaper, approximate alternatives, especially since you can usually use a screen-space solution for fog and haze (as outdoor scattering is predominantly forward scattering).
Edited by Bacterius, 03 February 2013 - 12:10 AM.