An approach to rendering fog volumes, and a question about mixing fog volumes with global fog

Hi everyone,
I thought I'd share an approach I came up with for rendering fog volumes. A lot of it is similar to standard fog volume rendering, but there are a few differences and optimizations I haven't seen in the common tutorials (I'm sure they're not new ideas, though). At the very least, hopefully this will be helpful to people trying to get volume fog working. At the end, I also raise the question of mixing fog volumes with global fog.

(Note: unfortunately I've been too busy to actually test much of this, so I can't say how it works in practice, but in theory I think it should work. Feel free to point out any errors I've made. Perhaps someone who has used a similar approach can say how well it works.)

First I'll state my goal: to come up with a way of assigning fog volumes to different regions of a level (e.g. a different fog volume per room) and rendering them in a semi-realistic way. Some important points which may make this different from other fog volume tutorials:
1) Since there may be many fog volumes (e.g. looking through several rooms of differently colored fog), I don't want to be making two passes (thus changing render targets twice) per fog volume.
2) Fog volumes should be able to "stack" correctly - that is, rendering 1 large fog volume versus 2 smaller fog volumes occupying the same space should give the exact same result.

Firstly, my model of fog. I read some papers on how light scatters through volumes, but I'm not particularly familiar with the physics of light, so I settled on the following, which I think/hope is at least a rough approximation: each fog volume has homogeneous color and density. As a ray passes through fog, the ray's color is converted from its current color toward the fog color at a rate based on the fog density (physically, this roughly corresponds to light being absorbed out of the ray while fog-colored light is scattered in, though I won't try to state that rigorously). This is described by the following differential equation:
[eqn]C^\prime(t) = (F_c - C(t))F_d,\quad C(0) = C_0[/eqn]
where C(t) is the color of a ray with initial color C[sub]0[/sub] after passing a distance t through fog with color F[sub]c[/sub] and density F[sub]d[/sub]. If we solve this differential equation we get:
[eqn]C(t) = F_c + (C_0 - F_c)e^{-tF_d}[/eqn]
So to render fog volumes, we simply take the current fragment color at each point on the screen behind/in the fog volume, figure out how thick the fog is, and apply this formula.
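To make that last step concrete, here's the formula as a small GLSL helper (a minimal sketch; applyFog and the parameter names are just mine, not anything standard):
[code]
// C(t) = F_c + (C_0 - F_c) * exp(-t * F_d), applied per color channel.
// mix(a, b, f) computes a*(1-f) + b*f, which is exactly the fog formula.
vec3 applyFog(vec3 sceneColor, vec3 fogColor, float fogDensity, float thickness)
{
    float factor = exp(-thickness * fogDensity);
    return mix(fogColor, sceneColor, factor);
}
[/code]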

(Side note: Instead of using a constant for color or density, it's also possible to use a simple function such as F[sub]c[/sub](t) = F[sub]c1[/sub]t + F[sub]c2[/sub](w - t) (with w the thickness of the volume), which would vary the fog color linearly and still give a relatively simple result for C(t) (unfortunately, it looks like using both linearly varying density and color gives something pretty nasty for C(t)). This could be used, for example, to make fog volumes which smoothly fade to zero density, or a room with red fog on one side and blue fog on the other.)
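For reference, here's what the linearly varying color case works out to. Writing the ramp generically as F[sub]c[/sub](t) = at + b (so in the notation above, a = F[sub]c1[/sub] - F[sub]c2[/sub] and b = F[sub]c2[/sub]w) with constant density, the same differential equation solves by the standard integrating-factor method (I've checked the algebra, but haven't tried it in a shader) to:
[eqn]C(t) = F_c(t) - \frac{a}{F_d} + \left(C_0 - F_c(0) + \frac{a}{F_d}\right)e^{-tF_d}[/eqn]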

Now we need to address the two points I mentioned above. Firstly, we don't want to do a separate pass for each fog volume. We can achieve this by putting a few constraints on the volumes we render. Normally, volume fog is done in two passes: first the back faces are drawn, then the front faces are drawn and the difference is subtracted. Instead, let's say a fog volume consists of 6 bounding planes enclosing a convex volume. Since the volume is convex, a ray intersects it in at most one interval (one entry point and one exit point), which is easy to compute with the following loop (written here as GLSL-style shader code; the plane normals are assumed to face outward from the volume):

[code]
// Intersect a ray with a convex volume bounded by up to 6 outward-facing
// planes. Each plane is a vec4 (normal, d): dot(normal, p) + d == 0 on the
// plane. rayOrigin, rayDir and planes[] are assumed shader inputs.
float tFirst = -1e30;
float tLast  =  1e30;
for (int i = 0; i < 6; ++i) {
    float denom = dot(planes[i].xyz, rayDir);
    float t = -(dot(planes[i].xyz, rayOrigin) + planes[i].w) / denom;
    if (denom < 0.0)
        tFirst = max(t, tFirst);  // front-facing: the ray enters here
    else
        tLast = min(t, tLast);    // back-facing: the ray exits here
}
// tLast < 0: volume entirely behind the eye; tLast < tFirst: ray misses.
// Otherwise clamp the entry to the eye (tFirst < 0 means we're inside).
// Rays exactly parallel to a plane may need an epsilon guard on denom.
float thickness = (tLast < 0.0 || tLast < tFirst)
                ? 0.0 : tLast - max(tFirst, 0.0);
[/code]

So instead of rendering the back and front faces separately, we render only the front faces and compute the intersection with the back faces manually, by passing the plane parameters into the shader. This gives the fog volume thickness in one pass (adjusted, of course, if the depth buffer contains values closer than the back of the fog volume).
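That depth adjustment could look something like this (a sketch; sceneDist is assumed to be the eye-to-fragment distance along the ray, reconstructed from the depth buffer sample, and the reconstruction depends on your projection):
[code]
// Scene geometry inside the volume should occlude the fog behind it,
// so clamp the ray's exit point to the scene depth before the
// thickness computation shown above:
tLast = min(tLast, sceneDist);
float thickness = (tLast < 0.0 || tLast < tFirst)
                ? 0.0 : tLast - max(tFirst, 0.0);
[/code]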

However, this doesn't entirely solve the problem. The new pixel color is written to the color buffer, but computing it requires reading the current color from that same buffer, which would normally force us to use "ping pong" buffers. Fortunately, this can be avoided with the right blending mode.

Instead of using ping pong buffers, keep the color buffer as the render target, but bind the depth buffer as a texture in order to compute fog thickness. Compute the distance [font=courier new,courier,monospace]t[/font] through the fog that the ray passes. We want the following:
C_out = F_c + (C_in - F_c)*e^(-t*F_d)
This is equivalent to:
factor = e^(-t*F_d)
C_out = F_c*(1 - factor) + C_in*factor

Notice this is in the form [font=courier new,courier,monospace]C_out = A + B*C_in[/font]. Thus, do the following: instead of outputting the final color C(t) from the pixel shader, output [font=courier new,courier,monospace]RGB = F_c*(1 - factor), A = factor[/font] and draw using the blend mode [font=courier new,courier,monospace]src_color*ONE + dst_color*SRC_ALPHA[/font] - i.e. [font=courier new,courier,monospace]glBlendFunc(GL_ONE, GL_SRC_ALPHA)[/font]. Since [font=courier new,courier,monospace]dst_color[/font] is just [font=courier new,courier,monospace]C_in[/font], using this blend mode results in the final color being [font=courier new,courier,monospace](F_c*(1 - factor)) + (C_in*factor)[/font], which is just what we want. So we can render each fog volume in just one pass!
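Putting it together, the fragment shader's output might look like this (a sketch; thickness comes from the intersection code above, and fogColor/fogDensity are assumed uniforms):
[code]
uniform vec3  fogColor;    // F_c
uniform float fogDensity;  // F_d

// ... compute thickness as shown above ...
float factor = exp(-thickness * fogDensity);
gl_FragColor = vec4(fogColor * (1.0 - factor), factor);

// On the application side, with glBlendFunc(GL_ONE, GL_SRC_ALPHA) the
// hardware computes src.rgb * 1 + dst.rgb * src.a, i.e.
// F_c*(1 - factor) + C_in*factor, which is exactly C(t).
[/code]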

Now we just need to make sure of the second point, which is stacking fog volumes. To check for this, redefine C(t) such that it takes C[sub]0[/sub] as a parameter; that is,
[eqn]C(t,c) = F_c + (c - F_c)e^{-tF_d}[/eqn]
Consider the case of a single large fog volume of thickness t versus two stacked smaller fog volumes of thickness t[sub]1[/sub] and t[sub]2[/sub] where t = t[sub]1[/sub] + t[sub]2[/sub] (the sum of the two equaling the single large one). In the first case, we compute the final color by casting a ray through the volume: C[sub]final[/sub] = C(t, C[sub]0[/sub]). In the case of two volumes, we first pass the ray through the first volume and get C[sub]intermediate[/sub] = C(t[sub]1[/sub], C[sub]0[/sub]), then pass the resulting ray through the second volume to get C[sub]final[/sub] = C(t[sub]2[/sub], C[sub]intermediate[/sub]); that is, C(t[sub]2[/sub], C(t[sub]1[/sub], C[sub]0[/sub])). We just need to verify that C(t, C[sub]0[/sub]) = C(t[sub]2[/sub], C(t[sub]1[/sub], C[sub]0[/sub])). This is easy to do:
[eqn]C(t_2, C(t_1, C_0)) = F_c + ((F_c + (C_0 - F_c)e^{-t_1F_d}) - F_c)e^{-t_2F_d}\\
= F_c + (C_0 - F_c)e^{-t_1F_d}e^{-t_2F_d} = F_c + (C_0 - F_c)e^{-(t_1+t_2)F_d}\\
= F_c + (C_0 - F_c)e^{-tF_d} = C(t, C_0)[/eqn]
So this means we can split up and stack fog volumes and get the same result as if we had one large fog volume - with one caveat: since each volume's blend reads the result of the volumes behind it, the volumes must be drawn back to front. If you have a level filled with lots of fog, then perhaps this is a good use of BSP trees - the fog volumes should be static, the tree can divide them into convex 6-sided regions, and traversing it always yields the appropriate back-to-front order.

Last is the question of mixing global scene fog with fog volumes. For example, in a scene with both indoor and outdoor areas, you might want a single type of fog everywhere outdoors, but no fog (or different fog volumes) indoors. Simply rendering global fog and then applying fog volumes on top of it won't produce accurate results: you can't get fog-free regions that way, and the two types of fog won't stack correctly.

One solution would be to split the outdoor fog into the same kind of fog volumes used indoors, with "infinite" fog volumes at the edges of the scene - just leave off some of the bounding planes. The volumes would still be convex (and the intersection loop above already handles missing planes, since tFirst and tLast start at minus/plus infinity), so I think it would work.

Another idea that I haven't had the chance to think through: would it be possible to use some form of "inverse fog volumes"? That is, render the full scene fog, then somehow remove it in certain regions. You'd probably want to apply the global fog in a secondary color buffer and blend it in at the end, because applying the fog pass directly to the color buffer could destroy scene color information that you'd need back when "removing" regions of fog. Has anyone ever tried something like this?

Anyway, I'd be interested in hearing your thoughts.

If you're targeting DX11, there's a far easier way to make this work. Read the DX11 Order Independent Transparency linked-list articles. Render the front and back faces, marking each fragment as front or back as you insert it into its pixel's linked list. Then, when you sort the fragments in the second pass, just perform the blending. Also, I believe the blending function you want is this: http://ideone.com/PVXdB . It handles multiple transparent volumes and returns the same result whether or not you cut the volumes in half. I actually made a thread a while back when looking into how to do this, and the Wikipedia article on transparency allowed me to derive the algorithm.
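For illustration, the resolve pass could walk the sorted boundary fragments and fog each segment between them. A rough GLSL-style sketch (my own reconstruction of the idea, not the code at the link above; it assumes non-overlapping volumes, an eye that starts outside all of them, and hypothetical boundary/volDensity/volColor inputs):
[code]
// boundary[0..count-1]: this pixel's volume-boundary fragments, already
// sorted front to back; each has a distance along the ray, a volume id,
// and an 'entering' flag (true for front faces, false for back faces).
uint segMask[MAX_FRAGMENTS]; // volumes active between boundary i and i+1
uint active = 0u;            // assume the eye starts outside every volume
for (int i = 0; i < count; ++i) {
    if (boundary[i].entering) active |=  (1u << boundary[i].volumeId);
    else                      active &= ~(1u << boundary[i].volumeId);
    segMask[i] = active;
}

// Composite back to front, fogging each segment with its active volume.
vec3 color = sceneColor;
for (int i = count - 2; i >= 0; --i) {
    if (segMask[i] == 0u) continue;        // no fog in this segment
    float len = boundary[i + 1].dist - boundary[i].dist;
    int vol = findLSB(segMask[i]);         // non-overlapping: one bit set
    float factor = exp(-len * volDensity[vol]);
    color = mix(volColor[vol], color, factor);
}
[/code]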
