Distortion/Heat Haze FX



I'm trying to figure out how heat hazes and other distortion effects are normally done.  I have some ideas and did some research, but am still left with questions.

I can see how I'd do it if I had a scene, and then did a post processing effect on it by rendering some shapes over it with a heat haze shader.

But then what if I have a transparent object in front of a fire?  I could have depth testing on so that solid portions of the scene occlude areas that shouldn't be distorted, but if I have a see-through bottle with a fire behind it, I would want the bottle itself to not be distorted.  So in that case, would I be rendering to a texture, but depth sorting the transparent as well as the heat haze objects?  Since I'm using a deferred shader, I would render the solid scene, light it, then start rendering the transparent and heat haze objects sorted in depth order?

I'm wondering what other ways there are, if any.  I saw the heat haze effect in Doom 3 and Quake 4 also, and I doubt they were rendering the scene to a texture back then. (Although by Quake 4 they may have been doing that because the double vision effect was most likely done with some post processing unless they rendered everything twice)

Then there's water.  If I have a giant plane of water, it's a bit hard to depth sort that.

Edited by ill


For a deferred renderer, things usually get arranged like this:

1) Render solid objects into the depth and g-buffers

2) Render lights

3) Resolve the frame buffer to a texture, called S, that can be sampled

4) Render distortion effects by sampling S

5) Render transparent objects (usually via forward lighting)
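To make the distortion pass concrete: a heat haze shader typically perturbs the screen-space coordinate used to sample the resolved scene texture S. A minimal CPU-side sketch in Python (the sine offset, the strength parameter, and nearest-neighbour sampling are my own illustrative choices, not from any particular engine):

```python
import math

def distort_sample(S, x, y, time, strength=1.0):
    """Sample scene texture S at (x, y) with a scrolling sine offset,
    the way a heat-haze fragment shader perturbs its screen-space UV."""
    h = len(S)
    w = len(S[0])
    # Offset the x coordinate by a sine wave that scrolls over time.
    dx = int(round(strength * math.sin(y * 0.5 + time)))
    sx = min(max(x + dx, 0), w - 1)  # clamp to the texture edge
    return S[y][sx]

# 4x4 "scene texture" with a bright column at x = 2.
S = [[1.0 if x == 2 else 0.0 for x in range(4)] for y in range(4)]
distorted = [[distort_sample(S, x, y, time=0.0, strength=1.0)
              for x in range(4)] for y in range(4)]
```

On the GPU this is one texture fetch per pixel with an offset UV; the point is just that the pass only needs S, not the original geometry behind it.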

As you mentioned, this isn't really a generic solution; it just works okay most of the time.  A lot of games only take it this far, and artifacts that result from having distortion effects in front of transparent objects that are in front of other distortion effects are either accepted as inevitable or dealt with by a graphics programmer conning a designer into tweaking the layout of the problematic scene.  Add depth-of-field effects to the mix and things can get ugly fast.

A generic solution is to lump distortion and transparency into the same object class and resolve the current frame buffer into a texture just before drawing each object.  This is extremely slow as you are resolving the scene to a texture many times per frame.

More sophisticated solutions group distorting and transparent objects into slabs or cascades and then only resolve the scene to a texture between rendering cascades.  When it comes to water...well, water is just given its own cascade.  Admittedly this doesn't make much geometric sense (water is usually a giant plane that crosses all the cascades), but it works well most of the time.

Various adaptations of depth peeling, along with the compute-shader analog that relies on MRTs and sorting data by depth, can also be leveraged to solve this problem.

Edited by nonoptimalrobot


I don't see my engine typically having many distortion effects going on at once, mostly just from fire or explosions.  It may work pretty well that way.

I'm still not really clear on what you do with water.  Water would be some plane, is the cascade you're talking about aligned to this plane?

Another thing I was thinking about is additive blending.  Additive blending doesn't need to be depth sorted, but if it's mixed with alpha-blended and distorted objects, order probably does matter.
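To sanity-check that: additive blending just sums into the frame buffer, and addition commutes, so draw order is irrelevant for purely additive effects. A toy sketch (integer color units, my own made-up values, chosen so the comparison is exact):

```python
# Additive blending accumulates: dst' = dst + src_color. Addition is
# commutative and associative, so purely additive effects don't need
# depth sorting among themselves.
def additive_passes(dst, colors):
    for c in colors:
        dst += c
    return dst

base = 10           # frame buffer value, integer units for exactness
flames = [1, 3, 4]  # contributions from additive sprites

forward = additive_passes(base, flames)
backward = additive_passes(base, list(reversed(flames)))
# forward == backward, whatever the order
```

The caveat is exactly the one above: once an alpha-blended or distorting surface sits between two additive ones, relative order matters again.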

Also I was just doing a bit of reading on premultiplied alpha for the 100th time and it's finally starting to sink in.

http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx

http://blogs.msdn.com/b/shawnhar/archive/2009/11/07/premultiplied-alpha-and-image-composition.aspx

http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BPremultiplied%20alpha%5D%5D
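For what it's worth, the part of those posts that did stick with me: with premultiplied data and the single blend state dst' = src_color + dst * (1 - src_alpha), normal and additive blending collapse into one render state (an additive sprite just stores alpha = 0). A single-channel sketch with made-up values:

```python
# Premultiplied alpha stores (color * alpha, alpha). One blend state,
# dst' = src_color + dst * (1 - src_alpha), then covers both normal
# and additive blending.
def premultiply(color, alpha):
    return (color * alpha, alpha)

def over(dst, src):
    c, a = src
    return c + dst * (1.0 - a)

dst = 0.5  # existing frame buffer value (single channel)

# Normal alpha blend: straight-alpha (1.0, 0.5) premultiplies to (0.5, 0.5).
blended = over(dst, premultiply(1.0, 0.5))  # 0.5 + 0.5 * 0.5 = 0.75

# Additive blend: store alpha = 0 so the destination survives in full.
additive = over(dst, (0.375, 0.0))          # 0.375 + 0.5 = 0.875
```

That's handy for mixing additive and blended particles in one draw call, but as I understand it, it says nothing about sorting.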

This may be slightly off topic, but is this basically saying that I don't need to worry about sorting when I'm doing premultiplied alpha?!?  How are more people not just using this, then?

Edited by ill


I'm still not really clear on what you do with water.  Water would be some plane, is the cascade you're talking about aligned to this plane?

The 'cascades' are generated by slicing up the view frustum with cutting planes that are perpendicular to the view direction.  Objects that use distortion are rendered in the furthest distortion cascade first and the nearest distortion cascade last.  The frame buffer is resolved into a texture between rendering each cascade.  Folding water into this is definitely ad hoc.  The cascades get split into portions that are above the water plane and portions that are below it.  Render the cascades below the water plane normally, then resolve your frame buffer and move on to render the cascades that are above it.  If the camera is below the water, reverse the order.  Ugly, huh?  Obviously none of this solves the problem; it just mitigates artifacts.
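The bucketing step can be sketched like this (the view-space depths, the evenly spaced cutting planes, and the object list are all illustrative assumptions, not a prescription):

```python
# Sketch of depth-sliced cascades for distortion/transparency.
def assign_cascades(objects, near, far, num_cascades):
    """objects: (name, view_depth) pairs. Returns buckets ordered
    far-to-near, i.e. the order they should be rendered in, with a
    frame-buffer resolve between consecutive buckets."""
    step = (far - near) / num_cascades
    buckets = [[] for _ in range(num_cascades)]
    for name, depth in objects:
        # Clamp so objects at exactly 'far' land in the last cascade.
        i = min(int((depth - near) / step), num_cascades - 1)
        buckets[i].append(name)
    return list(reversed(buckets))  # render furthest cascade first

objs = [("fire", 9.0), ("bottle", 3.0), ("haze", 6.5)]
order = assign_cascades(objs, near=1.0, far=11.0, num_cascades=2)
# fire and haze share the far cascade; bottle renders last, after a resolve
```

Water then gets special-cased by splitting each bucket at the water plane, as described above.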

This may be slightly off topic, but is this basically saying that I don't need to worry about sorting when I'm doing premultiplied alpha?!?  How are more people not just using this, then?

Yeah, those posts (especially the second one) are misinformation, or at least poorly presented information.  Alpha blending, whether using pre-multiplied color data or not, still multiplies the contents of your frame buffer by 1-alpha, so the result is order dependent.  Consider rendering to the same pixel 3 different times using an alpha pre-multiplied texture in back-to-front order.  Each pass uses color and alpha values of (c0, a0), (c1, a1) and (c2, a2) respectively, and 'color' is the initial value of the frame buffer.  Note I'm writing aX instead of 1-aX here because it requires fewer parentheses and is therefore easier to visually analyze; this doesn't invalidate the assertion.

Pass 1:              c0 + color * a0
Pass 2:        c1 + (c0 + color * a0) * a1
Pass 3:  c2 + (c1 + (c0 + color * a0) * a1) * a2 = result of in order rendering

Now let's reverse the order:

Pass 1:              c2 + color * a2
Pass 2:        c1 + (c2 + color * a2) * a1
Pass 3:  c0 + (c1 + (c2 + color * a2) * a1) * a0 = result of out of order rendering


Unfortunately:

c2 + (c1 + (c0 + color * a0) * a1) * a2 != c0 + (c1 + (c2 + color * a2) * a1) * a0


You still need to depth sort transparent objects regardless of how you choose to blend them...unless of course you are just doing additive blending, in which case order doesn't matter.
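Plugging concrete numbers into the expansion above shows the two orders diverge. Here dst' = src_color + dst * (1 - src_alpha) is the standard premultiplied 'over' blend, written with the explicit 1 - alpha; the layer values are arbitrary dyadic fractions so the floats stay exact:

```python
# Numeric check that premultiplied "over" blending is order dependent.
# Each pass computes dst' = src_color + dst * (1 - src_alpha).
def over(dst, src_color, src_alpha):
    return src_color + dst * (1.0 - src_alpha)

# (premultiplied color, alpha) per layer, back to front.
layers = [(0.5, 0.5), (0.25, 0.5), (0.125, 0.5)]
color = 0.0  # initial frame buffer value

back_to_front = color
for c, a in layers:
    back_to_front = over(back_to_front, c, a)   # ends at 0.375

front_to_back = color
for c, a in reversed(layers):
    front_to_back = over(front_to_back, c, a)   # ends at 0.65625
```

The two results differ, matching the inequality above: premultiplied 'over' is associative (you can pre-flatten layers), but it is not commutative, so sorting is still required.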
