Multiple screenspace distortion effects

Started by rick_appleton
6 comments, last by rick_appleton 19 years, 5 months ago
After reading this thread about translucent surfaces, I was thinking about multiple distortion effects. As an example I'll use the image shown there. Imagine there was a kind of heat-haze effect in front of the fogged glass. How would we render this?

Using the same concept as mentioned there:

1) Render everything behind the fogged glass (use a clip plane) to a texture.
2) Blur the texture and use it to render the glass (where to? ideally into the same texture).
3) Render everything in front of the glass but behind the heat haze (use clip planes) to a texture (this can be the same texture resulting from 2).
4) Render the heat haze using the altered texture from 3 (this can go into the framebuffer).
5) Render everything in front of the heat haze directly into the framebuffer.

This seems to be quite a hassle to set up, and I'm not sure it's possible to read from and render to the same texture at the same time (probably not), which means we'd need to render to a new texture at stage 4.

Does this seem right?
Yes, that's about it.

The way to do this would be to have two textures (A & B) that you flip between everytime you do a screen-space distortion effect.

so you'd go like this:
1) scene -> A
2) A -> B
3) scene -> B
4) B -> A
5) scene -> A

This way you can have an arbitrary number of distortions and it'll always work. Combine this with a system that figures out which distortions overlap and which can be done in the same flip and it's all good.. ^^
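The flip scheme above can be sketched as a small simulation (a minimal numpy sketch; the `render_scene_slice` and `distort` helpers are stand-ins I've made up for the example, where real code would ping-pong between two GPU render targets):

```python
import numpy as np

H, W = 4, 8

def render_scene_slice(buf, row, value):
    # Stand-in for "render the geometry of this depth slice into buf".
    buf[row, :] = value

def distort(src, dst, shift):
    # Stand-in for a screen-space distortion pass: sample src at an
    # offset and write the result into dst (never back into src).
    dst[:] = np.roll(src, shift, axis=1)

A = np.zeros((H, W))
B = np.zeros((H, W))

render_scene_slice(A, 0, 1.0)   # 1) scene behind the first distorter -> A
distort(A, B, 1)                # 2) first distortion pass: A -> B
render_scene_slice(B, 1, 2.0)   # 3) scene between the distorters -> B
distort(B, A, 1)                # 4) second distortion pass: B -> A
render_scene_slice(A, 2, 3.0)   # 5) scene in front -> A; A holds the final image
```

Because each distortion pass reads one texture and writes the other, no pass ever reads and writes the same surface, which sidesteps the read-while-rendering problem from the original post.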
I gotta say, I wonder whether it's possible to render all your distortions - heat haze, refraction, etc - into a single buffer first, and then use that as the source for a single distortion pass on the image.

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

It is possible to render all the distortion vectors onto a single texture on some hardware. Here's how:

  • Clear distortion map: if floating point and signed, to [0,0,0]; otherwise [0.5,0.5,0.5] (50% gray).
  • For each distorting object, render the distortion vectors into the distortion map, using additive blending. If floating-point, no shifting is necessary. Otherwise, compress vectors to unsigned range before adding, and use signed add blending. Note that the distortion objects should be rendered in back-to-front order.
  • The distortion map now contains accumulated per-pixel distortion of all distorting objects. Use this map when sampling the original color data.


The technique mainly relies on three assumptions: that the distortion vectors can be rendered in the first place (i.e. pixel shader support), that the vectors can be accumulated (floating-point blending is required if a floating-point accumulation map is used), and that dependent texture reads are available to actually apply the calculated distortion in a meaningful way.
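A rough illustration of the accumulation idea (a numpy sketch under the assumptions stated above: a signed floating-point distortion map cleared to zero; the `add_distorter` helper, regions, and offsets are made up for the example):

```python
import numpy as np

H, W = 8, 8
color = np.arange(H * W, dtype=float).reshape(H, W)   # "original color data"

# 1) Clear the distortion map (fp + signed -> zero vectors;
#    an unsigned format would clear to 0.5 instead).
dist_map = np.zeros((H, W, 2))

def add_distorter(dist_map, region, offset):
    # 2) Additively blend one object's distortion vectors into the map.
    ys, xs = region
    dist_map[ys, xs] += offset

add_distorter(dist_map, (slice(2, 5), slice(2, 5)), (1, 0))  # e.g. heat haze
add_distorter(dist_map, (slice(3, 6), slice(3, 6)), (0, 1))  # e.g. fogged glass

# 3) Dependent texture read: sample the color buffer displaced by the
#    accumulated per-pixel distortion (nearest sampling, clamped to edges).
ys, xs = np.indices((H, W))
sy = np.clip(ys + dist_map[..., 0].astype(int), 0, H - 1)
sx = np.clip(xs + dist_map[..., 1].astype(int), 0, W - 1)
result = color[sy, sx]
```

Where the two regions overlap, both offsets accumulate into the map before the single sampling pass, which is the point of the technique.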

-Nik

Niko Suni

Quote:Original post by rick_appleton
After reading this thread about translucent surfaces, I was thinking about multiple distortion effects.

As an example I'll use the image shown there. Imagine there was a kind of heat-haze effect in front of the fogged glass. How would we render this?

Using the same concept as mentioned there:

1) Render everything behind the fogged glass (use a clip plane) to a texture.
2) Blur the texture and use it to render the glass (where to? ideally into the same texture).
3) Render everything in front of the glass but behind the heat haze (use clip planes) to a texture (this can be the same texture resulting from 2).
4) Render the heat haze using the altered texture from 3 (this can go into the framebuffer).
5) Render everything in front of the heat haze directly into the framebuffer.

This seems to be quite a hassle to set up, and I'm not sure it's possible to read from and render to the same texture at the same time (probably not), which means we'd need to render to a new texture at stage 4.

Does this seem right?


I just replied to the thread you reference, but I'll post again here for completeness :) There's a presentation here that talks about different kinds of screen space techniques that can be achieved on today's GPUs. Part of this presentation talks specifically about heat/haze distortion:

http://www.ati.com/developer/gdce/Oat-ScenePostprocessing.pps

Good luck --Chris
Nik02 and superpig: something like that would be possible I guess, but then you'd have the problem that the haze might sample from geometry that is nearer than the haze effect.

The fogging should let you see slightly 'around' a corner, but if you only distort the final image, the things 'around' the corner were never rendered, so they can never show up.
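The objection can be shown with a tiny toy example (a made-up 1-D "scanline" scene; the numbers are purely illustrative): once near geometry has overwritten far geometry in the final image, a screen-space distortion can only smear what survived, never reveal the hidden pixels a true refraction would bend around the corner.

```python
import numpy as np

FAR, NEAR = 1.0, 2.0

scanline = np.full(8, FAR)      # far wall fills the whole view
scanline[4:] = NEAR             # near pillar occludes the right half

# Distort the *final* image: shift every sample one pixel to the right.
distorted = scanline[np.clip(np.arange(8) + 1, 0, 7)]

# Pixel 3 now shows the near pillar's color; the far-wall pixels that
# were hidden behind the pillar were never written, so no screen-space
# offset can ever bring them back.
```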
Then again, all graphics programming really is, is a big assorted series of hacks. If the visual result is close enough to fool the eye, then I would consider the technique that produces it valid enough.

I fully understand your points though, and they could very well be implemented - but with some more work in both pre- and post-effect steps. [smile]

-Nik

Niko Suni

Entirely true.

