A phenomenological scattering model

10 comments, last by Hodgman 8 years, 1 month ago

[context]
A Phenomenological Scattering Model for Order-Independent Transparency
Translucent objects such as fog, smoke, glass, ice, and liquids are pervasive in cinematic environments because they frame scenes in depth and create visually compelling shots. Unfortunately, they are hard to simulate in real-time and have thus previously been rendered poorly compared to opaque surfaces in games.

This paper introduces the first model for a real-time rasterization algorithm that can simultaneously approximate the following transparency phenomena: wavelength-varying ("colored") transmission, translucent colored shadows, caustics, partial coverage, diffusion, and refraction. All render efficiently on modern GPUs by using order-independent draw calls and low bandwidth. We include source code for the transparency and resolve shaders.

[Split off from another thread]
This is completely off topic, but as for sorting and intersections, I can't wait to try out McGuire's phenomenological scattering model that was also published recently. It seems like a complete hack that should never work, but the presented results do look very robust... TL;DR is that you don't sort at all, use a magic fixed function blend mode into a special MRT a-buffer, then at the end average the results into something that magically looks good and composite over the opaques.
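The TL;DR in shader form is roughly the following -- a sketch in HLSL of the weighted blended OIT structure that the paper builds on, not its actual code (the paper adds further targets for the diffusion/refraction terms, and all of the names and render states here are my own placeholders):

```hlsl
// Accumulation pass: render all transparents in any order.
// Depth test ON, depth write OFF.
// RT0 (RGBA16F), cleared to 0, blend ONE / ONE            -> sum of weighted premultiplied colour + weighted alpha
// RT1 (R16F),    cleared to 1, blend ZERO / INV_SRC_COLOR -> running product of (1 - alpha), the "revealage"
struct OitOutput
{
    float4 accum  : SV_Target0;
    float4 reveal : SV_Target1;
};

OitOutput WriteOitTargets(float3 premultipliedColour, float alpha, float weight)
{
    OitOutput o;
    o.accum  = float4(premultipliedColour, alpha) * weight;
    o.reveal = alpha.xxxx;   // the blend state above turns this into product(1 - alpha_i)
    return o;
}

// Resolve pass: full-screen quad composited over the opaques with blend INV_SRC_ALPHA / SRC_ALPHA.
Texture2D    AccumTexture;
Texture2D    RevealTexture;
SamplerState PointClampSampler;

float4 ResolvePS(float4 svPos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 accum  = AccumTexture.SampleLevel(PointClampSampler, uv, 0);
    float  reveal = RevealTexture.SampleLevel(PointClampSampler, uv, 0).r;

    float3 average = accum.rgb / max(accum.a, 1e-5);  // weighted average of all the layers that landed here
    return float4(average, reveal);                   // alpha = revealage, consumed by the composite blend
}
```

Both blend modes are commutative, which is the entire trick -- draw order never matters, so there's nothing to sort.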

[edit]
So far in my game, I've been using an improved version of stippled deferred translucency, which supports the same phenomena (except the shadow-map-based parts of the above)... but the analysis/pattern-selection and compositing passes are kinda costly, it relies on deferred (and I'd like to move to Forward+), it only works with 4 layers, and getting it to work with anti-aliasing is really painful...

On the other hand, "a PSM for OIT" is really simple (simpler than should be possible!) and works fine with Forward+, so I'm intrigued. I remember seeing earlier versions of this work in previous years, but the results weren't that convincing so I dismissed it out of hand... but this year's publication shows fairly artifact-free results.



I can't wait to try out McGuire's phenomenological scattering model that was also published recently. It seems like a complete hack that should never work, but the presented results do look very robust... TL;DR is that you don't sort at all, use a magic fixed function blend mode into a special MRT a-buffer, then at the end average the results into something that magically looks good and composite over the opaques.

The main issue that I ran into with blended OIT is that the weighting takes away bits that you need for representing a wide range of intensity values for the actual computed radiance. So if your max intensity * max blend weight > FP16Max, you end up with overflow which is difficult (or maybe impossible) to recover from.

The main issue that I ran into with blended OIT is that the weighting takes away bits that you need for representing a wide range of intensity values for the actual computed radiance. So if your max intensity * max blend weight > FP16Max, you end up with overflow which is difficult (or maybe impossible) to recover from.

If that's the main issue, that's pretty good -- I was expecting horrible ordering artifacts in certain situations :lol:
To get around that, could you divide all your intensity values by max-blend-weight before returning from your shaders, to give you that headroom (and then have a new problem of potential colour banding)? I guess the other work-around would be to go to full FP32...
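For the first idea, something like this is what I have in mind (just a sketch -- MAX_BLEND_WEIGHT is whatever the upper clamp of the chosen weight function happens to be, with 3e3 used here purely as an example):

```hlsl
// Hypothetical pre-scale to buy back FP16 headroom: since the resolve computes
// accum.rgb / accum.a, a constant scale applied to both cancels out, so it never
// needs to be undone -- the cost is quantisation/banding at the low end instead.
static const float MAX_BLEND_WEIGHT = 3000.0; // example: upper clamp of the weight function

float4 WriteAccumPreScaled(float3 premultipliedColour, float alpha, float weight)
{
    float scaledWeight = weight / MAX_BLEND_WEIGHT;           // now <= 1
    return float4(premultipliedColour, alpha) * scaledWeight; // weighted value can't exceed the raw radiance
}
```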


To get around that, could you divide all your intensity values by max-blend-weight before returning from your shaders, to give you that headroom (and then have a new problem of potential colour banding)? I guess the other work-around would be to go to full FP32...

Yeah, that's pretty much what I was getting at: you have to trade in some precision for the weighting, which is going to be a problem if you're already at the limits of FP16 due to your lighting environment. In practice you actually need more headroom than just MaxWeight * MaxIntensity, since you need to account for summing in multiple layers of overdraw (basically 1 bit per layer).
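Back-of-the-envelope, using a 3e3 weight clamp and 8 layers of overdraw purely as example numbers:

$$\text{max usable radiance} \approx \frac{\mathrm{FP16}_{\max}}{w_{\max}\, N_{\text{layers}}} = \frac{65504}{3000 \times 8} \approx 2.7$$

which isn't a lot of headroom if the lighting environment is already making full use of FP16.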

I can't really comment on ordering issues, since I never implemented it in a real game scenario. But if it requires picking weighting values on a per-scene basis, then I think it's a no-go for me.

What's stopping us from implementing it now? Doesn't the paper contain all the information we'd need?

What's stopping us from implementing it now? Doesn't the paper contain all the information we'd need?

What's stopping me is that I've got too much other work to do right now :lol:

I gave it a whack, just to see. The method is so simple, it's easy to integrate.

[Image: Comparison.jpg]

The "DepthWeighted" image is using the "A phenomenological scattering model" maths. I'm just using the same weighting algorithm that was presented in the paper. And just reflections here -- no transmission/refraction.
There are some more screenshots here (with some trees and grass and things).

It seems to work best when there are few layers. For certain types of geometry, it might be ok. But some cases can turn to mush.

Assuming a lighting method similar to the paper's, I didn't notice the weighting hurting precision too much... If you have a lot of layers, you're going to get mush anyway.

The worst case for weighting issues might be distant geometry with few layers (given that this will be multiplied by a small number, and divided again by that number). Running the full lighting algorithm will be expensive, anyway -- so maybe it would be best to run a simplified lighting model.

It may work best for materials like those in the glass demo scene from the paper, which is dominated by refractions (and those aren't affected by the weighting or sorting artifacts).
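For reference, the depth weight I'm describing is along the lines of the clamped fall-off functions suggested in the earlier weighted blended OIT paper -- roughly this shape (constants are the illustrative ones from that paper, not something I've tuned):

```hlsl
// One of the suggested depth/alpha weights (illustrative constants -- the paper
// gives several variants and recommends tuning them to your depth range/precision).
// viewDepth is positive view-space depth, alpha is the fragment's coverage.
float DepthWeight(float viewDepth, float alpha)
{
    float falloff = 0.03 / (1e-5 + pow(viewDepth / 200.0, 4.0));
    return alpha * clamp(falloff, 1e-2, 3e3);
}
```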

Do any of those transparency methods (stochastic, weighted) work on DX9?

Stochastic won't work on DX9 because it requires special MSAA behaviour.
The referenced sorting method is also way outside of DX9's range. It needs unordered access.

The depth-weighted implementation should be OK on DX9, though... It just needs 2 MRTs and independent blend.

I gave it a whack, just to see. The method is so simple, it's easy to integrate.
There are some more screenshots here (with some trees and grass and things).

What about a more interesting example with different colors overlapping? Also, can you provide shader code? I have a test program just begging me to implement this algorithm into it. xd

Oh, missed the link. Well... That's really not very impressive sadly... It seems like an improvement over WBOIT, but not by much...

