A phenomenological scattering model

I can't wait to try out McGuire's phenomenological scattering model, which was also published recently. It seems like a complete hack that should never work, but the presented results look very robust... TL;DR: you don't sort at all, you use a magic fixed-function blend mode into a special MRT a-buffer, then at the end you average the results into something that magically looks good and composite it over the opaques.
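For anyone who wants to see the shape of the trick, here's a minimal CPU-side NumPy sketch of the accumulate-then-resolve idea. The resolve step follows the weighted-blended OIT papers; the `weight` function here is a made-up stand-in, not the exact polynomial from the paper.

```python
import numpy as np

def weight(depth):
    # Stand-in depth weight: closer fragments get larger weights.
    # The actual papers use tuned rational polynomials of depth and alpha.
    return np.clip(10.0 / (1e-5 + (depth / 10.0) ** 3), 1e-2, 3e3)

def composite_oit(fragments, background):
    """fragments: list of (rgb, alpha, depth) tuples, in ANY order (no sorting)."""
    accum_rgb = np.zeros(3)   # sum of w_i * a_i * C_i  (first MRT, additive blend)
    accum_w = 0.0             # sum of w_i * a_i
    revealage = 1.0           # product of (1 - a_i)   (second MRT, multiplicative blend)
    for rgb, alpha, depth in fragments:
        w = weight(depth)
        accum_rgb += w * alpha * np.asarray(rgb, dtype=float)
        accum_w += w * alpha
        revealage *= (1.0 - alpha)
    # Resolve: weighted-average colour, blended over the background by coverage.
    avg = accum_rgb / max(accum_w, 1e-8)
    return avg * (1.0 - revealage) + np.asarray(background, dtype=float) * revealage
```

The accumulation is just a sum and a product, both commutative, which is why fragment order doesn't matter.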

 

The main issue that I ran into with blended OIT is that the weighting takes away bits that you need for representing a wide range of intensity values for the actual computed radiance. So if your max intensity * max blend weight > FP16Max, you end up with overflow, which is difficult (or maybe impossible) to recover from.
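The overflow is easy to reproduce with NumPy's float16 (the intensity and weight values below are hypothetical, just chosen so their product exceeds the FP16 range):

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max   # 65504.0

# Hypothetical numbers: a bright HDR radiance and a large blend weight.
intensity = np.float16(20000.0)
blend_weight = np.float16(8.0)

weighted = intensity * blend_weight   # 160000 > 65504, so FP16 saturates to inf
# Once an inf lands in the additive accumulation target, the resolve pass's
# division can't bring it back to a finite colour.
```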

If that's the main issue, that's pretty good -- I was expecting horrible ordering artifacts in certain situations :lol:
To get around that, could you divide all your intensity values by the max blend weight before returning from your shaders, to give you that headroom (and then have a new problem of potential colour banding)? I guess the other workaround would be to go to full FP32...
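A toy version of that pre-divide workaround (the numbers and the `MAX_WEIGHT` bound are hypothetical; the scale would be restored in higher precision at resolve time):

```python
import numpy as np

MAX_WEIGHT = 8.0   # assumed upper bound on the blend weight

def shader_output(radiance, weight):
    # Naive: weight * radiance can exceed the FP16 range and saturate to inf.
    naive = np.float16(radiance) * np.float16(weight)
    # Workaround: pre-divide by the maximum possible weight before writing out,
    # so the weighted value can never exceed the raw radiance.
    prescaled = np.float16(radiance / MAX_WEIGHT) * np.float16(weight)
    return naive, prescaled

naive, prescaled = shader_output(20000.0, 8.0)
# naive overflows to inf; prescaled stays finite, and multiplying the resolved
# average back by MAX_WEIGHT (in FP32) restores the original scale.
```

The banding risk comes from small radiance values being pushed down toward the bottom of FP16's range, where precision degrades.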

Yeah, that's pretty much what I was getting at: you have to trade away some precision for the weighting, which is going to be a problem if you're already at the limits of FP16 due to your lighting environment. In practice you actually need more headroom than just MaxWeight * MaxIntensity, since you need to account for summing multiple layers of overdraw (roughly one extra bit each time the layer count doubles).
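The overdraw headroom point shows up in a toy FP16 accumulation (per-layer contribution and layer count are made-up numbers):

```python
import numpy as np

# Even if each weighted fragment fits in FP16, the *sum* over layers may not.
per_layer = np.float16(10000.0)   # weighted contribution of one layer, hypothetical
accum = np.float16(0.0)
for layer in range(8):            # 8 layers ~ 3 extra bits of range needed
    accum = np.float16(accum + per_layer)
# 8 * 10000 = 80000 > 65504, so the FP16 accumulator has overflowed to inf.
```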

 

I can't really comment on ordering issues, since I never implemented it in a real game scenario. But if it requires picking weighting values on a per-scene basis, then I think it's a no-go for me.

Edited by MJP


What's stopping us from implementing it now? Doesn't the paper contain all the information we'd need?

What's stopping me is that I've got too much other work to do right now :lol:

Stochastic won't work on DX9 because it requires special MSAA behaviour. The referenced sorting method is also way outside of DX9's capabilities; it needs unordered access.

The depth-weighted implementation should be OK on DX9, though... it just needs two MRTs and independent blending.


I gave it a whack, just to see. The method is so simple, it's easy to integrate.
 


The "DepthWeighted" image is using the maths from "A phenomenological scattering model" -- the same weighting algorithm that was presented in the paper. And it's just reflections here -- no transmission/refraction.
There are some more screenshots here (with some trees and grass and things).
 
It seems to work best when there are few layers. For certain types of geometry, it might be ok. But some cases can turn to mush.

Assuming a lighting method similar to the paper's, I didn't notice the weighting hurting precision too much... If you have a lot of layers, you're going to get mush anyway.

The worst case for weighting issues might be distant geometry with few layers (given that the radiance will be multiplied by a small number, and divided again by that number at resolve). Running the full lighting algorithm for distant geometry will be expensive anyway -- so maybe it would be best to run a simplified lighting model there.
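That distant-geometry round trip can be sketched like this (the radiance and weight values are made up; the point is that the scale-down into FP16 storage and the scale-up at resolve introduce a quantisation error):

```python
import numpy as np

radiance = 1234.5            # hypothetical fragment radiance
distant_weight = 1e-3        # tiny depth weight for far geometry, hypothetical

# What the blend does: scale down into FP16 storage, scale back up at resolve.
stored = np.float16(radiance * distant_weight)
recovered = float(stored) / distant_weight

# recovered differs slightly from radiance; the relative error is bounded by
# FP16's ~11-bit significand, so it stays small but non-zero.
rel_error = abs(recovered - radiance) / radiance
```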

It may work best for materials like the glass demo scene from the paper, which is dominated by the refractions (and those aren't affected by the weighting or sorting artifacts).

What about a more interesting example with different colors overlapping? Also, can you provide shader code? I have a test program just begging for me to implement this algorithm in it. xD

 

Oh, I missed the link. Well... that's really not very impressive, sadly... It seems like an improvement over WBOIT, but not by much...

Edited by theagentd
