OIT (Order Independent Transparency)

Started by
5 comments, last by vlj 9 years, 6 months ago

Hi all,

When you sort transparent objects back-to-front, the result isn't always correct, because you sort by a single distance per object: a large object can overlap others in depth, and the sort itself requires building and sorting a dynamic array every frame.

Modern research offers solutions to this; there are currently two: Intel's approach and the one by Morgan McGuire and Louis Bavoil.

Here are links to both methods:

https://software.intel.com/en-us/blogs/2013/07/18/order-independent-transparency-approximation-with-pixel-synchronization

http://jcgt.org/published/0002/02/09/

Weighted Blended Order-Independent Transparency looks like it performs better, but is it good enough to use, or is the Intel solution still the best?

Thanks for the help


The Intel solution is based on a quite recent and, as far as I know, Intel-only extension. The Weighted Blended OIT is more general, but it is an approximation. There are also other methods (A-buffers for example). I don't think this is a solved problem and in many cases sorting is still probably the way to go. The best method depends on what you are trying to do and the scene you are trying to display.

Maybe it's already sufficient for your application to sort the individual triangles back-to-front.

You can store the vertices statically on the GPU and issue a sorted list of indices every frame, for example. It's not too bad.
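The per-frame sort described above can be sketched on the CPU like this. This is a minimal illustration, not an engine-ready implementation: the function name and the centroid-distance sort key are my own choices, and in practice the resulting index list would be uploaded to a GPU index buffer each frame while the vertex buffer stays static.

```python
# Hypothetical sketch: sort triangles back-to-front by centroid distance
# to the camera and emit a flat index list for that frame.
import math

def sort_triangles_back_to_front(vertices, triangles, camera):
    """vertices: list of (x, y, z) tuples (static data);
    triangles: list of (i0, i1, i2) index triples;
    camera: (x, y, z) camera position.
    Returns a flat index list with the farthest triangle first."""
    def centroid_dist(tri):
        # Distance from the camera to the triangle's centroid.
        cx = sum(vertices[i][0] for i in tri) / 3.0
        cy = sum(vertices[i][1] for i in tri) / 3.0
        cz = sum(vertices[i][2] for i in tri) / 3.0
        return math.dist((cx, cy, cz), camera)
    ordered = sorted(triangles, key=centroid_dist, reverse=True)
    return [i for tri in ordered for i in tri]
```

Note that a centroid sort still mis-orders intersecting or mutually overlapping triangles, which is exactly the limitation the question is about.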

If you really need perfect transparency you may consider depth peeling.

It's a multipass technique and runs on older hardware.
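To make the idea concrete, here is a CPU-side sketch of what depth peeling computes for a single pixel, assuming all fragments covering that pixel are known. The function name and fragment tuple layout are mine; on the GPU each "pass" is a full render of the scene that keeps only the nearest fragment strictly behind the previous peel.

```python
def depth_peel(fragments, background, max_peels=4):
    """Sketch of depth peeling for one pixel.
    fragments: list of (r, g, b, a, z) with straight (non-premultiplied) alpha;
    background: (r, g, b). Each pass peels the nearest not-yet-peeled layer;
    the peeled layers are then blended front to back."""
    peels, last_z = [], -float("inf")
    for _ in range(max_peels):
        # Keep only fragments strictly behind the previous peel depth.
        candidates = [f for f in fragments if f[4] > last_z]
        if not candidates:
            break
        nearest = min(candidates, key=lambda f: f[4])
        peels.append(nearest)
        last_z = nearest[4]
    # Front-to-back "under" compositing of the peeled layers.
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for r, g, b, a, z in peels:
        for i, c in enumerate((r, g, b)):
            color[i] += transmittance * a * c
        transmittance *= (1.0 - a)
    return tuple(c + transmittance * bg for c, bg in zip(color, background))
```

With enough peels this is exact; capping `max_peels` trades correctness for the number of scene passes, which is the real cost of the technique.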

The cheapest trick is to draw in any order and tell everyone it's correct. Most people don't notice alpha-blending errors anyway :)

You can sort by object and then also use Weighted Blended Order-Independent Transparency.

There's no real "best" here. Just like any other engineering problem there's different solutions, with different trade-offs. The Intel technique has good results, but requires a recent Intel GPU which usually makes it a deal-breaker. Blended OIT can work on almost any hardware, but can produce sub-par results for certain scenes and generally relies on per-scene tweaking of the blending weights. What's better for you will depend on what platform and hardware you're targeting, as well as the kind of content you'll have in your game or app.

I've played with the Weighted Blended OIT method. There is a lot to like about it:

  • Simple to implement
  • Performance is very good
  • Combines nicely with offscreen particles (where particles are rendered into a downsampled offscreen buffer)
  • Although it is an approximation, it actually fixes the visual "popping" that occurs with particle sorting
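To see why the method is order-independent, here is a small CPU reference of the Weighted Blended OIT accumulate-and-composite math for one pixel. The function name is mine, and the exact depth-weight shape below is only an assumption modeled on the family of weight functions the paper suggests; the key property is that the result is built from a sum and a product, both of which are order-independent.

```python
def composite_wboit(fragments, background):
    """Weighted Blended OIT for one pixel (CPU sketch).
    fragments: list of (r, g, b, a, z) with straight alpha and z in [0, 1];
    background: (r, g, b)."""
    accum = [0.0, 0.0, 0.0, 0.0]  # weighted premultiplied color + weighted alpha
    revealage = 1.0               # product of (1 - alpha) over all fragments
    for r, g, b, a, z in fragments:
        # Depth-based weight scaled by alpha (shape is an assumption,
        # modeled on the paper's suggested weight functions).
        w = a * max(1e-2, 3e3 * (1.0 - z) ** 3)
        accum[0] += r * a * w
        accum[1] += g * a * w
        accum[2] += b * a * w
        accum[3] += a * w
        revealage *= (1.0 - a)
    if accum[3] < 1e-5:
        return background
    avg = [c / accum[3] for c in accum[:3]]  # weighted average color
    return tuple(bg * revealage + c * (1.0 - revealage)
                 for bg, c in zip(background, avg))
```

Because addition and multiplication commute, shuffling the fragment order leaves the result unchanged, which is why no sorting pass is needed.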

However, one big drawback is that it doesn't support emissive transparency, meaning effects like flames that are done with additive blending.

At first glance it looks like it should work because the method works with RGB in premultiplied form, so you would think that you could just pass in an alpha of zero for emissive and it might just work. Unfortunately not. The weight function is scaled by alpha, so when alpha is zero, the weight also goes to zero and the effect disappears.
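The failure mode is easy to demonstrate in isolation. Using the same assumed weight shape as above (the function name and constants are mine, modeled on the paper's suggested weights), an "emissive" fragment encoded with alpha = 0 gets weight 0, so its premultiplied color contributes nothing no matter how bright it is:

```python
def wboit_weight(a, z):
    # Depth-based weight scaled by alpha (shape is an assumption,
    # modeled on the paper's suggested weight functions).
    return a * max(1e-2, 3e3 * (1.0 - z) ** 3)

# A fragment with alpha = 0 (the natural encoding for additive/emissive
# effects in premultiplied form) gets weight 0 and vanishes entirely:
print(wboit_weight(0.0, 0.5))  # 0.0
print(wboit_weight(0.5, 0.5) > 0.0)  # True
```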

The Intel solution can give exact results if you accept the performance hit (in that case it's a classic linked-list method). The quality trade-off comes from removing some layers while approximating the light transmittance of all layers.
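The exact case is the familiar A-buffer resolve: each pixel collects a list of its fragments, then a resolve pass sorts them by depth and blends front to back. Here is a one-pixel CPU sketch of that resolve (function name and tuple layout are mine):

```python
def resolve_fragment_list(fragments, background):
    """Exact OIT resolve for one pixel: sort the collected fragment list by
    depth and blend front to back with "under" compositing.
    fragments: list of (r, g, b, a, z) with straight alpha;
    background: (r, g, b)."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for r, g, b, a, z in sorted(fragments, key=lambda f: f[4]):  # nearest first
        for i, c in enumerate((r, g, b)):
            color[i] += transmittance * a * c
        transmittance *= (1.0 - a)
    return tuple(c + transmittance * bg for c, bg in zip(color, background))
```

The approximate variants keep only the K most significant layers of this list but still fold every fragment's alpha into the transmittance, which is the trade-off described above.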

It can be simplified a lot using pixel sync on Haswell through the GL_INTEL_fragment_shader_ordering extension in OpenGL (I don't know about DX11, but Grid 2 uses it).

GL_INTEL_fragment_shader_ordering basically lets you update the alpha value on the fly, because you can ensure that two fragment shader invocations won't access the same memory location at the same time.

GL_INTEL_fragment_shader_ordering is also available on AMD GCN with the latest driver.

This topic is closed to new replies.
