#Actualmyers

Posted 05 November 2013 - 05:37 AM

This paper on Dual Depth Peeling also presents an alternative approach to order-independent transparency that they refer to as "Weighted Average" (hidden away on page 8). It sounds too good to be true: it's just one pass over the geometry, then a full-screen blending pass. The results are an approximation, but judging from the pictures in the paper, they're indistinguishable from doing it the correct way.

 

It uses MRT with a 16-bit, 4-channel target to accumulate RGBA values, and a second target to count the layers of transparency at each pixel. Then the full-screen pass divides the accumulated RGBA by the layer count and blends the result into the main framebuffer. Super simple.
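
For anyone curious what the full-screen pass actually computes, here's a rough C++ sketch of the resolve math as I read it from the paper. The real thing is a fragment shader, the struct and function names are placeholders, and the exact weighting is my reading of page 8, so double-check it against the paper:

#include <algorithm>
#include <cmath>

struct RGBA { float r, g, b, a; };

// 'accum' holds sum(colour_i * alpha_i) in rgb and sum(alpha_i) in a,
// written with additive blending during the geometry pass.
// 'count' is the number of transparent fragments at this pixel
// (the second MRT target, also additively blended).
// 'background' is the opaque colour already in the main framebuffer.
RGBA resolveWeightedAverage(RGBA accum, float count, RGBA background)
{
    if (count == 0.0f)
        return background;                       // nothing transparent here

    float avgAlpha = accum.a / count;             // average per-layer alpha

    // Alpha-weighted average colour of all layers at this pixel.
    float invSumAlpha = 1.0f / std::max(accum.a, 1e-5f);
    float avgR = accum.r * invSumAlpha;
    float avgG = accum.g * invSumAlpha;
    float avgB = accum.b * invSumAlpha;

    // Treat the pixel as 'count' identical layers with the average colour
    // and alpha; what's left of the background is (1 - avgAlpha)^count.
    float transmittance = std::pow(1.0f - avgAlpha, count);

    RGBA out;
    out.r = avgR * (1.0f - transmittance) + background.r * transmittance;
    out.g = avgG * (1.0f - transmittance) + background.g * transmittance;
    out.b = avgB * (1.0f - transmittance) + background.b * transmittance;
    out.a = 1.0f;
    return out;
}

As far as I can tell, the geometry pass just writes colour*alpha and alpha additively into the first target and 1 into the count target, with no sorting anywhere.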

 

I haven't tried it myself, but it sounds great for particles.  Anyone try it?

 

I thought it looked too good to be true at first as well. Unfortunately, trying it out confirmed this. The inaccuracies quickly become quite noticeable if there's a significant difference between the brightness of overlapping fragments. The main issue is that the transparent fragments don't look ordered at all, so a bright fragment in front of a dull one looks identical to a dull one in front of a bright one. In my view, this looks quite glaringly wrong.

 

I now use per-pixel linked lists to do OIT. It's obviously more expensive than the weighted average approach, but it produces 100% accurate results, and I think it's worth the tradeoff.
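
For reference, this is roughly the shape of the per-pixel linked list approach. It's a CPU-side sketch only: in practice the nodes live in a GPU buffer appended with atomics from the fragment shader, the per-pixel head indices sit in a separate buffer, and the names here are made up for illustration.

#include <algorithm>
#include <vector>

struct Fragment {
    float r, g, b, a;   // colour and alpha of this transparent fragment
    float depth;        // depth used for sorting
    int   next;         // index of the next node in this pixel's list (-1 = end)
};

struct Colour { float r, g, b; };

// Walk one pixel's list, sort the fragments by depth, then blend them
// back-to-front over the opaque background.
Colour resolvePixel(const std::vector<Fragment>& nodes, int headIndex, Colour background)
{
    std::vector<Fragment> frags;
    for (int i = headIndex; i != -1; i = nodes[i].next)
        frags.push_back(nodes[i]);

    // Farthest first, so the standard "over" blend is applied in order.
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth > b.depth; });

    Colour out = background;
    for (const Fragment& f : frags) {
        out.r = f.r * f.a + out.r * (1.0f - f.a);
        out.g = f.g * f.a + out.g * (1.0f - f.a);
        out.b = f.b * f.a + out.b * (1.0f - f.a);
    }
    return out;
}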

 

I still use weighted average as a fallback, though. It remains preferable to no OIT at all, and might be okay if you're only using it for fast-moving particles or something. I'd recommend giving it a shot, anyway, as it's pretty trivial to implement.

