punmaster

OpenGL Alpha blending and the depth test


I'm sure you've heard this a hundred times, and I apologize, but I still don't quite understand it. I'm having the classic problem of not being able to see certain objects through certain other transparent/translucent objects. I know why this is happening: the transparent objects are being drawn too soon, and things behind them are failing the depth test.

I've heard the proposed solution of drawing all the opaque objects first, disabling writes to the depth buffer, and drawing the transparent objects in a certain optimal order. Still, this is not practical for a number of reasons. First, the viewer is constantly moving, which means the optimal draw order could be vastly different from frame to frame. Second, many of my objects (and textures) have both alpha-channeled and opaque parts. And third, I have a lot of complex alpha-channeled objects in my scene, and it's quite possible that part of one object would need to be drawn before part of another object but after a different part of the first object. Basically, the "optimal drawing order" would need to be more granular than an object-by-object approach, or possibly even a face-by-face approach, to get the desired effect.

I know very little about OpenGL's built-in alpha handling features, but I was wondering if it would be possible to use the depth buffer to pre-calculate the distance from the viewer on a per-pixel basis, record the results, and then use that in place of the depth buffer for drawing in a second pass. I obviously don't know exactly what I'm talking about in terms of what I need to do here, but my ultimate goal is a system of rendering my alpha-enabled models so that everything is visible through everything else, no matter where the viewer is located. Sorry again for not understanding this better, and thank you.
EDIT: Slight modification to my original plan: What if you made the video card render every pixel (with depth data still intact), without discarding any at all? Then, once the entire frame was rendered, it could go back and do a depth test on the rendered buffer before copying it to the screen. It would take more memory, but it would make render order irrelevant. [Edited by - punmaster on June 10, 2009 1:29:08 PM]
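(For what it's worth, the scheme described in the EDIT is essentially an "A-buffer": keep every fragment per pixel, then sort and blend at the end. A minimal CPU sketch of the resolve step, with a made-up `Fragment` type for illustration, not any real OpenGL API:)

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-pixel fragment record: depth plus straight (non-premultiplied) color.
struct Fragment {
    float depth;       // eye-space distance, larger = farther
    float r, g, b, a;
};

// Resolve one pixel: sort back-to-front, then standard "over" blending,
// i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) done in software.
// bg is the opaque color already in the framebuffer.
void resolvePixel(std::vector<Fragment>& frags,
                  const float bg[3], float out[3]) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& x, const Fragment& y) {
                  return x.depth > y.depth;   // farthest first
              });
    out[0] = bg[0]; out[1] = bg[1]; out[2] = bg[2];
    for (const Fragment& f : frags) {
        // dst = src.a * src + (1 - src.a) * dst
        out[0] = f.a * f.r + (1.0f - f.a) * out[0];
        out[1] = f.a * f.g + (1.0f - f.a) * out[1];
        out[2] = f.a * f.b + (1.0f - f.a) * out[2];
    }
}
```

(Hardware of this era doesn't expose per-pixel fragment lists, which is exactly why the multi-pass approximations discussed below exist.)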

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);

alternatively you can look into MSAA; see alpha-to-coverage (GL_SAMPLE_ALPHA_TO_COVERAGE)

What you're looking for is called "depth peeling".

BTW, I have no idea what zedz is going on about here.

zedz: I am assuming you were suggesting I put the following code in my initialization routine:

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);

I tried this and nothing changed. Could you please be more specific?

Sneftel: I did a quick google search of "depth peeling" and found relevant results, but I'm afraid I don't understand how to actually implement it in OpenGL. The closest I came was this "http://developer.nvidia.com/object/Interactive_Order_Transparency.html" document from nVidia, but I haven't had a chance to read the entire thing. Do you have any better suggestions on where I should begin? I have never used GLSL before, will that be a requirement? Thank you for your helpful advice.

That paper describes what I'm talking about. GLSL is actually not a requirement -- the technique can be implemented totally with fixed functionality.

After reading the nVidia article, I fully understand depth peeling in concept. Still, the implementation they described went pretty far over my head. I searched around a little, but didn't find anything too helpful. Can anyone here give me an overview and possibly some resources for implementing this? Also, is there any way to avoid needing a ton of rendering passes? (I saw something about "double depth peeling", which also makes sense in theory, but the implementation details for that were even worse.) Thank you.

A few solutions, all with their respective drawbacks:

A) The common way: render all opaque objects, disable depth writes, and draw all transparent objects in back-to-front order. The transparent objects need to be sorted at some granularity, usually per-object. This can still create artifacts, because self-intersecting or concave objects need smaller-granularity sorting. But it's relatively cheap. This is the usual system as used by 99% of all games for translucent geometry.
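Option A in code, roughly. The `TransparentObject` struct and distance key are made up for illustration, and the GL calls are shown as comments since they depend on your renderer:

```cpp
#include <algorithm>
#include <vector>

// Illustrative stand-in for whatever your engine uses per draw call.
struct TransparentObject {
    float x, y, z;   // object center in world space
    // ... mesh data, texture, etc.
};

// Sort farthest-first relative to the eye, recomputed every frame
// since the viewer moves.
void sortBackToFront(std::vector<TransparentObject>& objs,
                     float eyeX, float eyeY, float eyeZ) {
    auto dist2 = [&](const TransparentObject& o) {
        float dx = o.x - eyeX, dy = o.y - eyeY, dz = o.z - eyeZ;
        return dx * dx + dy * dy + dz * dz;  // squared distance suffices for ordering
    };
    std::sort(objs.begin(), objs.end(),
              [&](const TransparentObject& a, const TransparentObject& b) {
                  return dist2(a) > dist2(b);
              });
}

// Per frame:
//   draw opaque objects normally;
//   glDepthMask(GL_FALSE);                 // keep the depth test, stop depth writes
//   glEnable(GL_BLEND);
//   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//   sortBackToFront(transparents, ...);    // then draw them in this order
//   glDepthMask(GL_TRUE);
```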

B) Alpha tested geometry. Essentially binary transparency. Perfect choice for things like leaves or objects shaped by using binary opacity masks. Very cheap, and entirely unproblematic since sorting is not required for pixel perfect rendering. Modern hardware with coverage sample AA can very well anti-alias the hard edges of the alpha mask to produce nice fades, without requiring any kind of sorting. This will not work for semi-transparent or translucent objects though.

C) Depth peeling. Basically per-pixel sorting. Given enough layers, it will generate a pixel perfect solution for even complex self-intersecting transparent or translucent geometry. It's very performance heavy though, and (as you noticed) not trivial to implement and optimize. Possibilities of optimizing it are:

* Double depth peeling
* ZT-buffer (ShaderX5, ch 2.8). Currently relies on undefined behaviour, since it requires simultaneous read/write access to a render target. May be impossible on most mainstream hardware, but could become practical in the future.
* Stencil routing (ShaderX6, ch 3.5). Can render many layers simultaneously without separate passes. Very advanced. Requires D3D10 and advanced shader support.

D) Screendoor transparency. Easy technique, doesn't require sorting, but can generate weird artifacts. Usually doesn't look very good, unless you don't have a lot of overlapping transparent objects and you're on a very high screen resolution.

E) Super-sampling with randomly rotated stipple patterns. Essentially sub-texel screendoor transparency with an additional resolve pass. With high enough supersampling resolution, this technique can look very good and doesn't require sorting on overlapping geometry. It completely murders fillrate though, is very memory intensive, and requires special shaders.

F) FSAA coverage sample stipple masks. Similar to E, but using the onboard multisampling circuitry. This technique would be optimal, but unfortunately most (read: 99.9%) of all consumer level GPUs use screenspace aligned sub-texel masks, thus making the technique impossible. Next generation GPUs could solve this by providing user-offset subtexel masks.

There are some more obscure techniques for very specific application scenarios.
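To make the mechanics of C concrete, here is a CPU simulation of depth peeling for a single pixel. No GL here; the "second depth test" that the GPU performs via a depth texture is the `f.depth > lastPeel` comparison, and the `Frag` struct and layer count are illustrative:

```cpp
#include <cfloat>
#include <vector>

struct Frag { float depth, r, g, b, a; };

// Peel up to maxLayers layers front-to-back, compositing "under".
// Each pass keeps only the nearest fragment strictly behind the
// previously peeled depth, which is what the GPU passes do per pixel.
void peelPixel(const std::vector<Frag>& frags, int maxLayers,
               const float bg[3], float out[3]) {
    float color[3] = {0, 0, 0};
    float transmit = 1.0f;        // how much of what's behind still shows through
    float lastPeel = -FLT_MAX;
    for (int layer = 0; layer < maxLayers; ++layer) {
        const Frag* nearest = nullptr;
        for (const Frag& f : frags)          // the "second depth test"
            if (f.depth > lastPeel && (!nearest || f.depth < nearest->depth))
                nearest = &f;
        if (!nearest) break;                 // no more layers at this pixel
        // under-blend: nearer layers were composited first
        color[0] += transmit * nearest->a * nearest->r;
        color[1] += transmit * nearest->a * nearest->g;
        color[2] += transmit * nearest->a * nearest->b;
        transmit *= (1.0f - nearest->a);
        lastPeel = nearest->depth;
    }
    for (int i = 0; i < 3; ++i)
        out[i] = color[i] + transmit * bg[i];
}
```

Note the strict `>`: fragments at exactly the same depth collapse into one layer, and the GPU version shares that caveat.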

sorry punmaster, I misunderstood what you wanted (I often find big blocks of text hard to read)

zedz: After a little more reading, I have found that the code you provided earlier would have worked perfectly to solve my problem if I were only dealing with purely transparent/opaque objects. Unfortunately, it will not help with objects that have full, continuous alpha channels. Still, I appreciate the input.

Yann L: Thank you for the very informative post. This is some of the best information I have ever seen presented in one place on this subject. After reading through the options, it seems that depth peeling is still the best way to get what I am looking for. The question now really comes down to getting it working in my application. Fortunately, the models I am working with at this point are not very high-cost in any other category, and, ironically, make heavy use of alpha channels partially to make up for the fact that they are relatively plain in terms of other rendering "frills". For that reason, I think I can live with the unoptimized multi-pass system, at least for the moment.

I am more than willing to write whatever code is necessary to make this happen, and to learn what I need to along the way. Unfortunately, this seems to be a fairly sparsely documented concept (it appears to be relatively old, but has never really taken off), and I really do not know where to begin. All of the information I have seen regarding its implementation in OpenGL is either too vague to follow, or does things I completely don't understand. If anyone can help me on my way to actually seeing this work, I would be very happy. Thank you, everyone, for your help so far.

