# OpenGL Alpha blending and the depth test

## Recommended Posts

I'm sure you've heard this a hundred times, and I apologize, but I still don't quite understand it. I am having the classic problem of not being able to see certain objects through other transparent/translucent objects. I know why this is happening: the transparent objects are being drawn too soon, and things behind them are failing the depth test. I've heard the proposed solution of drawing all the opaque objects first, disabling writes to the depth buffer, and then drawing the transparent objects in an optimal order. Still, this is not practical for a number of reasons.

First, the viewer is constantly moving, which means the optimal draw order can be vastly different from frame to frame. Second, many of my objects (and textures) have both alpha-channeled and opaque parts. And third, I have a lot of complex alpha-channeled objects in my scene, and it's quite possible that part of one object would need to be drawn before part of another object but after a different part of the first object. Basically, the "optimal drawing order" would need to be more granular than an object-by-object approach, or possibly even a face-by-face approach, to get the desired effect.

I know very little about OpenGL's built-in alpha handling features, but I was wondering if it would be possible to use the depth buffer to pre-calculate the distance from the viewer on a per-pixel basis, record the results, and then use that in place of the depth buffer for drawing in a second pass. I obviously don't know exactly what I need to do here, but my ultimate goal is a system for rendering my alpha-enabled models so that everything is visible through everything else, no matter where the viewer is located. Sorry again for not understanding this better, and thank you.
EDIT: A slight modification to my original plan: what if you made the video card render every pixel (with depth data still intact), without discarding any at all? Then, once the entire frame was rendered, it could go back and do a depth test on the rendered buffer before copying it to the screen. It would take more memory, but it would make render order irrelevant. [Edited by - punmaster on June 10, 2009 1:29:08 PM]

##### Share on other sites
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);

Alternatively, you can look into MSAA; see alpha-to-coverage.

##### Share on other sites
What you're looking for is called "depth peeling".

BTW, I have no idea what zedz is going on about here.

##### Share on other sites
zedz: I am assuming you were suggesting I put the following code in my initialization routine:

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);

I tried this and nothing changed. Could you please be more specific?

Sneftel: I did a quick Google search for "depth peeling" and found relevant results, but I'm afraid I don't understand how to actually implement it in OpenGL. The closest I came was this document from nVidia: http://developer.nvidia.com/object/Interactive_Order_Transparency.html, but I haven't had a chance to read the entire thing. Do you have any better suggestions on where I should begin? I have never used GLSL before; will that be a requirement? Thank you for your helpful advice.

##### Share on other sites
That paper describes what I'm talking about. GLSL is actually not a requirement -- the technique can be implemented totally with fixed functionality.

##### Share on other sites
After reading the nVidia article, I fully understand depth peeling in concept. Still, the implementation they described went pretty far over my head. I searched around a little, but didn't find anything too helpful. Can anyone here give me an overview, and possibly some resources, for implementing this? Also, is there any way to avoid needing a ton of rendering passes? (I saw something about "double depth peeling", which also makes sense in theory, but the implementation details for that were even worse.) Thank you.

##### Share on other sites
A few solutions, all with their respective drawbacks:

A) The common way: render all opaque objects, disable depth writes, and draw all transparent objects in back-to-front order. The transparent objects need to be sorted at some granularity, usually per object. This can still create artifacts, because self-intersecting or concave objects need finer-grained sorting. But it's relatively cheap, and it's the system used by 99% of all games for translucent geometry.
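As an illustration of option A, here is a minimal sorting sketch. The `TransparentObj` struct and the function names are hypothetical (not from any particular engine); only the GL calls in the trailing comment are real API.

```c
#include <stdlib.h>

/* Hypothetical minimal record for a transparent object. */
typedef struct {
    float x, y, z;   /* object center in world space */
    int   id;        /* whatever handle you need to draw it */
} TransparentObj;

/* Camera position, stashed in globals because qsort's
   comparator takes no user data. */
static float g_camX, g_camY, g_camZ;

static float sq_dist(const TransparentObj *o)
{
    float dx = o->x - g_camX, dy = o->y - g_camY, dz = o->z - g_camZ;
    return dx*dx + dy*dy + dz*dz;   /* no sqrt needed for ordering */
}

/* Sort farthest-first so blending composites back to front. */
static int cmp_back_to_front(const void *a, const void *b)
{
    float da = sq_dist((const TransparentObj *)a);
    float db = sq_dist((const TransparentObj *)b);
    return (da < db) - (da > db);   /* descending distance */
}

void sort_transparent(TransparentObj *objs, size_t n,
                      float camX, float camY, float camZ)
{
    g_camX = camX; g_camY = camY; g_camZ = camZ;
    qsort(objs, n, sizeof *objs, cmp_back_to_front);
}
```

Per frame you would then: draw the opaque geometry normally; call `glDepthMask(GL_FALSE)`, `glEnable(GL_BLEND)` and `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)`; draw the sorted objects in order; and restore `glDepthMask(GL_TRUE)`.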

B) Alpha tested geometry. Essentially binary transparency. Perfect choice for things like leaves or objects shaped by using binary opacity masks. Very cheap, and entirely unproblematic since sorting is not required for pixel perfect rendering. Modern hardware with coverage sample AA can very well anti-alias the hard edges of the alpha mask to produce nice fades, without requiring any kind of sorting. This will not work for semi-transparent or translucent objects though.
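A hedged state-setup sketch for option B in fixed-function GL (it assumes you already have a multisampled framebuffer; `GL_SAMPLE_ALPHA_TO_COVERAGE` comes from ARB_multisample / OpenGL 1.3):

```c
/* Binary cutout with MSAA edge softening. */
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);            /* cutout threshold */
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);    /* let MSAA fade the hard edges */

/* ... draw leaves, fences, and other masked geometry here ... */

glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
glDisable(GL_ALPHA_TEST);
```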

C) Depth peeling. Basically per-pixel sorting. Given enough layers, it will generate a pixel perfect solution for even complex self-intersecting transparent or translucent geometry. It's very performance heavy though, and (as you noticed) not trivial to implement and optimize. Possibilities of optimizing it are:

* Double depth peeling
* ZT-buffer (ShaderX5, ch 2.8). Currently relies on undefined behaviour, since it requires simultaneous read/write access to a render target. May be impossible on most mainstream hardware, but could become practical in the future.

D) Screendoor transparency. An easy technique that doesn't require sorting, but it can generate weird artifacts. It usually doesn't look very good unless there are few overlapping transparent objects and the screen resolution is very high.
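A minimal fixed-function sketch of option D using `glPolygonStipple`. The 50% checkerboard here stands in for alpha = 0.5; in practice you would pick a pattern whose fill ratio matches each object's opacity.

```c
/* Screendoor transparency via polygon stipple. */
GLubyte checker[128];                        /* 32x32 bits = 128 bytes */
for (int i = 0; i < 128; ++i)
    checker[i] = (i / 4) % 2 ? 0xAA : 0x55;  /* alternate rows -> checkerboard */

glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(checker);
/* Draw the "transparent" object fully opaque: the stipple discards
   half its pixels, so whatever is behind shows through. */
glDisable(GL_POLYGON_STIPPLE);
```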

E) Super-sampling with randomly rotated stipple patterns. Essentially sub-texel screendoor transparency with an additional resolve pass. With high enough supersampling resolution, this technique can look very good and doesn't require sorting on overlapping geometry. It completely murders fillrate though, is very memory intensive, and requires special shaders.

F) FSAA coverage sample stipple masks. Similar to E, but using the onboard multisampling circuitry. This technique would be optimal, but unfortunately most (read: 99.9%) of all consumer level GPUs use screenspace aligned sub-texel masks, thus making the technique impossible. Next generation GPUs could solve this by providing user-offset subtexel masks.

There are some more obscure techniques for very specific application scenarios.

##### Share on other sites
Sorry punmaster, I misunderstood what you wanted (I often find large blocks of text hard to read).

##### Share on other sites
zedz: After a little more reading, I have found that the code you provided earlier would have solved my problem perfectly if I were only dealing with binary transparency. Unfortunately, it will not help with objects whose alpha channels contain intermediate values. Still, I appreciate the input.

Yann L: Thank you for the very informative post. This is some of the best information I have ever seen presented in one place on this subject. After reading through the options, it seems that depth peeling is still the best way to get what I am looking for. The question now really comes down to getting it working in my application. Fortunately, the models I am working with at this point are not very high-cost in any other category, and, ironically, make heavy use of alpha channels partially to make up for the fact that they are relatively plain in terms of other rendering "frills". For that reason, I think I can live with the unoptimized multi-pass system, at least for the moment.

I am more than willing to write whatever code is necessary to make this happen, and to learn what I need along the way. Unfortunately, this seems to be a fairly sparsely documented concept (one that appears to be relatively old, but has never really taken off), and I really do not know where to begin. All of the information I have seen regarding its implementation in OpenGL is either too vague to follow or does things I completely don't understand. If anyone can help me on my way to actually seeing this work, I would be very happy. Thank you, everyone, for your help so far.
