GL_ALPHA_TEST replacement in OpenGL ES 1.1 (no shaders)

1 comment, last by kloffy 13 years, 1 month ago
So it seems like the performance of GL_ALPHA_TEST on iOS is very poor. To quote Apple:

[quote]Graphics hardware often performs depth testing early in the graphics pipeline, before calculating the fragment’s color value. If your application uses an alpha test in OpenGL ES 1.1 or the discard instruction in an OpenGL ES 2.0 fragment shader, some hardware depth-buffer optimizations must be disabled. In particular, this may require a fragment’s color to be completely calculated only to be discarded because the fragment is not visible.

An alternative to using alpha test or discard to kill pixels is to use alpha blending with alpha forced to zero. This effectively eliminates any contribution to the framebuffer color while retaining the Z-buffer optimizations. This does change the value stored in the depth buffer and so may require back-to-front sorting of the transparent primitives.

If you need to use alpha testing or a discard instruction, draw these objects separately in the scene after processing any primitives that do not require it. Place the discard instruction early in the fragment shader to avoid performing calculations whose results are unused.[/quote]
I am wondering what exactly they mean by "use alpha blending with alpha forced to zero". How can you accomplish this? Alternatively, is there any other way to omit/hide pixels based on their alpha value?
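In case it helps frame the question: my reading of "alpha forced to zero" is that you leave GL_BLEND enabled with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and make sure the fragments you want killed carry an alpha of 0. A plain-C sketch of the blend arithmetic (illustrative only, not actual GL code; the function name is mine):

```c
/* Sketch of the fixed-function blend equation set up by
 * glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
 *     dst' = src * srcAlpha + dst * (1 - srcAlpha)
 * With srcAlpha forced to 0 the framebuffer colour is left untouched,
 * yet the fragment still passes through the depth test and writes depth,
 * which seems to be why this is suggested as an alpha-test replacement. */
float blend_src_alpha(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

If that reading is right, the catch is the one Apple mentions: the hidden fragment still writes depth, so transparent geometry may need back-to-front sorting.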
As far as I understand it, the MBX and SGX GPUs use a depth-test system that relies heavily on the alpha value of incoming fragments (when blending is enabled). This means that early rejection of fragments via depth testing is most effective when dealing with either 100% opaque or 100% transparent polygons. Taken from the PowerVR documentation:

[quote]The main difference resides in the fact that HSR (depth test) is performed up-front. All HSR results are kept in chip (tile) memory. Once visible pixels have been determined, texturing can take place, then alpha testing. The feedback loop to the depth test is there so that pixels succeeding the alpha test are flagged as transparent by the depth test module. Then blending with the current contents of the tile buffer is performed. Finally the pixel colour at the current screen location will be written (once) to the frame buffer. Localising data by tiling the screen enables the use of ultra-fast on-chip memory and thus to break the traditional memory bandwidth barrier.[/quote]

GL_ALPHA_TEST throws a spanner in the works, because it forces these optimisations off. I encountered this problem a few weeks ago. The performance difference between GL_ALPHA_TEST on and off is massive.

In my application, I needed to test which objects in the scene were opaque, then use the test result to paint an outline on top of the occluded regions. However, this was very slow. A significantly faster solution was to pre-render a mask texture representing the opacity of the entire scene, and then use it via texture combiners to mask the aforementioned outline texture. For my application, creating a mask texture like this was not a problem, as my game uses a top-down ortho view and the objects in question were predominantly static.
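To illustrate what the combiner stage is doing here, the masking step boils down to a modulate (a GL_MODULATE-style combine). This is just the per-pixel arithmetic in plain C, not actual GL code, and the names are mine:

```c
/* Sketch of the masking arithmetic performed by the texture combiner.
 * maskAlpha comes from the pre-rendered opacity mask of the scene:
 * where the mask is 0 the outline is suppressed entirely, and where
 * it is 1 the outline shows at full strength. (Whether you use the
 * mask or its inverse depends on how the mask is rendered.) */
float mask_outline(float outline, float maskAlpha)
{
    return outline * maskAlpha;
}
```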
Latest project: Sideways Racing on the iPad

Thank you for your post; it's interesting to hear some rationale for why alpha testing is so slow! The performance penalty for using it was huge in my application as well. I was also thinking about using texture combiners to work around this issue. Basically, I was planning to have only fully opaque or fully transparent pixels in the original texture. Additionally, I would have an "alpha mask" with the real alpha values. The original texture would be the "alpha-tested" version, so to speak. When real blending is needed, I would add the alpha values from the "alpha mask" using something like:

glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_ADD);
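To sanity-check what that combiner would do, here's the GL_ADD alpha-combine arithmetic as a plain-C sketch (just the math, not GL code; the function name is mine):

```c
/* Sketch of GL_ADD in the alpha combiner: the fragment's alpha becomes
 * the "alpha-tested" texture's alpha (0 or 1 in my scheme) plus the
 * alpha-mask value, clamped to [0, 1] by the fixed-function pipeline. */
float combine_add_alpha(float texAlpha, float maskAlpha)
{
    float a = texAlpha + maskAlpha;
    return a > 1.0f ? 1.0f : a;
}
```

So transparent texels would pick up the mask's alpha for real blending, while opaque texels stay clamped at 1.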

