OpenGL premultiplied alpha

ehmdjii:

Hello, I hear a lot that a technique called "premultiplied alpha" can be used to get rid of the dark halos that appear when using alpha textures with the standard blend mode glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Unfortunately I don't quite understand it, and I haven't found an OpenGL implementation of it either, so I would be glad if you could give me any pointers on premultiplied alpha. Thanks a lot!

Blending with GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA means that the resulting color is
C' := CF * A + CB * ( 1 - A )
where C denotes a color vector, index B the background, index F the incoming fragment, and A the normalized alpha (i.e. A in [0,1]). This method works fine as long as all values are within their allowed ranges.

Now, you can instead pre-multiply the fragment color with A
CF' := CF * A
and then use GL_ONE, GL_ONE_MINUS_SRC_ALPHA to get
C' := CF' + CB * ( 1 - A )
which is obviously the same as above.
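For illustration, here is a minimal sketch of that pre-multiplication step for interleaved 8-bit RGBA data (the function name and types are my own, not from any particular engine); after this, the texture is blended with GL_ONE, GL_ONE_MINUS_SRC_ALPHA:

#include <cstdint>
#include <cstddef>

// Multiplies the R, G, and B channels of interleaved 8-bit RGBA data
// by the A channel. Do this once at load time, before glTexImage2D.
void premultiplyAlpha(uint8_t* rgba, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = rgba + 4 * i;
        const uint32_t a = p[3];
        p[0] = static_cast<uint8_t>((p[0] * a + 127) / 255);  // round to nearest
        p[1] = static_cast<uint8_t>((p[1] * a + 127) / 255);
        p[2] = static_cast<uint8_t>((p[2] * a + 127) / 255);
    }
}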

IMO, if the texture used for the fragments is properly prepared, there is no problem with alpha blending at all. We are currently working on a sprite system for our game engine, and we have had the problem of white peaks on the border of sprite shapes in the case of extra alpha channels (not in the case of the standard alpha channel; we're speaking of Photoshop here, which allows several alpha channels). Looking at the process, we saw that the peaks arose from how the artists had generated the alpha in Photoshop: Photoshop had already blended the image to achieve antialiasing, and the border color (black) was different from the background color (white) when the additional alpha channel was generated, so too much of the background was taken into account (notice that we don't want _any_ background when exporting the image). Dark peaks occur in the inverse case. The problem disappeared as soon as the background color and the border color were equal when the alpha was generated.

Notice that using only Photoshop's standard alpha channel doesn't show this problem. (This is also sometimes called using pre-multiplied alpha, as our artists have stated.) This is because it works exactly like the formula given above.
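As a concrete illustration of where such a halo comes from (the numbers here are my own example): suppose bilinear filtering samples halfway between an opaque white texel (R,G,B,A = 1,1,1,1) and a fully transparent texel whose stored color happens to be black (0,0,0,0). The filtered sample is (0.5, 0.5, 0.5, 0.5). Composited onto a white background with straight alpha, this gives 0.5 * 0.5 + 1 * 0.5 = 0.75, a visibly dark fringe; with pre-multiplied data the same sample gives 0.5 + 1 * 0.5 = 1.0, i.e. no fringe.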

So, what process do you use to generate the textures?

Thank you very much for your detailed explanation.

How can I implement the formulas you gave using OpenGL?

For the textures I also use Photoshop and save them as PNGs, which don't seem to have a "real" alpha channel; at least it doesn't show up in Photoshop.

Thanks!

The formulas are achieved simply by using the blending factors I've written down. If you have a normal (i.e. not pre-multiplied) image, then use
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and if you have a pre-multiplied image, then use (as described above)
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

AFAIK PNG can have a normal alpha channel (the format stores straight, i.e. non-pre-multiplied, alpha). Unfortunately, at the moment our engine can import Photoshop files directly but not PNG, so I can't just do a quick test with PNG.

In our engine we currently do the following: we allocate a uint8_t array, load the red, green, blue, and alpha channels from Photoshop as they are (okay, they are re-arranged, since Photoshop stores its images banked rather than interleaved), push that data into a
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
and use the aforementioned
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
with it. So far we have found no problems with it (but perhaps we haven't looked in the right places yet ;)


EDIT: In case it's of interest, here are the relevant code snippets. This routine prepares the texture:

void
GLRenderer::activateTexture(const ImageTexture& texture, uint32_t unit) {
    ::glActiveTexture(GL_TEXTURE0 + unit);
    // generate an OpenGL texture object if necessary ...
    const uint32_t textureID = texture.textureID();
    if (_arrayOfTexObjects[textureID] == 0) {
        const GLuint width = texture.image()->width();
        const GLuint height = texture.image()->height();
        const void* data = texture.image()->backingPixelMap().buffer();
        ::glGenTextures(1, &_arrayOfTexObjects[textureID]);
        ::glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        ::glBindTexture(GL_TEXTURE_2D, _arrayOfTexObjects[textureID]);
        ::glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        ::glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // determining the pixel format from the presence of an alpha channel
        // TODO rudimentary implementation yet
        GLenum mode = (*texture.image())[Image::Channel::ALPHA] ? GL_RGBA : GL_RGB;
        // generating the texture from the image data
        ::glTexImage2D(GL_TEXTURE_2D, 0, mode, width, height, 0, mode, GL_UNSIGNED_BYTE, data);
    } else {
        // ... else just bind the existing texture object
        ::glBindTexture(GL_TEXTURE_2D, _arrayOfTexObjects[textureID]);
    }
    // enabling texturing
    ::glEnable(GL_TEXTURE_2D);
}



And this routine renders the sprite onto a quad:

void
GLRenderer::renderRectangle(const Region2s& region, const ImageTexture& texture, const Region2s& imageRegion) {
    activateTexture(texture, 0);  // texture unit 0
    ::glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    // computing texture coordinates from the sub-region of the image
    float u0 = float(imageRegion.minimum0()) / texture.image()->width();
    float u1 = float(imageRegion.maximum0()) / texture.image()->width();
    float v0 = float(imageRegion.minimum1()) / texture.image()->height();
    float v1 = float(imageRegion.maximum1()) / texture.image()->height();
    if (texture.image()->rightToLeft()) {
        std::swap(u0, u1);  // requires <utility>
    }
    if (!texture.image()->bottomToTop()) {
        std::swap(v0, v1);
    }
    const bool withBlending = (*texture.image())[Image::Channel::ALPHA];
    if (withBlending) {
        ::glEnable(GL_BLEND);
        ::glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
    ::glBegin(GL_QUADS);
    ::glTexCoord2f(u0, v0);
    ::glVertex2sv(region.minimum());
    ::glTexCoord2f(u0, v1);
    ::glVertex2s(region.minimum0(), region.maximum1());
    ::glTexCoord2f(u1, v1);
    ::glVertex2sv(region.maximum());
    ::glTexCoord2f(u1, v0);
    ::glVertex2s(region.maximum0(), region.minimum1());
    ::glEnd();
    ::glDisable(GL_TEXTURE_2D);
    if (withBlending) {
        ::glDisable(GL_BLEND);
    }
}
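If this pipeline were switched over to pre-multiplied textures, the pre-multiplication step sketched earlier would go just before the glTexImage2D call in activateTexture, and the glBlendFunc call here would become glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).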

Thanks for your help!

I thought that when doing premultiplied alpha, you have to multiply the R, G, and B channels by the alpha value to get correct results when blending with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).

Is that correct?
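For reference, a minimal sketch of the blend state that matches this approach (assuming the texture's RGB channels have already been multiplied by A at load time, e.g. with a routine like the premultiplyAlpha sketch above):

::glEnable(GL_BLEND);
// The source color already carries its alpha, so the source factor is GL_ONE.
::glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);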
