
# OpenGL Alpha blending with a color that has no RGB components

## 14 posts in this topic

I'm curious what the RGB components are of the alpha part of a PNG texture when it comes to alpha blending.

Given a PNG texture that is completely transparent with no color parts acting as the destination, and a red color {1.0,0.0,0.0,1.0} acting as the source in the glBlendFunc(GL_SRC_ALPHA, GL_ONE) equation:

Final color = (source color*source factor)+(destination color*destination factor)

= ({1.0,0.0,0.0,1.0}*1.0) + ({?,?,?,0.0}*{1.0,1.0,1.0,1.0})

I have no idea what OpenGL uses as the RGB component for a fully transparent texture in order to complete the equation.


##### Share on other sites

If you have an RGBA image, then every pixel has a defined RGB value. You cannot have an image where some pixels lack certain channels entirely. At the very least, your PNG loader will fill them with some default value if the PNG doesn't carry any color information, and that's what OpenGL will see.


##### Share on other sites

Yeah, I'm just wondering if anyone knows what those default values are for image loaders like SOIL and DevIL.


##### Share on other sites
[edit #2] Apparently I should be ranting at PNG exporters, rather than the PNG format itself! Thanks for the correction [/edit]

PNG is stupid in that its authors assumed the RGB values of transparent pixels aren't required.
If an artist tries to author pixels that have a particular colour value but a zero alpha value, PNG jumps in and turns those colours into garbage.
AFAIK, the RGB values of invisible pixels in a PNG file are *undefined*; they're just garbage.

If you need this feature, which games very often do, then PNG isn't a suitable file format...
I personally deal with this problem by using a pair of non-transparent files: one contains the RGB values, and the other contains the alpha values in its R channel... :-(

If you don't need artist-authored values and are OK with some constant colour, then you can do this yourself after loading the PNG and before passing the pixel data to GL. Just loop over each pixel, check if alpha is zero, and if so, set RGB to your constant.

##### Share on other sites

I'm pretty sure PNG "supports" having a proper RGB value. Obviously - there is still data there, regardless of whether it's "garbage" or not.

What determines whether it IS "garbage" is the editing application you use.

For example, I just tested with GIMP and it preserves the RGB just fine when you turn something into alpha.


##### Share on other sites

You can make sure all pixels in a texture have a valid RGB value by setting the RGB value of zero-alpha pixels to the RGB value of the nearest pixel that has a non-zero alpha value.

One simple way is to do it during mipmap creation. At each pixel position, sample down into the lower-detail mip levels until you find a sample with non-zero alpha. Use that sample's RGB value while leaving the alpha at zero.


##### Share on other sites

> PNG is stupid in that its authors assumed the RGB values of transparent pixels aren't required. [...] AFAIK, the RGB values of invisible pixels in a PNG file are *undefined*, they're just garbage. [...] I personally deal with this problem by using a pair of non-transparent files: one contains the RGB values, and the other contains the alpha values in its R channel...

For some time, I also blamed the .png file format for this problem, but apparently it's not a deficiency of .png, but a deficiency of Photoshop's png exporter.


##### Share on other sites

> AFAIK, the RGB values of invisible pixels in a PNG file are *undefined*, they're just garbage. [...]
>
> For some time, I also blamed the .png file format for this problem, but apparently it's not a deficiency of .png, but a deficiency of Photoshop's png exporter.

yeah, there is nothing in PNG itself to actually cause such an issue.

basically, it just stores raw RGBA data with some predictive filtering and Deflate compression, which works the same regardless of whether or not the pixel is transparent.

it is then up to the exporting application to determine what to save in these pixels.

some programs will just save whatever would have been present in the pixel if the pixel were not transparent.

some others will simply just clamp transparent areas to black, or maybe flood-fill with some other color.

"garbage" could be one of several varieties depending on the exporter:

It could just stupidly leak uninitialized memory values, which would be bad (and poor for compression);

or, it could be the encoder being clever and forcing all prediction deltas to 0 in an attempt to get better compression (this being most likely to cause smeared banding or rainbow-like patterns in transparent areas).

I don't personally have any experience with Photoshop (mostly just with GIMP or Paint.NET), and don't really know what it does in this case.


##### Share on other sites

RGBA values keep the RGB part of the value unchanged when you change the A channel. This is actually useful in an image editor, where you can adjust the transparency of certain pixels independently of adjusting their color. It's also useful in games, where you might use all four RGBA channels to carry data other than image data.

For the PNG file format (or for OpenGL) to erase the RGB channels if the A channel is 0 would be terrible, at least to me.

If you need your RGB to be (0,0,0) if your A channel is (0), then run your images through a tool to pre-process them for your game.


##### Share on other sites

Thanks for your very knowledgeable opinions, guys.

I believe that my OpenGL setup (SOIL + GIMP as PNG exporter) produces {1,1,1} RGB default values for color values that are unspecified.

For anyone interested in how I arrived at {1,1,1}: you can do your own test, if you're using a different texture loader and/or exporter than mine, by doing the following:

Use the following blending equation,

glBlendFunc(GL_DST_ALPHA, GL_SRC_ALPHA);

with an opaque source color (a color with alpha = 1, e.g. {0,0,1,1}) and any transparent texture as the destination.

This simply eliminates the srccolor and leaves the destcolor unaltered:

Final color = (srccolor*srcfactor)+(destcolor*destfactor)

= (srccolor*Da)+(destcolor*Sa)

= ({Sr, Sg, Sb, 1}*0)+(destcolor*1)

=(0)+(destcolor)

=destcolor

My OpenGL render left me with a white color.


##### Share on other sites

It makes perfect sense for a PNG encoder to set the RGB value to zero when the alpha value is zero, because it allows the image to compress better without changing the result. How were they supposed to know you would be using an RGBA image for something other than RGBA?


##### Share on other sites

Rather than thinking: "How were they supposed to know you would be using a RGBA image for something other than RGBA"

I think: "How dare they assume that my alpha value represents transparency!" - There are a million things you might want to use an alpha channel for.

Sure, have a tick box in the .png encoder settings to zero out the RGB for transparent texels, but don't turn it on by default, and don't do destructive actions without consent.


##### Share on other sites
> Given a PNG texture that is completely transparent with no color parts acting as the destination [...] I have no idea what OpenGL uses as the RGB component for a fully transparent texture in order to complete the equation.

As already mentioned by others, if a pixel has an alpha component it necessarily has RGB components. But the error runs deeper: PNG compositing only makes sense against a fully opaque background, never a fully transparent one. The bKGD chunk provides a reference colour that a transparent image is composited over when it isn't composited over some other opaque image.
This is completely different from pretending the transparent pixels are opaque: the red paint in the original question would be composited over the bKGD colour (unaffected by a whole image of fully transparent pixels), not over garbage RGB data. That garbage is harmless by design, because it can only ever be non-drawn with zero opacity, never "resurrected" by pretending it isn't transparent.

The PNG specification is unambiguous:

10.7 Background color

The background color given by bKGD will typically be used to fill unused screen space around the image, as
well as any transparent pixels within the image. (Thus, bKGD is valid and useful even when the image does
not use transparency.) If no bKGD chunk is present, the viewer will need to make its own decision about a
suitable background color.

Viewers that have a specific background against which to present the image (such as Web browsers) should
ignore the bKGD chunk, in effect overriding bKGD with their preferred background color or background
image.

The background color given by bKGD is not to be considered transparent, even if it happens to match the
color given by tRNS (or, in the case of an indexed-color image, refers to a palette index that is marked as
transparent by tRNS). Otherwise one would have to imagine something “behind the background” to composite
against. The background color is either used as background or ignored; it is not an intermediate layer
between the PNG image and some other background.

Indeed, it will be common that bKGD and tRNS specify the same color, since then a decoder that does not implement
transparency processing will give the intended display, at least when no partially-transparent pixels
are present.

In practical terms, what's wrong is putting meaningful RGB data but alpha = 0 in every pixel of the original image: the image should either have fully opaque alpha (255 or 65535 depending on bit depth) or use an RGB format (Color Type 2 rather than 6).


##### Share on other sites

Why use Photoshop? It costs money, is bloated, and is it really required these days?

GIMP 2 has a lot of features too, even if it's shit by my standards. At least its PNG exporter works.

