OpenGL and Sprites...

Started by Yezu, 5 comments, last by MENTAL 17 years, 6 months ago
I have a problem with transparency when using sprites in OpenGL. I have an image with a pink (or red) background, and I want to draw it over another image with that background colour treated as transparent. I went through NeHe's tutorials and this forum, and the only working approach I found uses two textures: one as a mask and one as the actual image. However, the game uses lots of sprites, and I want it to run on computers as old as possible (and hopefully avoid making masks for something like 4k of images :P). So I'm looking for a solution that uses only one texture and relies on the fact that the image's background colour is very different from the object I want to display. Can anyone help me, or at least point me to a place with a solution?
_______________________________All your base are belong to us.
In the routine that draws your sprite, switch the background colour to the pink of the sprite and set your blend function to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) ....i think. Afterwards switch the background colour back to black. That should get rid of the pink in your sprites using only one texture.
www.lefthandinteractive.net
The technique you are trying to perform is called colour keying, which isn't supported natively by OpenGL. Instead, for each sprite you load in, you need to convert it to an RGBA image. Set the A (alpha, i.e. transparency) component to 255 for every pixel except those that match the colour you are using to define the transparency - for those pixels, set the alpha to 0. After that, upload your texture (remember to tell OpenGL it's RGBA and has 4 colour channels).

Now, when initializing OpenGL, add the following function calls:

glEnable (GL_ALPHA_TEST);
glAlphaFunc (GL_GREATER, 0.0);

Now whenever you draw your sprites, any pixel with 0 alpha will be automatically discarded. You can also simulate the same thing through blending, but this is a far simpler approach (and is likely to be quicker than blending on older cards, where this technique was used a lot).
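
If it helps to see it end-to-end, here is a rough sketch of drawing a sprite with the alpha test enabled. The texture id tex and the sprite position/size x, y, w, h are just placeholders for whatever your own code uses:

glEnable (GL_TEXTURE_2D);
glEnable (GL_ALPHA_TEST);
glAlphaFunc (GL_GREATER, 0.0f);          // discard the keyed (alpha == 0) pixels
glBindTexture (GL_TEXTURE_2D, tex);      // 'tex' = the RGBA texture you uploaded

glBegin (GL_QUADS);
    glTexCoord2f (0.0f, 0.0f); glVertex2f (x,     y);
    glTexCoord2f (1.0f, 0.0f); glVertex2f (x + w, y);
    glTexCoord2f (1.0f, 1.0f); glVertex2f (x + w, y + h);
    glTexCoord2f (0.0f, 1.0f); glVertex2f (x,     y + h);
glEnd ();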

Hope it helps!
It's been a while, but what I used to do was load the RGB image into memory and create a GL_RGBA sprite texture for OpenGL. You simply copy all the pixels, set the alpha component to 255 for the pixels you want fully opaque, and set the alpha to zero for the fully transparent ones.

You would use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) when drawing the sprite.
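
For comparison, a rough sketch of the blending setup (again assuming an RGBA texture bound as tex, as in the earlier sketch; note the function is glBlendFunc, not glBlend):

glEnable (GL_TEXTURE_2D);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // weight the sprite by its alpha channel
glBindTexture (GL_TEXTURE_2D, tex);
// ... then draw the sprite quad as usual ...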
Quote: Original post by Anonymous Poster
You would use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) when drawing the sprite.


What you had was pretty much what MENTAL said; however, his version is likely to be quicker, since the alpha test is performed on the incoming fragment and is a simple compare-and-discard, whereas any blending requires a read-modify-write cycle, which is murder on fillrate (less so now, but certainly on older cards) and far more work in general.

In other words, the OP should do what MENTAL said [smile]
Just one thing: can I set this alpha component programmatically, or should I do it manually in an image editor?
Aside from that, I was trying to load a 32-bit BMP file as RGBA, but I got a bizarre image that looks like a grid of some sort.

I'm using this to load the image:
glTexImage2D(GL_TEXTURE_2D, 0, 4, Texture[0]->sizeX, Texture[0]->sizeY, 0, GL_RGBA, GL_UNSIGNED_BYTE, Texture[0]->data);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);

[Edited by - Yezu on October 14, 2006 1:37:51 PM]
_______________________________All your base are belong to us.
Here is an example:

// Change this to whatever your background colour is.
GLubyte colour_key[3] = { 255, 255, 0 };

GLuint LoadTexture (const String &filename)
{
    Image orig_image;
    if (!orig_image.Load (filename))
        return 0;

    GLubyte *keyed_pixels = new GLubyte [orig_image.width * orig_image.height * 4];

    for (int row_index = 0; row_index < orig_image.height; ++row_index)
    {
        for (int column_index = 0; column_index < orig_image.width; ++column_index)
        {
            int keyed_index = (row_index * orig_image.width * 4) + (column_index * 4);
            int orig_index  = (row_index * orig_image.width * 3) + (column_index * 3);

            GLubyte orig_red   = orig_image.pixels[orig_index + 0];
            GLubyte orig_green = orig_image.pixels[orig_index + 1];
            GLubyte orig_blue  = orig_image.pixels[orig_index + 2];

            keyed_pixels[keyed_index + 0] = orig_red;
            keyed_pixels[keyed_index + 1] = orig_green;
            keyed_pixels[keyed_index + 2] = orig_blue;

            if (orig_red == colour_key[0] && orig_green == colour_key[1] && orig_blue == colour_key[2])
            {
                keyed_pixels[keyed_index + 3] = 0;    // keyed colour -> fully transparent
            }
            else
            {
                keyed_pixels[keyed_index + 3] = 255;  // everything else -> fully opaque
            }
        }
    }

    GLuint tex_index;
    glGenTextures (1, &tex_index);
    glBindTexture (GL_TEXTURE_2D, tex_index);
    glTexImage2D (GL_TEXTURE_2D, 0, 4, orig_image.width, orig_image.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, keyed_pixels);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    delete [] keyed_pixels;
    orig_image.Unload ();

    return tex_index;
}


That *should* work (I might have got the indices wrong, but it shows what you should be aiming to achieve). Obviously the Image class is something you would have to provide (or modify the code so it works with your own image loader).

What it does is load the specified image, then create a new set of pixels with four colour components (RGBA) instead of the bitmap's three (RGB). It then copies the pixels from the original image to the keyed image and sets the fourth component (alpha) to either 0 (fully transparent) or 255 (fully opaque), depending on whether the current pixel colour matches the colour key.

Hope it helps!

