Texture Pixel Color "Bleeding" (Solved)

Started by parklife; 9 comments, last by Kalidor 18 years, 4 months ago
I'm using SDL+OpenGL (I'm pretty new to everything OpenGL) to load a 32-bit PNG image, which I then display on the screen as a quad at 100% of the image's original size. I'm having trouble getting rid of the color bleeding that occurs. I use glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); when loading the texture. Do any of you know where the error occurs, or what this problem is called? image:
on screen: notice how the spots of green are being "washed" away, and the color mixing that goes on at the bottom of the top-left-most square. [Edited by - parklife on December 18, 2005 4:01:48 AM]
Is that green square a separate texture? If so, are you building mipmaps? If you are, you may not want to for that texture. Alternatively, something like:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);


might help.
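For reference, a minimal sketch of uploading such a texture without building mipmaps (textureId and the surface fields are placeholder names, and the pixel data is assumed to already be in RGBA byte order):

GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);

// Upload only the base level (no gluBuild2DMipmaps)...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, surface->w, surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);

// ...and make sure the minification filter doesn't expect mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);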

The whole image is one texture, so clamping does not help. I use GL_CLAMP_TO_EDGE for GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T. I'm not building mipmaps (or is that automatic by default?).

Oh, and I might point out that the image and screenshot I posted are scaled to 300% in this post.
Weird. Is there a 1:1 correspondence between the texture and the size of the quad it is rendered on? Is your texture matrix stack clean?
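For comparison, a pixel-exact 1:1 setup typically looks something like this sketch (the 640x480 window size, texW/texH, and textureId are placeholder values):

// One unit per pixel, origin at the top-left of the window.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Draw the quad at exactly the texture's size so no resampling occurs.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)texW, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)texW, (float)texH);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (float)texH);
glEnd();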
Have you turned down the quality settings in your driver settings?
I've had similar problems when the settings in my NVIDIA driver were set more toward speed than quality.
Try glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); as well?
Quote: Original post by philipptr
Have you turned down the quality settings in your driver settings?
I've had similar problems when the settings in my NVIDIA driver were set more toward speed than quality.


Tynin: GL_TEXTURE_MIN_FILTER is already set to GL_NEAREST

philipptr: Indeed, that fixed the problem. Thank you!

Thank you all for your suggestions!
Quote:Original post by parklife
philipptr: Indeed, that fixed the problem. Thank you!
That probably means that in your glTexImage2D call you pass GL_RGB or GL_RGBA for the internalformat parameter. All that says is that you want a texture with red, green, and blue components; it says nothing about the bit depth of those components. If you want a texture with a specific bit depth, use one of the sized internal format enums (i.e., if you want an RGBA texture with 8 bits per channel, use GL_RGBA8).
That doesn't look like filtering at all.

Have you turned on texture compression? It looks a lot like the artifacts you'll get from S3TC compressed images. You always want to use uncompressed texture formats for pixel-specific art (i.e., a GL_RGB8 or GL_RGBA8 internal format when calling glTexImage2D()).
enum Bool { True, False, FileNotFound };
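For what it's worth, one way to see what the driver actually stored is to query the texture back after uploading it (a sketch; textureId is a placeholder name and this assumes at least OpenGL 1.3 for the compression query):

GLint compressed = 0, internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, textureId);

// Reports whether the driver stored this mipmap level in a compressed format.
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);

// Reports the internal format the driver actually chose (e.g. a 16-bit one).
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);

printf("compressed: %d, internal format: 0x%x\n", compressed, (unsigned int)internalFormat);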
I did use GL_RGBA as the internalformat; however, changing it to GL_RGBA8 didn't change the output. I now have:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, flipped->w, flipped->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, flipped->pixels);

where flipped is an SDL_Surface loaded from a 32-bit PNG. GL_RGB8 also did not make a difference; it only disabled the alpha channel.

This topic is closed to new replies.
