OpenGL TexImage with GL_RGB vs. GL_RGBA

I'm loading a texture image that does not require any alpha data. In 16-bit mode I notice a considerable difference in image quality when I only include pixel data for R, G, and B and leave out A, as opposed to adding the A to my pixel data and setting it to 255 (fully opaque).

I was wondering whether, when using glTexImage2D, specifying GL_RGB splits the bits 5,6,5 for R, G, and B, while specifying GL_RGBA splits them 4,4,4,4 for R, G, B, and A. At first I thought it would always just fill in the alpha if it's not given, but that does not appear to be the case. I am giving the data in 24- or 32-bit quantities per pixel. I also thought this may be card/driver specific; I don't know, I've never seen it specified. I'm using a GeForce.

The reason this came up is because I had been using my own crude TGA loading routine on the project I'd been working on, but decided to grab the LoadTGA function from the Q1 source. After adding it, I noticed the texture quality of my skybox really sucked, so I compared the two functions and realized the Q1 source used RGBA and set the alpha to 255, while I had been using just RGB. I thought it was odd that this wasn't accounted for in the Q1 source (even though TGAs aren't really used in Q1, some of the skybox functionality is still there).
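Roughly, these are the two calls I'm comparing (a simplified sketch; width, height, and the pixel pointers just stand in for whatever the TGA loader returns):

    /* What I had before: 24-bit pixels, no alpha component */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

    /* What the Q1-style loader produces: 32-bit pixels, alpha filled with 255 */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);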
You can give glTexImage2D a hint about what format to use. For example, if you want to hint to the driver that it should use a 16-bit texture, pass GL_RGB5 instead of GL_RGB for the 'internalFormat' parameter (the third one) of glTexImage2D. If you want to force 32-bit RGBA, pass GL_RGBA8.
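For instance (just a sketch; width, height, and the pixel pointers are placeholders for whatever your own loader gives you):

    /* Hint the driver to store the texture with a 16-bit RGB internal format */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

    /* Hint the driver to keep a full 32-bit RGBA internal format */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);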

Hope this helps,
DaBit.

Ah, yes, that actually helps a lot; that's exactly the info I was looking for. I had forgotten all about the fact that you can specify the resolution of the color components with the internalFormat parameter.

Thanks

