
PyroMeistar

16-bit textures in OGL


I was testing a 16-bit texture loading function, but as soon as I tried to use the created texture, the program crashed. I'm pretty sure I set up the texture correctly with glTexImage2D. Then someone told me OpenGL 1.1 is only compatible with 8, 24 & 32-bit textures, is that true? If it is, is there a way to use 16-bit textures without having to convert them to 8-bit ones? Thanks

quote:

Then someone told me OpenGL 1.1 is only compatible with 8, 24 & 32-bit textures, is that true?


No. OGL 1.1 works fine with 16-bit textures.
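
Roughly, the two ways 16 bit comes into play look like this (a minimal sketch, with width, height and pixels standing in for your own variables):

/* 16-bit storage on the card, plain 8-bit-per-channel RGBA data in memory - works on any 1.1 driver */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* raw 16-bit (5551) pixels in memory - needs GL 1.2 or the GL_EXT_packed_pixels extension */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels);

Keep in mind that in the first call OpenGL still reads four bytes per pixel from pixels, no matter which internal format you asked for.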

Trying to determine the cause of your crash without further information is a bit like stabbing in the dark... Can you post a code snippet of your texture setup function?

Oh, and what 3D card + drivers + OS are you using?

[edited by - Yann L on August 13, 2002 10:22:27 PM]

ATI RAGE PRO with latest drivers, Win98

After loading binary data...
_Swap16 converts BGRA to RGBA (And it works... )

// type[0] = requested internal (server-side) format, type[1] = format of the data in pData
if (pTexture->dwBPP == 16) { _Swap16(pTexture->pData, imagesize); type[0] = GL_RGB5_A1; type[1] = GL_RGBA; }
if (pTexture->dwBPP == 24) { _Swap24(pTexture->pData, imagesize); type[0] = GL_RGB8;    type[1] = GL_RGB;  }
if (pTexture->dwBPP == 32) { _Swap32(pTexture->pData, imagesize); type[0] = GL_RGBA8;   type[1] = GL_RGBA; }

glGenTextures(1, &pTexture->hTexID);
glBindTexture(GL_TEXTURE_2D, pTexture->hTexID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// type[1] and GL_UNSIGNED_BYTE describe how OpenGL reads pData from client memory
glTexImage2D(GL_TEXTURE_2D, 0, type[0], pTexture->dwWidth, pTexture->dwHeight, 0,
             type[1], GL_UNSIGNED_BYTE, pTexture->pData);

Do you want to use 16-bit data on the server side or the client side?
That is, do you want the GL to store 16-bit textures internally, or do you want to send 16-bit data through the pixels pointer in glTexImage?

The internal format is the format the texture has once it is uploaded to OpenGL. The external format is the format of the data you pass to OpenGL.

These two formats do not have to be the same. For example, if you set the desktop color depth to 16-bit colors, you generally get the best performance on some hardware with 16-bit textures. So if you pass a 32-bit texture to OpenGL, the driver may convert it to 16 bits before storing it internally. The same goes the other way around: if you pass a 16-bit texture to OpenGL, it may be converted into a 32-bit one if your desktop runs at a 32-bit color depth.

The third parameter you pass to glTexImage tells OpenGL what the internal format is. If you say GL_RGBA, you ask for four color components, and that's all you say. The driver is free to choose how many bits to allocate per channel, because you didn't say so explicitly. You can say GL_RGBA8 instead, which explicitly means 8 bits per channel. If you pass GL_RGBA8 while using a 16-bit color depth, you may see a performance decrease. Note, however, that the driver may still convert it into a 16-bit texture, even though you said GL_RGBA8.

You should always let the driver choose the internal pixel format, unless you know what you are doing.
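
To make the difference concrete, a small sketch (width, height and pixels are placeholders): the generic GL_RGBA leaves the bit depth to the driver, a sized format like GL_RGB5_A1 or GL_RGBA8 is an explicit request (though still only a hint), and glGetTexLevelParameteriv tells you what you actually got:

GLint redBits, alphaBits;

/* generic: four components, driver picks the bits per channel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* sized: explicitly ask for 5551 storage, still 8-bit RGBA data on the client side */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* query what the driver really allocated for the bound texture */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);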
