
16-bit truncation... at least that's what it seems



For some reason, when I load 32-bit TGAs and make OpenGL textures out of them, they look as if they have been truncated to 16-bit images. The application runs in a 32-bit display mode and the mag/min filters are set to GL_LINEAR, yet I still get a badly stepped gradient with slightly fuzzy edges.

Here is the code that defines the OpenGL texture:

```c
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &texName[0]);
glBindTexture(GL_TEXTURE_2D, texName[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, initial);
```

Note: initial is a 256x256x4 array of unsigned bytes. Why does it look so bad?
