16-bit truncation... at least that's what it seems


No replies to this topic

#1 menasius   Members   -  Reputation: 122

Posted 22 February 2000 - 03:59 PM

For some reason, when I load 32-bit TGAs and then make OpenGL textures out of them, it looks like they are truncated to 16-bit images. The application runs in 32-bit color, and since I have the mag/min filters set to linear, this produces a really stepped gradient with slightly fuzzy edges. Here is the code that defines the OpenGL texture:

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &texName[0]);
    glBindTexture(GL_TEXTURE_2D, texName[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, initial);

Note: initial is a 256x256x4 array of unsigned bytes.

Why does it look so bad?
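One detail worth noting in this code: the third argument to glTexImage2D is the unsized internal format GL_RGBA, which leaves the stored bit depth up to the driver, and drivers of this era could quietly fall back to 16 bits per texel. Below is a minimal sketch of the same upload using the sized format GL_RGBA8, which explicitly requests 8 bits per channel; whether this cures the banding seen here is an assumption, since the thread got no replies. texName and initial are the variables from the post.

    /* Same setup as the post, but with a sized internal format so the
     * driver cannot silently store the texture at 16-bit precision.
     * texName and initial are the variables from the post above. */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &texName[0]);
    glBindTexture(GL_TEXTURE_2D, texName[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,   /* sized: 8 bits per channel */
                 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, initial);

The other usual suspect for this symptom is the window's pixel format: if the rendering context (or the desktop) is actually running at 16-bit color, textures will band the same way regardless of the internal format requested.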
