cryon

16bpp on textures when expecting 32bpp



Hi! First of all, I wasn't sure whether this thread belonged in Alternative Game Libraries (since I load textures with GLFW) or here; hope it's right :> So, I'm loading a texture with some smooth gradients, but when I put it on a quad it doesn't look very smooth (see picture). My guess is that it somehow gets loaded in a 16bpp format. The strange part is that it looks perfectly fine on a friend's computer. As you can see in the image, the colored triangle right next to the quad looks as smooth as it should, but the texture doesn't. Here's the code I use to load a texture (note that no filtering is used):
bool LoadTexture( const char * path )
{
    glGenTextures( 1, &global_textures[ texture_count ] );
    glBindTexture( GL_TEXTURE_2D, global_textures[ texture_count ] );
    texture_count++;

    if( !glfwLoadTexture2D( path, 0 ) ) { return false; }
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST ); 
		
    return true;
}


(global_textures and texture_count are defined elsewhere; they're only global while I work through NeHe's tutorials :) ) The app is initialized like this:
    glfwInit();

    if( !glfwOpenWindow( w, h, 0,0,0,0, 0,0, mode ) )
    {
        glfwTerminate();
        return false;
    }
	
    glfwSetWindowSizeCallback( ResizeCallback );
    glfwEnable( GLFW_STICKY_KEYS );
    glfwSwapInterval( 0 ); //No vsync yet
    glClearColor( 0.0f, 0.0f, 0.0f, 0.0f ); // 0.3f, 0.3f, 0.6f, 0.0f 
    glEnable( GL_TEXTURE_2D );	
    glShadeModel( GL_SMOOTH );				

    if( !LoadTexture( "fire256.tga" ) )
    {
        return false;
    }


So, do you guys have any ideas what it could be? [EDIT] I've tested with linear filtering too, and it looks the same.

Guest Anonymous Poster
What is your glTexImage2D call?

Quote:
Original post by Anonymous Poster
What is your glTexImage2D call?


Well, the function glfwLoadTexture2D is pretty much the same as glTexImage2D, except that it loads a Targa image (.tga) from disk instead of taking image data from memory. It also sets the pixel format and so on automatically.

Quote:
Original post by cryon
Quote:
Original post by Anonymous Poster
What is your glTexImage2D call?


Well, the function glfwLoadTexture2D is pretty much the same as glTexImage2D, except that it loads a Targa image (.tga) from disk instead of taking image data from memory. It also sets the pixel format and so on automatically.

Well, then you're going to have to write your own "LoadTexture2D" function, unfortunately. It's the "internalFormat" parameter to glTexImage2D that you can set to a sized format (e.g. GL_RGBA8 for 32-bit color, GL_RGBA4 for 16-bit color, etc.).
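
For what it's worth, a replacement loader along those lines could look something like the sketch below. I'm assuming the GLFW 2.x image helpers here (glfwReadImage / GLFWimage / glfwFreeImage) and the same globals as in the original post, so double-check against your GLFW docs; the important bit is the sized GL_RGBA8 / GL_RGB8 internal format:

// Sketch of a loader that bypasses glfwLoadTexture2D so we can pick the
// internal format ourselves.  Assumes the GLFW 2.x image API
// (glfwReadImage / GLFWimage / glfwFreeImage) -- check your GLFW docs.
bool LoadTexture32( const char * path )
{
    GLFWimage img;
    if( !glfwReadImage( path, &img, 0 ) ) { return false; }

    glGenTextures( 1, &global_textures[ texture_count ] );
    glBindTexture( GL_TEXTURE_2D, global_textures[ texture_count ] );
    texture_count++;

    // Rows of tightly packed RGB data may not be 4-byte aligned.
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );

    // The sized internal format (GL_RGBA8 / GL_RGB8) is what keeps the
    // driver from silently downgrading the texture to 16 bits per pixel.
    GLint internal = ( img.Format == GL_RGBA ) ? GL_RGBA8 : GL_RGB8;
    glTexImage2D( GL_TEXTURE_2D, 0, internal, img.Width, img.Height,
                  0, img.Format, GL_UNSIGNED_BYTE, img.Data );

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

    glfwFreeImage( &img );
    return true;
}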

Quote:
Original post by deavik
Well, then you're going to have to write your own "LoadTexture2D" function, unfortunately. It's the "internalFormat" parameter to glTexImage2D that you can set to a sized format (e.g. GL_RGBA8 for 32-bit color, GL_RGBA4 for 16-bit color, etc.).


OK, I will definitely try that tomorrow! Though I believe GLFW preserves the depth and byte order from the original Targa, and in this case it's 32-bit RGBA. And the strangest part is that it worked flawlessly on a friend's computer.

I'll post the results tomorrow.

// cryon

I don't know anything about glfwLoadTexture2D, but I know ATI drivers have a 'helpful' optimization that gives you a 16-bit texture when you ask for GL_RGBA. Make sure you explicitly ask for GL_RGBA8.
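
One way to check what the driver actually gave you is to query the per-channel bit sizes of the bound texture after uploading it; a rough sketch (needs <stdio.h> for the printout):

// With the texture still bound, ask the driver how many bits per channel
// it actually allocated for mip level 0.
GLint r, g, b, a;
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r );
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g );
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b );
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a );
printf( "texture storage: R%d G%d B%d A%d\n", r, g, b, a );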

Quote:
Original post by hh10k
I don't know anything about glfwLoadTexture2D, but I know ATI drivers have a 'helpful' optimization that gives you a 16-bit texture when you ask for GL_RGBA. Make sure you explicitly ask for GL_RGBA8.


ditto

Quote:
Original post by hh10k
I don't know anything about glfwLoadTexture2D, but I know ATI drivers have a 'helpful' optimization that gives you a 16-bit texture when you ask for GL_RGBA. Make sure you explicitly ask for GL_RGBA8.


That did it! Thank you very much, deavik and hh10k. Yeah, that sure was a "helpful" optimization :)

In ATI's defense, what they are doing is in fact perfectly fine.
The OpenGL spec doesn't mandate bit depths for internal formats, so when you request GL_RGBA the driver is free to do what it likes as long as the image data has 4 components.

You can force it to look better by adjusting the texture quality slider in the control panel.

Interestingly, I don't recall ever seeing this problem myself, and I only request GL_RGBA for my internal format... I'll have to check again later, of course...
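
To make the distinction concrete, this is roughly the difference between the two requests (w, h, and pixels are placeholders here):

// Unsized internal format: "give me 4 components" -- the driver chooses the
// precision, which on some drivers/settings may end up as 16bpp (e.g. RGBA4).
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );

// Sized internal format: explicitly ask for 8 bits per channel (32bpp).
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );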
