I'm working on a new rig with an ATI card, after previously working on one with an nVidia card. After recompiling and running my project on the new rig, I get black borders around my textures.
My original code that worked on nVidia cards:
glBindTexture( GL_TEXTURE_2D, sprites.get_texture() );
glBegin( GL_QUADS );
    glTexCoord2f( 0.f, 0.f );
    glVertex2f( -23.f, -23.f );
    glTexCoord2f( sprites.width() / sprites.t_width(), 0.f );
    glVertex2f( 23.f, -23.f );
    glTexCoord2f( sprites.width() / sprites.t_width(), sprites.height() / sprites.t_height() );
    glVertex2f( 23.f, 23.f );
    glTexCoord2f( 0.f, sprites.height() / sprites.t_height() );
    glVertex2f( -23.f, 23.f );
glEnd();
Now, I can make the black borders disappear by shifting the texture coordinates inward by half a texel:
glBindTexture( GL_TEXTURE_2D, sprites.get_texture() );
GLfloat s = ( 1.f / sprites.t_width() ) / 2.f;
GLfloat t = ( 1.f / sprites.t_height() ) / 2.f;
glBegin( GL_QUADS );
    glTexCoord2f( 0.f + s, 0.f + t );
    glVertex2f( -23.f, -23.f );
    glTexCoord2f( sprites.width() / sprites.t_width() - s, 0.f + t );
    glVertex2f( 23.f, -23.f );
    glTexCoord2f( sprites.width() / sprites.t_width() - s, sprites.height() / sprites.t_height() - t );
    glVertex2f( 23.f, 23.f );
    glTexCoord2f( 0.f + s, sprites.height() / sprites.t_height() - t );
    glVertex2f( -23.f, 23.f );
glEnd();
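One thing I noticed while writing this up: if width() and t_width() (and their height counterparts) happen to return integers, the divisions above are integer divisions and truncate to 0 whenever the sprite is smaller than the atlas. A small helper that forces the math into floats and folds in the half-texel inset might look like this (a sketch; tex_coord is my own hypothetical name, not part of any API):

```cpp
// Normalized texture coordinate for an edge of a sub-rectangle of the atlas.
// Casts force float division so integer accessors cannot truncate the ratio,
// and the half-texel offset keeps the sampler away from neighbouring sprites.
float tex_coord( int pixel, int texture_size, bool inset_from_edge )
{
    float half_texel = 0.5f / static_cast<float>( texture_size );
    float coord = static_cast<float>( pixel ) / static_cast<float>( texture_size );
    // Near coordinate 0 we nudge inward (+), near the far edge we nudge back (-).
    return inset_from_edge ? coord - half_texel : coord + half_texel;
}
```

Then each corner becomes e.g. glTexCoord2f( tex_coord( 0, sprites.t_width(), false ), tex_coord( sprites.height(), sprites.t_height(), true ) );, which at least keeps the offset arithmetic in one place.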
But is there a more elegant way to do this?