OpenGL ATI texture lookup precision error


Hi, I've got a really tiny application in which I create a 3D texture and apply it to a quad. The texture is 1x1x5 voxels big and contains the values 0.1, 0.1, 0.1, 0.5, 1.0. I set the x and y components of the quad's texture coordinates to the interval 0..1 and the z-coordinate to a constant value. I apply the texture using a nearest-neighbour lookup, and for different z-coordinates this is the result:

Nvidia (GF 8800):
texcoord 0.79999 returns 0.5
texcoord 0.8 returns 0.5
texcoord 0.80001 returns 1.0

ATI (HD2900):
texcoord 0.7984 returns 0.5
texcoord 0.7985 returns 1.0

As you can see, the Nvidia card returns the values you would expect, while the ATI card is really wrong. I've tried a 2D texture as well, with the same result. I've got the latest drivers and the application is really basic (download it here if you'd like to try it: http://www.mediafire.com/?82h212tx2et — the z-coordinate is changed using keys 1 through 5; all values except 5 should give a grey quad). Has anyone else experienced anything similar? Is this an ATI OpenGL implementation bug, or what?

To clarify things, I'm doing really simple stuff...

glGenTextures( 1, &texture_ );
glBindTexture( GL_TEXTURE_3D, texture_ );

float* data = new float[5];
data[0] = 0.1f; data[1] = 0.1f;
data[2] = 0.1f; data[3] = 0.5f;
data[4] = 1.0f;

glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage3D( GL_TEXTURE_3D, 0, GL_LUMINANCE16, 1, 1, 5, 0,
              GL_LUMINANCE, GL_FLOAT, data );
glBindTexture( GL_TEXTURE_3D, 0 );
delete[] data;


glEnable( GL_TEXTURE_3D );
glBindTexture( GL_TEXTURE_3D, texture_ );

glBegin( GL_QUADS );
glTexCoord3d( 0.0, 0.0, zCoord_ );
glVertex3d( -halfSize, -halfSize, 0.0 );
glTexCoord3d( 0.0, 1.0, zCoord_ );
glVertex3d( -halfSize, halfSize, 0.0 );
glTexCoord3d( 1.0, 1.0, zCoord_ );
glVertex3d( halfSize, halfSize, 0.0 );
glTexCoord3d( 1.0, 0.0, zCoord_ );
glVertex3d( halfSize, -halfSize, 0.0 );
glEnd();

glDisable( GL_TEXTURE_3D );

Really basic stuff. But still... And as I said earlier, using a 2D texture gives the same result, so it's quite a serious error.

I did the math, and NV is correct.
Try making a 1x1x6 texture so that it would be a power of 2. Maybe that would fix it. Maybe not.

Yeah, I've tried that. (A 1x1x6 texture, as you suggested, is not a power of two...) A 1x1x8 texture should have a voxel boundary at 7/8 = 0.875, and on an Nvidia card it does, but on ATI it lies somewhere around 0.8740-0.8741.
