Hello!
I have a weird problem going on in my 2D video game, written in C++ and Direct3D: my textures map correctly on ATI video cards but not on NVidia video cards! I'll try to describe the problem, but it's a little hard to explain, so let me know if you need any more info. [smile]
I have a 512x512 .tga file with four 256x256 images on it. Here's how I'm mapping the different parts of the .tga file to quads:
Verts[Idx+1].tu = ul_x;  Verts[Idx+1].tv = ul_y;  // upper-left of the sub-image
Verts[Idx].tu   = lr_x;  Verts[Idx].tv   = ul_y;  // upper-right
Verts[Idx+3].tu = lr_x;  Verts[Idx+3].tv = lr_y;  // lower-right
Verts[Idx+2].tu = ul_x;  Verts[Idx+2].tv = lr_y;  // lower-left
Verts[] is the vertex array (of my CUSTOMVERTEX struct), Idx is the quad number (an integer), ul_* is the upper-left texture coordinate (float), lr_* is the lower-right texture coordinate (float), and tu/tv are float members of CUSTOMVERTEX. Looking at this code, you'd think that passing it these values would map the upper-left image of the texture onto the quad:
ul_x = 0.0f
ul_y = 0.0f
lr_x = 0.50f
lr_y = 0.50f
And it does, if you play the game on a computer with an ATI video card. But if you play it on a computer with an NVidia card, the lr_* coordinates bleed over into the adjacent image and you end up seeing texture ugliness.
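In case it matters, here's a stripped-down sketch of the setup (not my exact code; the position part is just the usual pre-transformed 2D layout, and only tu/tv really matter for this problem), along with the coordinate pairs each of the four images should get:
// Stripped-down vertex -- position fields sketched as the usual
// pre-transformed 2D setup; tu/tv are the members the mapping code above fills in.
struct CUSTOMVERTEX
{
    float x, y, z, rhw;   // screen-space position
    float tu, tv;         // texture coordinates
};

// Coordinate pairs for the four 256x256 images on the 512x512 sheet:
//   image          ul_x   ul_y   lr_x   lr_y
//   upper-left     0.0f   0.0f   0.5f   0.5f
//   upper-right    0.5f   0.0f   1.0f   0.5f
//   lower-left     0.0f   0.5f   0.5f   1.0f
//   lower-right    0.5f   0.5f   1.0f   1.0f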
Here's what really confuses me: what works for one brand of video card doesn't work for the other. For example, I can manually compensate for the bleed-over on NVidia cards by subtracting enough pixels to get the image to map correctly, but then that introduces a problem on ATI video cards. Like so:
ul_x = 0.0f
ul_y = 0.0f
lr_x = 0.49609375f
lr_y = 0.49609375f
The above will correctly map the upper-left image of the texture onto a quad on an NVidia video card, but will chop off the right and bottom edges of the same image on an ATI video card.
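(In case that magic number looks random: 0.49609375 is just 0.5 with two texels of the 512-wide texture shaved off.)
float texel = 1.0f / 512.0f;          // one texel of the 512x512 texture = 0.001953125
float lr_x  = 0.5f - 2.0f * texel;    // = 0.49609375
float lr_y  = 0.5f - 2.0f * texel;    // same adjustment vertically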
---
Any ideas on how to make my texture-mapping function work for both video cards? Thanks in advance for the help! [smile]