Texture coordinate and mapping question (ATI vs. NVidia)

6 comments, last by Evil Steve 16 years, 11 months ago
Hello! I have a weird problem going on in my 2D videogame written in C++ and Direct3D... My textures map correctly on ATI video cards but not on NVidia video cards! I'm going to try to describe the problem, but it's sorta hard to explain, so let me know if you need any more info. [smile] I have a 512x512 .tga file with four 256x256 images on it. Here's how I'm mapping the different parts of the .tga file to quads:

Verts[Idx+1].tu = ul_x;   // upper-left corner of the sub-image
Verts[Idx+1].tv = ul_y;
Verts[Idx].tu = lr_x;     // upper-right corner
Verts[Idx].tv = ul_y;
Verts[Idx+3].tu = lr_x;   // lower-right corner
Verts[Idx+3].tv = lr_y;
Verts[Idx+2].tu = ul_x;   // lower-left corner
Verts[Idx+2].tv = lr_y;
Verts[] is an array of my CUSTOMVERTEX struct, Idx is the quad number (an integer), ul_ is the upper-left texture coordinate (float), lr_ is the lower-right texture coordinate (float), and tu/tv are float members of CUSTOMVERTEX. Looking at this function, you'd think that sending it these values would map the upper-left image of the texture to the quad:

ul_x = 0.0f
ul_y = 0.0f
lr_x = 0.50f
lr_y = 0.50f
And it does, if you play the game on a computer with an ATI video card. But if you play it on a computer with an NVidia card, the lr_ coordinates bleed over into the adjacent image and you end up seeing texture ugliness. Here's what really confuses me: what works for one brand of video card doesn't work for the other. For example, I can manually compensate for the bleed-over problem on NVidia cards by subtracting enough pixels to allow the correct mapping of an image (here two texels: 0.5 - 2/512 = 0.49609375), but then it introduces a problem on ATI video cards. Like so:

ul_x = 0.0f
ul_y = 0.0f
lr_x = 0.49609375f
lr_y = 0.49609375f
The above will correctly map the upper-left image of a texture to a quad on an NVidia video card, but will chop off the right and bottom edges of the same image on an ATI video card.

Any ideas on how to make my texture-mapping function work for both video cards? Thanks in advance for the help! [smile]
"The crows seemed to be calling his name, thought Caw"
Have you read Directly Mapping Texels to Pixels?
Sirob Yes.» - status: Work-O-Rama.
Thanks for the reply, sirob!

I read it, but I don't understand how to put that knowledge to use... I know that pixels are dots and not squares, but I don't know how to mathematically incorporate that fact into my texture-coordinate function. For example, Figure 2 in the link you provided shows a display with coordinates (-0.5, -0.5) and (4.5, 0.5); how do I turn that into a texture area? Where does the -0.5 come from, and what is it -0.5 of? The screen resolution? The texture size itself? If I merely subtract 0.5 from 1/512 (it's a 512x512 texture) in my texture-coordinate function, there's no visible difference on either video card. But if I fudge the numbers until I find a figure that works, it works fine on NVidia cards but chops parts of the texture off on ATI video cards.

What's got me scratching my head is the fact that when I get it to work on one video card, it does something adverse on the other video card.

Would you please explain how to incorporate the 'Directly Mapping Texels to Pixels' idea? Me still confoosd. [smile]
"The crows seemed to be calling his name, thought Caw"
Dookie, just subtract 0.5 from your vertex coordinates and keep your texture coordinates as they are; this should give you the proper mapping on most if not all video cards.

P.S. You haven't posted the code where you specify vertex coordinates. If you are using D3DFVF_XYZRHW, make sure to subtract 0.5 from all four corners. If you are not using D3DFVF_XYZRHW, you will find it very difficult to get the correct mapping, although I suspect that with shaders you could apply the offset after transforming the vertices.
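
For example, here's a rough sketch of a pretransformed quad with the offset applied (I'm making up the struct layout and function name, since I haven't seen your vertex code; the texture coordinates stay exactly as you already compute them):

// Assumed D3DFVF_XYZRHW | D3DFVF_TEX1 vertex layout (hypothetical).
struct CUSTOMVERTEX
{
    float x, y, z, rhw;   // screen-space position
    float tu, tv;         // texture coordinates
};

// Fill one screen-space quad at (x, y) of size (w, h), shifting every
// corner by -0.5 so pixel centers line up with texel centers.
void SetQuad(CUSTOMVERTEX* Verts, int Idx,
             float x, float y, float w, float h,
             float ul_x, float ul_y, float lr_x, float lr_y)
{
    const float left   = x     - 0.5f;   // half-pixel offset
    const float top    = y     - 0.5f;
    const float right  = x + w - 0.5f;
    const float bottom = y + h - 0.5f;

    // Upper-left corner
    Verts[Idx+1].x = left;   Verts[Idx+1].y = top;
    Verts[Idx+1].z = 0.0f;   Verts[Idx+1].rhw = 1.0f;
    Verts[Idx+1].tu = ul_x;  Verts[Idx+1].tv = ul_y;

    // Upper-right corner
    Verts[Idx].x = right;    Verts[Idx].y = top;
    Verts[Idx].z = 0.0f;     Verts[Idx].rhw = 1.0f;
    Verts[Idx].tu = lr_x;    Verts[Idx].tv = ul_y;

    // Lower-right corner
    Verts[Idx+3].x = right;  Verts[Idx+3].y = bottom;
    Verts[Idx+3].z = 0.0f;   Verts[Idx+3].rhw = 1.0f;
    Verts[Idx+3].tu = lr_x;  Verts[Idx+3].tv = lr_y;

    // Lower-left corner
    Verts[Idx+2].x = left;   Verts[Idx+2].y = bottom;
    Verts[Idx+2].z = 0.0f;   Verts[Idx+2].rhw = 1.0f;
    Verts[Idx+2].tu = ul_x;  Verts[Idx+2].tv = lr_y;
}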
Hello,

I ran into a similar problem, which I thought would be solved using the knowledge from the MSDN article posted by Sirob and offsetting the quad by (-0.5, -0.5).

What I do is just render a fullscreen textured quad at 640x480 using post-transformed vertices. I aligned the quad with the screen pixels as described in the article mentioned. Just for the sake of exactness, the vertex coords of the quad are (-0.5, -0.5, 639.5, 479.5) (being the left, top, right and bottom coordinates).

I mapped a texture onto the quad of the same size as the screen (640x480). I prepared a simple texture where potential artifacts would be easily observable. Here's a link to the texture:

http://kutny.net/640x480_test.png

But for some unknown reason the rendered image was very blurry, as if the pixels and texels were still misaligned and color interpolation occurred. Here's a link to a screenshot of the rendered quad:

http://kutny.net/fsquadat640.png

- very blurry indeed. But then, just out of curiosity, I tried making the texture power-of-2 sized (1024x1024), not by resizing the image but just by enlarging the "canvas size", then corrected the texture coordinates of the quad (so u runs from 0 to 640/1024 and v from 0 to 480/1024), and voila, the image rendered as sharply as the original texture.

Does anybody know why this strange behavior occurs when using non power of 2 textures? I already checked that my graphics card (Radeon X1950) supports non power of two textures, so theoretically this should not be a problem, right?

Martin
Dinsdale, you are most likely loading your textures with D3DX. Upon loading, D3DX resizes the texture to the nearest power of two, so your final image contains distortion.

You can pass the D3DX_DEFAULT_NONPOW2 flag as the width and height to D3DXCreateTextureFromFileEx and similar routines to avoid the problem.
Thanks, Lifepower! That's exactly my case (I'm using D3DXCreateTextureFromFileEx to load textures) and it solved my problem. It was also necessary to set MipLevels to 1 when loading the texture.
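
Here's roughly what my loading call looks like now (a sketch; the usage, pool and format arguments are just what I happen to use):

LPDIRECT3DTEXTURE9 pTexture = NULL;

// Keep the file's original dimensions instead of rounding them up to a
// power of two, and create a single mip level.
HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice,                 // IDirect3DDevice9*
    "640x480_test.png",      // source file
    D3DX_DEFAULT_NONPOW2,    // Width: keep the file's width
    D3DX_DEFAULT_NONPOW2,    // Height: keep the file's height
    1,                       // MipLevels: no mip chain
    0,                       // Usage
    D3DFMT_UNKNOWN,          // take the format from the file
    D3DPOOL_MANAGED,         // Pool
    D3DX_FILTER_NONE,        // Filter: no resampling
    D3DX_DEFAULT,            // MipFilter
    0,                       // ColorKey: disabled
    NULL,                    // pSrcInfo
    NULL,                    // pPalette
    &pTexture);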

But after further investigation I found out that more criteria have to be met for non power of 2 textures to work correctly (citing the DirectX SDK; a rough sketch of the matching state setup follows the list):

- The texture addressing mode for the texture stage is set to D3DTADDRESS_CLAMP.
- Texture wrapping for the texture stage is disabled (D3DRS_WRAPn set to 0).
- Mipmapping is not in use (use magnification filter only).
- Texture formats must not be D3DFMT_DXT1 through D3DFMT_DXT5.
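
In code, meeting those conditions boils down to something like this (a rough sketch for texture stage / sampler 0):

// Conditional non-pow-2 support: clamp addressing, no wrapping, no mipmapping.
pDevice->SetSamplerState(0, D3DSAMP_ADDRESSU,  D3DTADDRESS_CLAMP);
pDevice->SetSamplerState(0, D3DSAMP_ADDRESSV,  D3DTADDRESS_CLAMP);
pDevice->SetRenderState(D3DRS_WRAP0, 0);                        // no texture wrapping on stage 0
pDevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);   // mipmapping off
pDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
pDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
// ...and the texture itself must not use a DXT1-DXT5 compressed format.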

So I guess the best thing is to avoid non power of 2 textures if I want my application to be robust.
Quote:Original post by Dinsdale
So I guess the best thing is to avoid non power of 2 textures if I want my application to be robust.
Yep. I avoid npow2 textures like the plague, mainly because a lot of cards (older ones especially) don't support them at all.
