GeForce 8800 GTS question...

Hey guys! I want to replace my ATI vid card with an EVGA e-GeForce 8800 GTS, but I don't know if it will hook up to my CRT monitor (it uses a standard 15-pin VGA cable, not a DVI cable). Does the 8800 GTS include the adapter to connect it to my monitor? The reason I want to replace my vid card is that I'm getting texture alignment problems when my DX9 D3D 2D renderer is run on a system with an nVidia card, and my dev computer has an ATI vid card which displays my game perfectly. I want to be able to troubleshoot my engine on my own dev computer instead of wasting dev time sending a beta to my wife's computer (which has a GeForce 6800 in it) and testing it there. Efficient use of time is imperative, since I now have a 1-month-old son that demands a lot of time - time I used to spend coding. D'oh! Thanks in advance for the info!
"The crows seemed to be calling his name, thought Caw"
My Club3D 8800 GTS was delivered with two DVI/VGA adapters. I expect that other manufacturers will include these adapters too.
Quote: Original post by Demirug
My Club3D 8800 GTS was delivered with two DVI/VGA adapters. I expect that other manufacturers will include these adapters too.


Yeah, I would be surprised if you didn't get at least one adapter with the card. Otherwise, they're not exactly hard to find [smile]

Thanks for the info, guys!

I'll go ahead and make the nVidia jump... Gotta fix those texture alignment problems, and I'm getting sorta tired of the lackluster ATI drivers anyway (that's a whole 'nuther story). And gotta take care of the baby. Speaking of which, sounds like he's wanting another bottle. Sheesh, that kid eats like a friggin' orcling! Orclet? Kid orc!

Thanks again! [smile]
"The crows seemed to be calling his name, thought Caw"
IMO replacing the ATI card with an NVIDIA one will leave you in a similar situation, where you're likely to write things for the ATI card that don't work on the NVIDIA one. At least now you have an accessible NVIDIA card in your wife's PC. What will you do when you want to debug ATI problems?

I'm not saying that there's a great solution for this, and I assume you want the card partly because it's a nice upgrade, but alternatives would be a second cheap PC with a cheap NVIDIA card, or two cards in the same PC (which might not be all that much fun, but worked last time I tried it -- which granted was a couple of years ago).
Thanks for the suggestion, ET3D.

My new PCI-E motherboard has built-in Intel video which behaves similarly to the Radeon card I used to have, except it'll only do Direct3D in software (which is good, because it's much less forgiving of sloppy renderer code)... I was thinking of pulling the monitor cord off the nVidia card and sticking it on the onboard video plug in order to test for other video card compatibility.

The only problem I had with my renderer on ATI cards was that at one point it wouldn't render at all, and that turned out to be a silly mistake in how I was rendering my indexed quads (if you had seen my renderer code, you'd think I failed math in middle school). But the problem I'm currently having with nVidia is figuring out how to align parts of one giant texture onto quads... Imagine four images on one TGA file, and then rendering four quads, one image on each quad. A tiny bit of an adjacent image bleeds over into each quad, and it only happens on nVidia cards... It renders perfectly on ATI and every other video card I've tested. On ATI/everything else, the quads look like four pictures on the screen. On nVidia cards, it looks like four pictures, each with a long seam on the left side and the bottom. Pretty strange stuff!

Anyhoo, I won't be abandoning other video test options, since I have my onboard video on which to test my game. It's just extremely time-consuming to compile a test, move it to my wife's computer, go to my wife's computer to execute it, and then go back to my dev computer to try another idea. Wash, rinse, repeat... Besides, when I'm almost done with my game and am ready to test for ATI compatibility, I'll yank the GeForce 6800 from my wife's computer and replace it with my old Radeon 9600 Pro. Think she'd notice? [grin]
"The crows seemed to be calling his name, thought Caw"
This sounds like this problem: Directly Mapping Texels to Pixels
Yeah, that's what it is. I found that article some time ago, and I'm still trying to understand what they mean by 'mapping texels to pixels'. Here's a quote from the part I'm trying to understand:

... remember to subtract 0.5 units from the x and y components of your vertex positions when working in transformed screen space in order to correctly align texels with pixels.

Subtract 0.5 from a single pixel size? For instance, if my texture is 512x512, then should I subtract 1/(512*2) from that texture's u/v coordinates? Or do I subtract 0.5 from the u/v coordinate itself? The first almost works (still have a tiny bit of bleed-over), and the second doesn't work at all (shifts texture coordinates by one-half).

If you have a moment to help me, here's a simple scenario:

- My 512x512 texture contains four pictures, in a 2x2 pattern.
- I'm mapping each of the four pictures onto its own quad.
- The first quad's upper-left/lower-right u/v coords are (0,0), (0.5, 0.5).
- The second quad's upper-left/lower-right u/v coords are (0.5,0), (1.0, 0.5).
- The third quad's upper-left/lower-right u/v coords are (0,0.5), (0.5, 1.0).
- The fourth quad's upper-left/lower-right u/v coords are (0.5, 0.5), (1.0, 1.0).

How should I correctly incorporate the 'subtract 0.5 units from the x and y components' to make the texture appear in the quad with no bleed-over?

Thanks in advance for the help, I really appreciate it!

PS: Hope there aren't any mistakes in the above post... I'm typing in a hurry and don't have time to proof-read. Yikes!
"The crows seemed to be calling his name, thought Caw"
Don't modify the UV coordinates. This seems to be a common misunderstanding.

You have to subtract the half pixel from the vertex positions in screen space. If you use pre-transformed vertices (D3DDECLUSAGE_POSITIONT or D3DFVF_XYZRHW), this means you subtract 0.5 from x and y. If your vertex coordinates are in homogeneous clip space (-1 to 1 for x and y), the width of a half pixel is (2 / screen width) / 2 = 1 / screen width, and its height is (2 / screen height) / 2 = 1 / screen height.
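
Here's a minimal sketch of the pre-transformed case, in case it helps (the struct name and the 256x256 quad size are just illustrative, not from your renderer): the positions get the -0.5 shift, and the UVs stay exactly as you listed them.

```cpp
// Illustrative only: top-left picture of the 2x2 atlas drawn as a 256x256
// quad at the screen origin, using pre-transformed (XYZRHW) vertices.
struct ScreenVertex
{
    float x, y, z, rhw;   // D3DFVF_XYZRHW
    float u, v;           // D3DFVF_TEX1
};

// Positions are shifted by -0.5 in x and y; texture coordinates are untouched.
ScreenVertex quad[4] =
{
    //     x        y      z    rhw    u     v
    {  -0.5f,   -0.5f,  0.0f, 1.0f, 0.0f, 0.0f },  // top-left
    { 255.5f,   -0.5f,  0.0f, 1.0f, 0.5f, 0.0f },  // top-right
    {  -0.5f,  255.5f,  0.0f, 1.0f, 0.0f, 0.5f },  // bottom-left
    { 255.5f,  255.5f,  0.0f, 1.0f, 0.5f, 0.5f },  // bottom-right
};
```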

If you don't use pre-transformed vertices, you can subtract the half pixel using a translation matrix as part of your world-view-projection transformation chain.
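
For example, here is a sketch of that approach using D3DX (the helper name and the device/width/height parameters are just assumptions; it also assumes an orthographic 2D projection, where w stays 1, which is the usual case for a 2D renderer):

```cpp
#include <d3dx9.h>

// Sketch only: build an orthographic 2D projection with the half-pixel
// offset folded in, so the untransformed vertices themselves need no edits.
void SetHalfPixelProjection(IDirect3DDevice9* device, int width, int height)
{
    D3DXMATRIX proj, halfPixel;

    // Screen with (0,0) at the top-left, one unit per pixel.
    D3DXMatrixOrthoOffCenterLH(&proj, 0.0f, (float)width,
                               (float)height, 0.0f, 0.0f, 1.0f);

    // One pixel spans 2/width by 2/height in clip space, so half a pixel
    // is 1/width on x and 1/height on y (y points up in clip space).
    D3DXMatrixTranslation(&halfPixel, -1.0f / width, 1.0f / height, 0.0f);

    D3DXMATRIX shifted = proj * halfPixel;   // row-vector order: project, then shift
    device->SetTransform(D3DTS_PROJECTION, &shifted);
}
```

You only need to set this once (or whenever the back buffer size changes), and every quad drawn through that projection picks up the offset automatically.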

