Archived

This topic is now archived and is closed to further replies.

Tauqeer

Best DirectX video card


Hello guys. I want to upgrade my video card, but I don't know which one is better and supports more features, so I need your opinion.

The new, soon to be released, card coming out from ATI (the Radeon X800) also rocks as much as the GeForce 6. It's a tad slower on some things in benchmark tests, and a tad faster in others (i.e. it averages out to the same thing). The big plus is that the X800 actually draws less power and produces less heat than the current 9800 series of cards. The GeForce 6, in contrast, runs hotter and requires two extra power connectors on the card to run (you'll need to upgrade your PSU to at least a 450W unit to be able to use the card and not have your machine crap out all the time). This is sad because I've only ever owned NVIDIA cards and now I have to switch.

GeForce 6800 review:
http://www6.tomshardware.com/graphic/20040414/index.html

Radeon X800 review:
http://www6.tomshardware.com/graphic/20040504/index.html

-me

[edited by - Palidine on May 20, 2004 1:58:06 PM]

[edited by - Palidine on May 20, 2004 2:00:00 PM]

The big minus with the X800 is that it doesn't have 3.0 shaders, and personally I would never even consider buying a card that doesn't have that. And why? Because I want them.

If there's an issue with the power supply being too weak, then I'll just buy a new one.

- Benny -

quote:
Original post by benstr
The big minus with the X800 is that it doesn't have 3.0 shaders, and personally I would never even consider buying a card that doesn't have that. And why? Because I want them.
ATI is waiting for Longhorn and Shader Model 4.0. Shader Model 3.0 is nice, and has some nice new features, but it's nothing revolutionary. Also, the X800 has the new 3Dc normal map compression, which allows normal maps to be compressed at a 4:1 ratio. This can make any game look loads better.
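To make the 3Dc point concrete, here is a minimal sketch of how a D3D9 app could use it; the helper name, the sampler name, and the HLSL snippet are illustrative assumptions, not from any post above. 3Dc stores only the X and Y of each normal at 8 bits per pixel, and the pixel shader rebuilds Z, which is where the 4:1 saving over an uncompressed 32-bit normal map comes from.

#include <d3d9.h>

// Hypothetical helper: check whether the driver exposes the 3Dc ("ATI2")
// FOURCC format before creating compressed normal-map textures with it.
bool Supports3Dc(IDirect3D9* d3d, UINT adapter, D3DFORMAT displayFmt)
{
    const D3DFORMAT kAti2 = (D3DFORMAT)MAKEFOURCC('A', 'T', 'I', '2');
    return SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL,
                                            displayFmt, 0,
                                            D3DRTYPE_TEXTURE, kAti2));
}

// Pixel-shader fragment (HLSL, embedded as a string) that rebuilds the Z
// component of a normal stored in a two-channel 3Dc texture.
static const char kFetchNormalHlsl[] =
    "sampler2D NormalMap;                                        \n"
    "float3 FetchNormal(float2 uv)                               \n"
    "{                                                           \n"
    "    // Only X and Y are stored; expand from [0,1] to [-1,1].\n"
    "    float2 xy = tex2D(NormalMap, uv).rg * 2.0 - 1.0;        \n"
    "    // Rebuild Z from the unit-length constraint.           \n"
    "    float  z  = sqrt(saturate(1.0 - dot(xy, xy)));          \n"
    "    return float3(xy, z);                                   \n"
    "}                                                           \n";

The same reconstruction works whether the two channels come from 3Dc or from a plain two-channel texture, so it also doubles as an easy fallback path.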

Personally, I'd wait until Longhorn to throw the $$$ for a top-tier graphics card.


Dustin Franklin
Microsoft DirectX MVP

As a developer, buying a new card that does not have SM3.0 is pretty much the same as buying a card now without programmable shaders (i.e. the GeForce MX series). You are basically stuck with the old technology.

You will see die-hard fans of certain hardware companies try to argue this point, but the fact remains that the GeForce 6 is DAMN FAST and has SM3.0 for all your development and future gameplay needs.

Why work with a card that hinders you in shader programming?


I would buy one of the following which are amazing cards:

1. Geforce 6 series if you have money to burn and want some great 3.0 shaders.

2. Radeon 9800 XT or GeforceFX 5900 (pre-nerf version) or GeforceFX 5950. These are great cards based on SM2.0 and won't cost you an arm and a leg.

The new X800s are not bad, but you basically get the same performance as the GeForce 6, and you are missing hardware displacement mapping, SM3.0, nVidia drivers, etc. There is no reason to pay top dollar for a card that just gives you the same features as last year but a little faster. The 9800 or 5950 would be preferable to the X800 due to having the same features and being MUCH easier on the wallet.


[edited by - Imperil on May 20, 2004 3:50:09 PM]

quote:
Original post by Imperil
As a developer, buying a new card that does not have SM3.0 is pretty much the same as buying a card now without programmable shaders (i.e. the GeForce MX series). You are basically stuck with the old technology.

Well, although I do agree that the 6800s are cool cards (but what were they thinking with all the power and cooling stuff??), I think you're *slightly* overstating the shader stuff.

No shaders vs. 2.0 shaders is HUGE; 2.0 vs. 3.0 is a lot more subtle. Actually, the jump from 1.4 -> 2.0 is a lot larger than 2.0 -> 3.0, IMO. Furthermore, I (and many others) have some major concerns about the use of some of the larger SM3.0 features, like dynamic branching, for performance reasons. Many of these things totally break the parallelism of GPUs, which is the reason we are doing this in shaders rather than on the CPU in the first place...

Anyway, I agree with Dustin in that I'm not convinced much will happen with the 3.0 shaders before we're already into the 4.0 model and Longhorn (that's only a year off, remember), which promise to be a major update.

I'm not saying not to buy a 6800... just don't put too much emphasis on the shader support as it isn't likely to be an issue in either case.

Also note that with the X800s you get 3Dc, which is becoming increasingly important as practically everyone switches over to high-quality normal maps. The feature WILL make a very noticeable difference in the quality of any game that uses it. Furthermore, when the 16-pipe X800 comes out, it will probably pull away from the 6800 a bit more in terms of performance. Granted, both are fast cards, but ATI seems to have the upper hand by a slight margin this time.

[edited by - AndyTX on May 20, 2004 4:53:56 PM]

In a roundtable that a site (I don't remember its name) did with a lot of developers from big companies, the developers were asked whether VS 3.0 is useful to them, and most of them said that they couldn't find a use for the VS 3.0 instructions!
Of course, others (one of them was Sweeney) said that VS 3.0 is very important and that they can use it to do a lot of better things.
So you can't be sure the GeForce 6 is worth buying, especially considering that the X800 is much faster.

Can't think of uses for 3.0 shaders!? Can't be thinking too hard.
Displacement mapping is great. Dynamic branching is amazingly useful; how many programs have you written without an if or a for statement in them? There are ways around it in 2.0, but they cost quite a few instructions. With dynamic branching you can build more flexible shaders, so what would be four shaders for different numbers of lights and parameters can now be just one. Some raytracing can be done, so voxels can be rendered. More and more physics can be executed within the shaders: I have seen some demos of good cloth animation running on VS3.0. Texture lookups can be made in the vertex shader.
3.0 shaders will bring many improvements.
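To make the "four shaders collapse into one" idea concrete, here is a rough sketch of that kind of ps_3_0 uber-shader, compiled through D3DX. All the names (g_numLights, the 8-light cap) are illustrative assumptions, not something from the thread; under ps_2_0 you would typically compile one shader per light count instead.

#include <d3d9.h>
#include <d3dx9.h>

// Hypothetical ps_3_0 "uber-shader": one pixel shader that loops over a
// variable light count using dynamic looping/branching.
static const char kUberShaderHlsl[] =
    "float4 g_lightColor[8];                                          \n"
    "float3 g_lightDir[8];                                            \n"
    "int    g_numLights;       // set per batch from the app          \n"
    "float4 main(float3 n : TEXCOORD0) : COLOR                        \n"
    "{                                                                \n"
    "    float3 nn = normalize(n);                                    \n"
    "    float4 c  = 0;                                               \n"
    "    // A genuinely dynamic loop count: legal in ps_3_0, whereas  \n"
    "    // ps_2_0 would need one compiled shader per light count.    \n"
    "    for (int i = 0; i < g_numLights; ++i)                        \n"
    "        c += g_lightColor[i] * saturate(dot(nn, -g_lightDir[i]));\n"
    "    return c;                                                    \n"
    "}                                                                \n";

HRESULT CreateUberShader(IDirect3DDevice9* dev, IDirect3DPixelShader9** out)
{
    ID3DXBuffer* code = NULL;
    ID3DXBuffer* errors = NULL;
    HRESULT hr = D3DXCompileShader(kUberShaderHlsl, sizeof(kUberShaderHlsl) - 1,
                                   NULL, NULL, "main", "ps_3_0",
                                   0, &code, &errors, NULL);
    if (errors) errors->Release();
    if (FAILED(hr)) return hr;
    hr = dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), out);
    code->Release();
    return hr;
}

The app then feeds g_numLights through SetPixelShaderConstantI instead of binding a different shader per light count.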

quote:
Original post by Drath
Can't think of uses for 3.0 shaders!? Can't be thinking too hard.

I agree that vertex shader texture lookup is cool, but again, it makes more sense to wait for SM4.0, where pixel and vertex shaders are much more unified into a single architecture.

But do remember that there is still the concept of things "better to do on the CPU" and "better to do on the GPU". The latter is the category of vectorized operations and extreme parallelism, which CANNOT be accomplished when different pixels can take different execution paths (e.g. dynamic branching).

Don't just assume that "everything we can offload to the GPU is good news!". As graphics become more complex and people demand higher resolutions and quality (AA/AF, trilinear, etc.), the GPU becomes a much more precious resource. Remember that we're talking about a processor running at a fraction of the speed of our CPU; the only reason we're trying to offload ANYTHING is because the GPU is optimized for certain types of work. Next thing you know, people will be trying to do AI on the GPU!

Again, this doesn't invalidate the need for new shader features, but it does suggest that perhaps we haven't gotten the design right yet. From what I've seen of SM4.0, it is bringing us a lot closer.
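Since vertex shader texture lookup keeps coming up, here is a minimal vs_3_0 sketch of the classic use, displacement mapping a mesh from a height map with tex2Dlod. Everything here (the HeightMap sampler, the 0.25 scale, g_worldViewProj) is an illustrative assumption; SM2.0 parts simply cannot sample a texture at the vertex stage like this.

// Hypothetical vs_3_0 displacement-mapping vertex shader (HLSL as a string):
// the vertex stage samples a height map and pushes each vertex along its
// normal, which is exactly the SM3.0 vertex texture fetch being discussed.
static const char kDisplaceVsHlsl[] =
    "sampler2D HeightMap;                                            \n"
    "float4x4  g_worldViewProj;                                      \n"
    "void main(float4 pos      : POSITION,                           \n"
    "          float3 normal   : NORMAL,                             \n"
    "          float2 uv       : TEXCOORD0,                          \n"
    "          out float4 oPos : POSITION,                           \n"
    "          out float2 oUV  : TEXCOORD0)                          \n"
    "{                                                               \n"
    "    // Vertex-stage sampling must pick the mip level explicitly.\n"
    "    float h = tex2Dlod(HeightMap, float4(uv, 0, 0)).r;          \n"
    "    pos.xyz += normal * h * 0.25;   // illustrative scale       \n"
    "    oPos = mul(pos, g_worldViewProj);                           \n"
    "    oUV  = uv;                                                  \n"
    "}                                                               \n";
// Compile with D3DXCompileShader(..., "main", "vs_3_0", ...) and bind the
// height map to a vertex sampler (D3DVERTEXTEXTURESAMPLER0) via SetTexture.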

quote:
Original post by circlesoft
Personally, I'd wait until Longhorn to throw the $$$ for a top-tier graphics card.


Dustin Franklin
Microsoft DirectX MVP


Yes. However, when that day arrives, what is to say that buying a card then won't feel like a waste of money, because there's this other card just around the corner, and another one after that. I guess it depends on how much you are willing to/can spend and what you want from your card. The 6800 is my choice because I want the 3.0 shaders, not because I don't want ATI. I don't care if I have an ATI card or an nVidia card, as long as it can do what I want it to do.

And when SM4.0 is here, I'll just buy a card that supports that. Expensive, yes, but definitely worth it, at least for me.

- Benny -

As Drath said, 3.0 allows you to combine many different shaders into one. This can considerably reduce the batch overhead, since you don't need to switch shaders so often.
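A rough sketch of what that looks like on the API side, assuming an uber-shader like the ps_3_0 sketch above: the shader is bound once, and each batch only touches an integer constant instead of paying a SetPixelShader per material. The Batch type and its fields are made up purely for illustration.

#include <d3d9.h>
#include <cstddef>

// Illustrative per-batch data; a real engine would carry buffers, textures, etc.
struct Batch
{
    int  lightCount;
    void Draw(IDirect3DDevice9* dev) const { (void)dev; /* SetStreamSource + DrawIndexedPrimitive */ }
};

// One shader switch for the whole scene; per-batch variation goes through a
// cheap integer constant instead of a shader change.
void DrawScene(IDirect3DDevice9* dev, IDirect3DPixelShader9* uberShader,
               const Batch* batches, std::size_t count)
{
    dev->SetPixelShader(uberShader);                    // bind once
    for (std::size_t i = 0; i < count; ++i)
    {
        const int numLights[4] = { batches[i].lightCount, 0, 0, 0 };
        dev->SetPixelShaderConstantI(0, numLights, 1);  // register i0, one vec4i
        batches[i].Draw(dev);                           // no shader switch here
    }
}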

Shader Model 3.0 is great; the two main points of it are (as have been mentioned):

1) Vertex shader texture lookup
2) Stream frequency

Now, while the first one is a pretty novel feature and allows you to do all kinds of very cool and interesting effects, I'm guessing that not everybody in the world will have uses for it (unless they're writing real cutting-edge stuff). However, stream frequency is a fantastic feature that could really help performance across the board, and anybody who can use it probably should.
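For anyone who hasn't met it, "stream frequency" is the Direct3D 9 geometry instancing path exposed through IDirect3DDevice9::SetStreamSourceFreq. A minimal sketch follows; the buffer layout, strides, and the assumption that an index buffer and vertex declaration are already bound are all illustrative.

#include <d3d9.h>

// Hypothetical instanced draw: stream 0 holds the mesh, stream 1 holds one
// element of per-instance data (e.g. a world matrix), and a single draw call
// renders every instance.
void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* meshVB, UINT meshStride,
                   IDirect3DVertexBuffer9* instanceVB, UINT instanceStride,
                   UINT numInstances, UINT numVertices, UINT numTriangles)
{
    // Stream 0: the mesh geometry, walked once per instance.
    dev->SetStreamSource(0, meshVB, 0, meshStride);
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);

    // Stream 1: advances once per instance rather than once per vertex.
    dev->SetStreamSource(1, instanceVB, 0, instanceStride);
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

    // Index buffer and vertex declaration are assumed to be set already.
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                              numVertices, 0, numTriangles);

    // Restore default frequencies so later draws behave normally.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}

That is where the performance win comes from: a whole field of identical objects collapses into one draw call.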

That said, for now, I'll be sticking with 2.0 shaders. I think that getting longer 2.0 shaders at better speeds than we have previously been able to (the FX series allowed 1024 instructions but ran like a dog) will be a nice improvement that is probably more accessible to people than the high-end SM3 cards right now.

In short:

If you want reasonable SM2 - Radeon 9800 series
If you want great SM2 - Radeon X800 series
If you want SM3 - GeForce 6 series

-Mezz

Don't forget that the NV40 also supports floating point texture blending and filtering, which IMHO is more important than SM3.0 for a game developer.
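Those two capabilities can be queried at runtime rather than assumed from the card name. A hedged sketch; the FP16 format choice and the helper name are mine, not from the post above.

#include <d3d9.h>

// Hypothetical capability query: can this adapter alpha-blend into, and
// bilinearly filter, FP16 (A16B16G16R16F) textures? Those are exactly the
// "floating point blending and filtering" features mentioned above.
bool SupportsFp16BlendAndFilter(IDirect3D9* d3d, UINT adapter, D3DFORMAT displayFmt)
{
    const HRESULT blend = d3d->CheckDeviceFormat(
        adapter, D3DDEVTYPE_HAL, displayFmt,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);

    const HRESULT filter = d3d->CheckDeviceFormat(
        adapter, D3DDEVTYPE_HAL, displayFmt,
        D3DUSAGE_QUERY_FILTER,
        D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);

    return SUCCEEDED(blend) && SUCCEEDED(filter);
}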

If you're taking this development seriously, you need to get a 3.0 card now, then a 4.0 card when they arrive. Of course, that means you'll never finish your game, because you'll just have figured out Shader 3 when Shader 4 comes along, etc. If you're making a game rather than just playing, you need to have some kind of long-term plan as to when it'll be done, or at least the target specs.

quote:
Original post by mohamed adel
...so you can't be sure the GeForce 6 is worth buying, especially considering that the X800 is much faster.


*Much* faster? Where did you hear that nonsense? In actual fact, the GeForce 6800 beats the Radeon in quite a few tests.



quote:
Original post by Jx
quote:
Original post by mohamed adel
...so you can't be sure the GeForce 6 is worth buying, especially considering that the X800 is much faster.


*Much* faster? Where did you hear that nonsense? In actual fact, the GeForce 6800 beats the Radeon in quite a few tests.







Plus, after this latest ATI marketing scandal (i.e. the filtering), it pretty much shows that all of the benchmarks were completely skewed towards ATI, and even then the 6800 came out on top in a little over half the tests, which is pretty bad.



[edited by - imperil on May 22, 2004 2:03:55 AM]

quote:
Original post by Mezz
Shader Model 3.0 is great


The biggest problem that I see with Shader 3.0 isn't with the model itself, but rather with its IHV support:
Shader 2.0 Support:
GeForce FX 5950 U
GeForce FX 5900 U
GeForce FX 5900
GeForce FX 5900 XT
GeForce FX 5800 U
GeForce FX 5800
GeForce FX 5700 U
GeForce FX 5700
GeForce FX 5600 U (FC)
GeForce FX 5600 U
GeForce FX 5600
GeForce FX 5200 U
GeForce FX 5200
Radeon 9800 XT
Radeon 9800 Pro 256
Radeon 9800 Pro
Radeon 9800
Radeon 9800 SE
Radeon 9700 Pro
Radeon 9700
Radeon 9600 XT
Radeon 9600 Pro
Radeon 9600
Radeon 9600 SE
Radeon 9500 Pro
Radeon 9500 128
Radeon 9500 64

Shader 3.0 Support:
GeForce6

How are game companies supposed to use SM 3.0 if only one IHV supports it? They're going to have to wait until both Nvidia and ATI are in sync with the Shader 4.0, Longhorn-generation cards.
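Which is why, in practice, anything targeting SM3.0 this generation has to check the caps at startup and keep a 2.0 fallback that both IHVs can run. A minimal sketch; the fallback policy itself is just an illustration.

#include <d3d9.h>

// Hypothetical profile selection: use the 3.0 path only when the device
// reports it, otherwise fall back to the 2.0 path every card above supports.
void PickShaderProfiles(IDirect3DDevice9* dev,
                        const char** vsProfile, const char** psProfile)
{
    D3DCAPS9 caps;
    dev->GetDeviceCaps(&caps);

    if (caps.VertexShaderVersion >= D3DVS_VERSION(3, 0) &&
        caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0))
    {
        *vsProfile = "vs_3_0";   // GeForce 6 path
        *psProfile = "ps_3_0";
    }
    else
    {
        *vsProfile = "vs_2_0";   // Radeon 9500+/X800 and GeForce FX path
        *psProfile = "ps_2_0";
    }
}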

Going by Tom's, it looks like the GeForce 6 leads in some places, and the Radeon X800 in others.

It's mostly a tossup where performance is concerned.

They're priced identically, so, again, no issue there.

There are really three deciding factors:

1.) The GeForce 6 is *HUGE*: one AGP plus one PCI slot, and two Molex connectors. Major power drain.

2.) NVIDIA's drivers are usually a lot more stable than ATI's. It's become less of an issue in the last year or so, but there's still that "ugh" factor.

3.) The 16 vs. 32 vs. 24-bit precision thing (check Tom's for more info if you don't know what I'm talking about). This lets the GeForce 6 perform much better on the extremely high end, but it gives the Radeon a significant image quality advantage in mid-range and low-end games.

It's not like either card is "bad", though.

quote:

Remember that we're talking about a processor running at a fraction of the speed of our CPU;



Fewer clock cycles, true, but consider that there are more (almost 50% more) transistors on a GeForce 6 than on a P4.

As AMD has aptly pointed out time and time again: clock cycles != processing power.

quote:
Original post by Imperil
Plus, after this latest ATI marketing scandal (i.e. the filtering), it pretty much shows that all of the benchmarks were completely skewed towards ATI, and even then the 6800 came out on top in a little over half the tests, which is pretty bad.



That "scandal" is a joke. I''ve read far too many articles about it and have not seen one that adequately describes why I should care. Their filtering looks damn good and performs really well. So the reason I should be upset it is...what?

Stay Casual,

Ken
Drunken Hyena

quote:
Original post by DrunkenHyena
That "scandal" is a joke. I've read far too many articles about it and have not seen one that adequately describes why I should care. Their filtering looks damn good and performs really well. So the reason I should be upset it is...what?


The problem is that it makes you feel that ATI is somehow not being honest about what they say.
But you're right, this doesn't affect me. Their image quality and performance are much better than the corresponding nVidia cards.
Another important thing to note: the performance of the FX family (before the GeForce 6) is high in a lot of the games that are used as benchmarks, but in other games...
Take, for example, Deus Ex 2: on GameSpot the reviews said that it ran very slowly on the 5900 at 640x480! And there are a lot of similar examples.
The architecture of nVidia's cards is different from what DirectX assumes. The developers are trying to work around this in the drivers through optimizations and back doors, and anyone who follows the DirectX dev mailing list can understand what I'm saying. The FX series has been a disaster for developers at famous companies.
nVidia said (when they released the GeForce 6) that this is because they couldn't know the DX9 specifications while designing the 5800 (which is not the case with the GeForce 6), but some people said that they did know them but didn't agree with them, so they made their own architecture, which was incompatible with DX9.
On the other side, ATI's lead developer Jason Mitchell was one of the people who put together the DX9 specifications with Microsoft, so the architecture of the 9700 was completely compatible with DX9.
I don't know how the GeForce 6 is doing, but I think that the X800 will continue the success of ATI's cards, as no big company is willing to support VS 3.0 in their games at this time.

[edited by - mohamed adel on May 22, 2004 9:30:40 PM]

[edited by - mohamed adel on May 22, 2004 9:32:35 PM]

quote:

their image quality and performance are much better than the corresponding nVidia cards.



The only area where the X800 outperformed the GeForce 6 in *ANY* benchmark that I've seen is when trilinear filtering was applied, using ATI's "recommended" settings (i.e. disabling, on the NVIDIA cards, the exact same optimization that they were using themselves).

We'll see what happens when fair comparisons are made with both cards running the optimizations.

No, it's not an issue if their optimized filter DOES look just as good as "true" trilinear filtering, but it IS an issue if it doesn't look any better than NVIDIA's optimized version does. It's deceptive marketing.

quote:
Original post by Etnu

2.) NVIDIA's drivers are usually a lot more stable than ATI's. It's become less of an issue in the last year or so, but there's still that "ugh" factor.




That was in the past. If you are a 3D engine developer, you should know that the new ATI drivers are much more stable than nVidia's drivers. Just make one mistake, e.g. trying to use more vertices than the size of the vertex buffer, and the computer restarts with an nVidia card. Or try to use a non-DX8 vertex structure order with the fixed-function pipeline, and so on.

I've never had those issues with any nVidia driver I've used, and that's going back as far as the original GeForce 256. I've frequently overrun the vertex buffers while writing my programs, and it was never an issue.

Actually, the only time I ever recall my system rebooting on its own during development was when I put a breakpoint in the middle of a lock -> unlock operation.
