Buttacup

DX11 GFX Cards


I thought they were already out and kickin', but apparently not so much. I went shopping today and was pretty much laughed at. <==== I'm used to this, it happens fairly often. At any rate, are the Radeon cards worth buying? Has anyone heard whether NVidia is releasing theirs anytime soon? I want CUDA, so I can wait a month, but if it's six I can flip a few bills on something in the meantime. :P How is it that everyone is going DX11 when there are no cards??? The obvious answer to me would be that they're only coding in DX11, but DX11 is defaulting to a DX10/10.1 implementation?? If my questions make you laugh, please laugh after I've left the thread; I've been laughed at enough today..... m *v* m <==== it's a koala, damnit! With no ears.....

All the AMD Radeon HD 5XXX series cards implement D3D11 features.

I had to wait a while for my hardware after the launch, apparently because supply was low. However, I think the real reason was that the wholesalers and retailers desperately wanted to sell off their older hardware stock before the new cards.

I'm happy with my twin HD 5870s; they are insanely fast even individually. The 5970 is approximately equivalent to two 5870s, but I haven't seen a single one of those around here (Finland).

NVidia is in the process of launching their D3D11 parts, but I have no clue when they'll hit the streets; it will probably take a while.

It is possible to target older hardware with D3D11, but you don't get new features such as tessellation or dynamic class linkage.

There's conflicting information about NV's hardware:

- It could be May
- They might 'paper launch' in March with cards appearing slowly
- They might do a limited hard launch in March with hardware as rare as gold dust
- The GPU might have massive power and clock-speed problems due to TSMC's 40nm process not being great (unlike ATI/AMD, NV didn't run a 'large' die through the 40nm process first to learn about it)

NV themselves still aren't talking about clock speeds or power consumption, and the only demos have been tightly NV-controlled (cherry-picked games, short snippets of an engine).

I'm not saying DON'T wait for Fermi, but when it does appear it might not be all singing and dancing... we just don't know.
(Honestly, IMO, if it WERE all singing and dancing, then NV would be making a bigger deal out of it than they are...)

Quote:
Original post by Nik02
It is possible to target older hardware with D3D11, but you don't get new features such as tessellation or dynamic class linkage.

Yes, in fact you can target older hardware with D3D11 through the use of feature levels, and no, you won't get the new features on that downlevel hardware.
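For reference, here's a minimal sketch of how feature levels look in practice (this is the standard D3D11 API; error handling trimmed, and it obviously needs the Windows SDK and a real GPU to actually run):

```cpp
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // Ask for 11.0 first, then fall back through the 10.x feature
    // levels so the same code path also runs on older GPUs.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };

    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    granted;

    HRESULT hr = D3D11CreateDevice(
        nullptr,                         // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr, 0,                      // no software rasterizer, no flags
        requested, ARRAYSIZE(requested),
        D3D11_SDK_VERSION,
        &device, &granted, &context);

    if (SUCCEEDED(hr)) {
        if (granted < D3D_FEATURE_LEVEL_11_0) {
            // Downlevel GPU: tessellation and dynamic shader linkage
            // are unavailable, so skip those passes.
            std::printf("Running at feature level %#x\n", granted);
        }
        context->Release();
        device->Release();
    }
    return 0;
}
```

The runtime hands back the highest level in the array the hardware supports, so a DX10-class card still gets a working D3D11 device, just without the 11.0-only features.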

Personally, I'm waiting for nVidia's D3D11 implementation, but I have to agree with Phantom's thoughts. Either way, I don't want to jump on the bandwagon just yet, so it's worth waiting to see whether nVidia or AMD/ATI ends up with the best D3D11 GPU.

Quote:
Original post by Nik02
I'm happy with my twin HD 5870s; they are insanely fast even individually. The 5970 is approximately equivalent to two 5870s, but I haven't seen a single one of those around here (Finland).


AHA! o.O

Well, I went and bought an HD 5770 (econo class). I figure this way I can implement DX11 and get my engine to a functional state. Maybe I'll buy a Fermi card in the fall when they settle in for the long haul, yus? I really want CUDA for my scientific endeavors; a Tesla would be a dream come true.... :P Oh well, I'll make do with what I have. I'm still learning how to do the things I want to do, so really it's a no-waste scenario..... Maybe I'll buy a used Tesla one day. :D

I hope this makes Aion shine; I'ma try it out tonight after I install the card and Windows 7.

Quote:
Original post by swiftcoder
Quote:
Original post by Buttacup
I really want CUDA for my scientific endeavors
Is there a reason why OpenCL wouldn't work instead?


AHA! No..... I like NVidia's mindset, truthfully. I have no real opinion on either ATI or NVidia beyond their marketing and how others use their technologies, as I have not personally used anything but the most generic parts from each. NVidia just seems more in tune with me as a consumer: Fermi and Tesla, PRRRLoL @ NVidia.... :D As for OpenCL, I just assumed that a more generic interface to the GPGPU microarchitecture would mean giving up some power, hence CUDA, yus??? o-o

Damnit, I need a new power supply..... no new techy for me 'till tomorrow. :(
