DX may be dead before long...

41 comments, last by Ravyne 13 years, 1 month ago
http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

Comments....

Mine is:
I've been saying DX is a dog for years; all the DX nut jobs said 'no, it's fast, you're doing something wrong'... Bah, eat crow...

Anyway, I'm for whatever gets us the best IQ and FPS on the hardware gamers spend their hard-earned money on.

*sigh*

DX might be a 'dog', and it's a known fact that it has overhead, but it remains the best we've got right now. Would a lower-overhead API be nice? Sure.

The problem with that piece is twofold.

Firstly, with regard to DX11 performance, the small fact they fail to mention is that multi-threaded rendering isn't helping because neither AMD nor NV has drivers in a state to allow it. NV has something, but they basically say 'yeah, give us a core', which you then can't use for your game. AMD doesn't even have that. Everyone is seeing a performance loss when going MT with DX11 right now, so for an IHV to come out and say 'DX11 has too much overhead' when they don't have drivers that expose the feature properly... well... lulz? (Current sitrep: AMD/NV blame MS; MS blames AMD/NV.)
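For anyone who hasn't touched it, the DX11 multi-threading in question is the deferred-context model: record command lists on worker threads, replay them on the immediate context. A minimal sketch (assuming the device, immediate context, views and draw parameters are set up elsewhere; error handling omitted):

```cpp
#include <d3d11.h>

// Rough sketch of DX11 deferred-context recording. 'device', 'immediate',
// 'rtv', 'dsv' and 'indexCount' are assumed to exist already; in a real
// renderer the recording below would happen on a worker thread.
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediate,
                     ID3D11RenderTargetView* rtv, ID3D11DepthStencilView* dsv,
                     UINT indexCount)
{
    // Worker thread: record commands into a deferred context.
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    deferred->OMSetRenderTargets(1, &rtv, dsv);
    deferred->DrawIndexed(indexCount, 0, 0);

    // Close the recording into a command list.
    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList);

    // Main thread: replay the recorded commands on the immediate context.
    // Without proper driver support the runtime emulates this, which is
    // exactly why 'going MT' currently costs rather than gains performance.
    immediate->ExecuteCommandList(commandList, FALSE);

    commandList->Release();
    deferred->Release();
}
```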

Secondly, it goes on to say 'look what they can do on consoles, clearly CTM is the answer!' (paraphrased). The problem with this is that consoles are known hardware types, yet it takes YEARS for the best to be extracted from that KNOWN hardware. Graphics cards, meanwhile, tend to change every 18 to 24 months.

In fact, HD5 to HD6 for AMD, which was about 12 months if memory serves, went from VLIW5 to VLIW4 under the hood. Effectively it lost the ability to co-issue certain instructions, which means that if you'd spent a year hand-tuning code to that hardware, BAM! the work was wasted and you have to start again.

And that's just one generation; if we assume there are, at any given time, four generations of cards from both vendors in the wild, that's effectively eight pieces of target hardware with differing internal details, all of which you'd have to deal with yourself (and that's without including Intel in the mix, a problem which will only get worse when you factor in AMD's APUs).

A matter made worse if a CTM API can't be agreed between NV, AMD and Intel; suddenly you might end up supporting N APIs, each with its own issues, across X pieces of target hardware.

The great irony of that piece is that it complains about 'games looking the same', but in the situation they outline, cutting-edge games will just continue to license existing engines, making the problem WORSE. Or AAA games will drop PC support as 'not worth the bother', which isn't as unlikely as you might think given how hard it can be to convince the higher-ups to drop DX9 support in favor of DX11 for a game coming out in 24+ months.

Basically, all that will happen is that either a new API will appear to fill this need or MS will adapt DX to it.

Yes, we'd love an API with lower overhead; at the same time, however, you need something that can deal with the scope of hardware out there.

Until someone comes up with such an API, or DX adapts to this need, DX11 is the best we've got.

edit:
And a change of APIs is nothing new.

First it was custom APIs, then OpenGL held the crown, now DX holds it, so it's only natural that at some point this will change.

Aside from the issues Phantom pointed out, the article actually does a fairly good job discussing the pros and cons of APIs like DirectX vs going Close To Metal (CTM hereafter).

The issue is really one of scale. First, there's the trend that PC games have recently been a shrinking market. A hit PC game might sell N units, but a hit console game sells probably around 10x that. So the equation you end up with is that roughly 10% of your sales come from a platform that represents 8 "primary" GPU configurations (4 generations from 2 vendors, as the article suggests) -- not to mention various speed grades and other non-architecture differences that have to be accounted for -- versus 90% of your sales coming from a set of 2 platforms representing exactly 2 GPU configurations (I'm ignoring the Wii here, mostly because it doesn't compete in the same space as enthusiast PCs, PS3s and 360s) -- and with precise knowledge of their speed and capability. In other words, you get perhaps 9x the return for approximately 1/4 the effort -- that's a potential return-on-investment factor of 36x, for those who are following the math at home. Now consider that, even on the closed platform of consoles, going CTM isn't a no-brainer decision: many games still use the APIs for the light lifting, and non-bleeding-edge games eschew CTM entirely.
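Just to spell out that back-of-the-envelope math (my own illustration of the numbers above, nothing more):

```cpp
#include <cstdio>

int main()
{
    // Illustrative figures from the post above: ~90% of sales on consoles vs
    // ~10% on PC, and 2 console GPU configurations vs ~8 PC configurations.
    const double returnRatio = 0.90 / 0.10;  // ~9x the return on consoles
    const double effortRatio = 2.0 / 8.0;    // ~1/4 the hardware targets
    std::printf("ROI factor: ~%.0fx\n", returnRatio / effortRatio);  // ~36x
    return 0;
}
```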

This clearly indicates that simply exposing the metal on the PC isn't going to change the situation for the better. We have to regain some ROI in order to make the option more appealing to devs. We can't do much to increase the size of the PC gaming market directly (the "return" portion of the equation), so we have to attack the investment part -- and to do that, we have to reduce the number of platforms that we have to pour our efforts into. Our options there are to abstract some things away behind layers of software APIs (OpenGL, Direct3D, higher-level libraries or engines), or to reduce the hardware differences (or at least the differences in programming model, as x86 and its derivatives did long ago -- today an x86 is, internally, a RISC-like processor with hundreds of registers). There's really no win here for innovation, BTW; we're just buying a more flexible software-level API at the expense of imposing a stricter hardware-level API. This is, perhaps, the way to go, but it's important to at least acknowledge what it is, because going down that path has the potential to stifle hardware innovation in the future, just as OpenGL and Direct3D have stifled software innovation in some ways.

Programmability is probably the key to regaining some ground on the investment front. Today, APIs like OpenCL or CUDA are seen as somewhat imposing -- you have to fit the algorithm to their programming model -- but ultimately I think it will lead toward a loose hardware standardization of sorts, paving the way for the thinner APIs of the future. Larrabee, for all its shortcomings as hardware, will also prove to have been a very important R&D effort -- it instigated research on how to write software rasterization against very wide SIMD units and across a massively parallel system, and it also identified new, key instructions with applicability not only to graphics but to many parallel computations. I don't know if something like texture sampling will ever be as programmable as a modern shader (though perhaps as programmable as the fixed-function shaders of yore), at least efficiently; but I think we'd be in a pretty good place if texture sampling were the least programmable hardware on the GPU.
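As a concrete example of what "fitting the algorithm to the programming model" means in practice, here's a rough OpenCL 1.x sketch of something as trivial as adding two arrays -- the kernel itself is three lines, but the host side still has to go through the whole platform/context/queue/buffer dance (names are mine; error handling and cleanup omitted):

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The data-parallel kernel: one work-item per element.
static const char* kSource =
    "__kernel void vec_add(__global const float* a,\n"
    "                      __global const float* b,\n"
    "                      __global float* out)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = a[i] + b[i];\n"
    "}\n";

int main()
{
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), out(n, 0.0f);

    // Host-side boilerplate: platform, device, context, queue.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Build the kernel from source at runtime.
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "vec_add", nullptr);

    // Copy inputs into device buffers, allocate the output buffer.
    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), a.data(), nullptr);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), b.data(), nullptr);
    cl_mem bufOut = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                   n * sizeof(float), nullptr, nullptr);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &bufA);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &bufB);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &bufOut);

    // Launch n work-items and read the result back.
    size_t globalSize = n;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr,
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, bufOut, CL_TRUE, 0, n * sizeof(float),
                        out.data(), 0, nullptr, nullptr);

    std::printf("out[0] = %f\n", out[0]); // expect 3.0
    return 0;
}
```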

My take is that Direct3D and other APIs will evolve into thinner APIs, or perhaps be supplanted by a thinner API, but we will never be able to give up API abstractions entirely on the PC platform. The PC ecosystem is simply too diverse, and always will be, to support a CTM programming model. I think it's fairly likely that GPUs will eventually go the same route that x86 took -- meaning that the programming model GPUs expose will bear little resemblance to the hardware internals. In some sense this is already true, but current models expose too much detail of what goes on (which I realize is the opposite of what the article claims devs want) -- for example, with explicit caching and private/shared data areas. There's much work to be done by GPU vendors in developing a model which elides such details while still allowing those resources to be employed efficiently behind the scenes, and much work to be done by them, along with API vendors, to define APIs which help the hardware use its resources most efficiently without being so explicit about it.

throw table_exception("(╯°□°)╯︵ ┻━┻");

DX isn't needed as such anymore. At one point it was everything and the kitchen sink.

But as GPUs developed, everything moved to shader programming. The abstractions introduced by the old fixed-function pipeline have become redundant in favor of shaders for everything.

The old view of mega-frameworks that run on the full spectrum of hardware has also become mostly irrelevant. While technically possible, it has little market impact. OGL, in trying to abstract the platform completely, isn't doing developers many favors; an emulated pipeline doesn't help them, especially when those details are hidden by the driver.


A very viable model today is something like OGL-ES. Don't build frameworks, libraries and everything else; those are best left to users or engine developers. Be a ubiquitous, simple, hardware-friendly API aimed at the tasks GPUs actually perform.

This change in focus would be a good thing. Do one thing, but do it well, and think of the hardware. Developers will quickly adjust, engines will be able to do less data juggling, and there will be less bloat that isn't needed at that level. After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...

DX (and perhaps OGL) is in the same position as WinAPI. There are two companies that actually still need to know it. The rest build on top of third-party engines (not graphics-pipeline frameworks) that add actual value to the problems that need to be solved.

The more things change, the more they stay the same. Technology advances typically outstrip the ability to use them.

I began programming when hex/octal/binary was required, stuffing bytes into memory. Then good assemblers helped the effort. Then interpreters (e.g., Basic) were the rage; they provided a layer between the software and the hardware. Compilers sped things up even more, so programmers could take further advantage of improvements in technology, often targeting specific hardware improvements. As mentioned in the comments to that article, the game world was rampant with "You must have a super-duper-whammy sound card to run this game."

APIs (OpenGL, DirectX, etc.) appeared on the scene to help integrate entire applications, providing an even more generalized isolation layer between software and hardware. Although less so now than a few years ago, a common solution to problems is still "I updated my driver and now it works." However, one big benefit of those APIs was to force manufacturers to design to common interface standards. Without that impetus, shaders would be, if not a lost cause, in the same category as hardware-specific drivers.

Dropping the API? Pfui. AMD would certainly like it if the world reverted to "You must have an AMD super-duper-whammy graphics card to run this game." But, in agreement with phantom, I don't think for a second that'll happen tomorrow or the next day. DirectX and OpenGL will be around until something better comes along, adapting (as they have in the past) to take advantage of technology.

"Something better" will certainly come along, I'm sure. But, for the time being, I'll take my "chances" investing time in DirectX and OpenGL programming.



Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

After all, nobody programs DX anymore. They use Unreal, UnrealScript + custom shaders. Or Unity. Or Flash. Or ...


That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to. All that you're doing is moving up one level of abstraction, but the very same messy details still exist underneath it all (and still have to be handled by a programmer somewhere - and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

I believe the crux of the matter is that there has been no real innovation on the hardware front in almost 10 years: sometime around 2002/2003/2004 hardware suddenly stopped being crap (this is a generalisation; of course there's still plenty of crap hardware about), and since then it's just been a continuous ramp-up of performance. After all, even a geometry shader is something that can be handled by the CPU; where's all the new paradigm-shifting stuff? The last real break from the past we had was programmable shaders.

On account of this it's natural for some measure of uneasiness to settle in: the APIs are offering nothing really new, so why do we need them, etc.? This is gonna last until the next big jump forward, which might be real-time accelerated raytracing or might be something else; I'm not a prophet and don't know. But think of the current situation as being akin to the gigahertz arms race of yore in CPU land and it makes a little more sense.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.
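(For anyone who missed the reference, that's poking at the old 'pseudo-instancing' answer to D3D9-style instancing -- roughly the pattern below. A sketch with made-up names, assuming the mesh's buffers and shader are already bound and an extension loader such as GLEW is in use:)

```cpp
#include <GL/glew.h>

// Hypothetical per-instance data for this sketch.
struct Instance { float x, y, z, scale; };

// 'Pseudo-instancing': instead of one instanced draw call, stuff per-instance
// data into a constant vertex attribute and re-issue the draw for each copy.
void DrawPseudoInstanced(GLuint instanceAttrib, GLsizei indexCount,
                         const Instance* instances, int instanceCount)
{
    for (int i = 0; i < instanceCount; ++i)
    {
        // The vertex shader reads this 'constant' attribute like any other.
        glVertexAttrib4f(instanceAttrib,
                         instances[i].x, instances[i].y,
                         instances[i].z, instances[i].scale);
        // Assumes an index buffer with 'indexCount' unsigned shorts is bound.
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
    }
}
```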

I would pay twice the performance cost for what DirectX11 is offering... in fact, I used to... it was called DirectX9. And I still pay that, because there is still a market for it.

Go tinker with some open-source graphics drivers and see how much fun it is to support multiple chipsets. In the meantime, all your competitors will release several games that "just work", even if they're beneath your expectations.

Sounds silly, right? You are a magnificent engineer! Surely you can work up a nice abstraction layer that reduces the amount of work you repeat. Congratulations, you have invented DirectX. Except now YOU have to maintain it. YOU have to support the next Intel-vidi-ATI card. YOU have to reverse-engineer it, because the specs are confidential, multi-million-dollar trade secrets.
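To make that point concrete, the abstraction layer you'd end up inventing looks something like this (a sketch, all names hypothetical) -- and every backend behind it is yours to write and maintain:

```cpp
#include <cstdint>

// Hypothetical draw-call description your engine would submit.
struct DrawCall
{
    std::uint32_t vertexBuffer;
    std::uint32_t indexBuffer;
    std::uint32_t shader;
    std::uint32_t indexCount;
};

// The home-grown 'DirectX' you just invented: a hardware-agnostic interface...
class Renderer
{
public:
    virtual ~Renderer() {}
    virtual void BeginFrame() = 0;
    virtual void Submit(const DrawCall& dc) = 0;
    virtual void EndFrame() = 0;
};

// ...and one backend per chipset/generation you target close-to-metal, each of
// which is now YOUR maintenance and reverse-engineering burden.
// (Backends deliberately left unimplemented in this sketch.)
class VendorAGen1Renderer : public Renderer { /* ... */ };
class VendorBGen1Renderer : public Renderer { /* ... */ };
class NextYearsArchitectureRenderer : public Renderer { /* ... */ };
```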

The economics of this situation are so bad that this article is trash.

Games don't all look similar because of a tech hurdle... they look similar because people will buy that aesthetic.

That sounds a lot like the same fallacious argument that you frequently find made about managed languages: "nobody programs in C/C++ anymore, these days it's all Java/.NET/Ruby/insert-flavour-of-the-month-here/etc". Where it falls apart is that Unreal, Unity or whatever ultimately have to be written too, and these need an API to be written to.
Which is why I wrote, a sentence later, that only two companies in the world still need access to that low level.

and believe me that we are truly f--ked if we ever produce a generation of programmers that knows nothing about the low-level stuff. Who is gonna write the device drivers of tomorrow? That's what I'd like to know.)

And that is the same fallacious argument made by low-level people. Without ever-higher levels of abstraction, we'd still be weaving memory, rotating drums and punching cards.

The same was said about functions (see "Goto Considered Harmful", a blasphemous article of its time whose true meaning has been completely forgotten and is misunderstood each time it's quoted). They were bloated, abstract, inefficient, limiting. The same was said about everything else. But for every guru, there are a million people earning their bread without such skills.

Where are the low-level people who program by encoding FSMs? For the rare cases when they're needed, they're still around. But most developers today don't even know what an FSM is.

To low-level people, DX is a pain and inefficient (as is OGL or any other API). To high-level people, DX (et al.) is too low-level.

That was the crux of my argument: most of the functionality DX provides, once used by the majority of developers, isn't needed at that level anymore. It's needed either higher or lower. It's more about fragmentation, like everywhere else. Rather than having one huge monolithic framework, one uses small, one-thing-done-well libraries.

To low-level people, DX is a pain and inefficient (as is OGL or any other API). To high-level people, DX (et al.) is too low-level.

So true.

I am a reformed low-level person. At this point, I consider my previous self foolishly arrogant and woefully mistaken about the important things in game development.

I was:
arrogant, because I insisted that I could do it better.
mistaken, because I thought that small edge was worth so much of my time.

When was the last time anyone looked at Crysis and said, "If these graphics were 5% better, this game would be way more fun"?

Games all look similar because most games are using the same damn engine, or a select few... e.g. Unreal 3, Source, etc...

And the other problem is that most programmers, or whoever is coding these shaders, are just copying and pasting crap they found on the net or whatever other engines are using.

