When will DX11 become obsolete?


Good discussion from everyone. I asked a similar question about a year or two ago on another forum. That was "how many gamers don't have DX11 compliant video cards?". The responses were mostly about how DX9 is still relevant.....


You mean to tell me that, in an industry where whether a game runs at 1080p/60fps is considered "news of the day", and with DX12 promising extra performance over DX11, DX11 won't become obsolete when the XB1 production cycle ends?

Note that this is only true on consoles with CPUs that are 4x slower and GPUs that are 8x slower than a gaming PC.
Game performance on Windows PCs is an entirely different thing and not often news.

And are you telling me that there won't be a new API to take advantage of P100's architecture in the future (a few years down the road)?

Probably not. Dx9 and Dx11 both lived through some massive architectural overhauls by the GPU vendors, whose new features were exposed to developers with SDK updates or 3rd-party extensions.
Microsoft actually announced at one point that D3D11 was "mature" and would be the final iteration of the API, standing for all time as the way to program any GPU... because it sufficiently abstracts away architectural details.

You will still likely be able to use DX11 by then, but it will be considered... hmm, what's the word I'm looking for? Oh yes, "obsolete".

There are still games being updated from D3D9 to D3D11 right now... So by that history, for years after D3D13 comes out, we'll still see games updating from D3D11 to D3D12, or even maintaining their D3D11 renderer.

Unless something changes by then, I would still recommend that newcomers start by learning D3D11 inside out before they learn D3D12 -- as above, these aren't the same kind of library competing with or replacing one another.
D12 makes D11 look like an off-the-shelf graphics engine in comparison... while D11 makes D12 look like a graphics driver framework.
So a hypothetical D13 would still need something like the D3D11On12 library to simplify the abstractions present to newcomers.

Now on the other hand, D3D10 is obsolete. It has the same level of abstraction as D11, the same hardware support and the same OS support, so there is literally no reason to use it any more.
If you're making an MMO with mass-market appeal, then there's actually a solid business case for supporting D3D9 (see War Thunder - dropping WinXP would cost them something like 20% of their revenue).

Microsoft is pushing Win10 hard, which uses DX12. XB1 uses DX11, so it will remain relevant until its production cycle ends, which is estimated to be ~7 years away, at which point their newest generation of consoles/Windows might use DX12 or something else.

Actually IIRC XB1 has had D3D12 available for it for a while now. I think Star Wars Battlefront was the first game to use it on XB1.

As far as DX11 is concerned, I think there will always be a need for a simpler API, so DX11 will be safe for a while to come... as well as still being updated, as Matias mentioned.

-potential energy is easily made kinetic-

I wish they would add bindless texturing API to d3d11, as well as the "fast geometry shader" feature for cube map rendering. I think they could have done more for d3d11 to reduce draw call overhead without having to drop to the low d3d12 level.

-----Quat

I wish they would add bindless texturing API to d3d11, as well as the "fast geometry shader" feature for cube map rendering. I think they could have done more for d3d11 to reduce draw call overhead without having to drop to the low d3d12 level.

Those extensions are not portable across every GPU vendor, so they're not the best candidates for adoption by the D3D11 core. If multiple vendors support a feature (but not all), then it often gets refined and brought in as a feature with multiple tiers - such as sparse/tiled resources.
You can use tiled texture resources in D3D11.2 to implement a variation of bindless texturing yourself, on compatible GPUs.
You can access NVidia's "fast GS" extensions on D3D11 via their extension API (NVAPI).
You can access AMD's multi-draw indirect extension to dramatically reduce draw count on D3D11 via their extension API (AGS).
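For what it's worth, checking whether that D3D11.2 tiled-resources path is even available is cheap. Here's a minimal C++ sketch (assuming you already have an ID3D11Device; the bindless-style scheme you'd build on top of it is left out):

```cpp
// Sketch: query D3D11.2 tiled-resources support before opting into that codepath.
#include <d3d11_2.h>

bool SupportsTiledResources(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 options1 = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                           &options1, sizeof(options1))))
        return false;

    // NOT_SUPPORTED means fall back to conventional texture binding.
    return options1.TiledResourcesTier != D3D11_TILED_RESOURCES_NOT_SUPPORTED;
}
```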

Actually most of the D3D12 feature set is available to D3D11.3 developers.

Likewise, a hell of a lot of the D3D10 feature set is available to D3D9c developers via extensions.

There is the world where companies are promoting new products to sell and then there's the world where customers are resisting change.

DX9 hasn't had an update in 9 years. You are not going to buy new hardware hoping it will get more out of DX9 (unless you are feeling nostalgic and didn't have the latest card back in the day).

Multi-Cores, Pascal/Polaris and DX12 means companies are in the beginning-stages of moving on from DX11. Will DX11 still be supported and running? Of course. But it will be a niche market ~2021 onwards (like DX9 is today).

I don't know why I have to keep repeating myself. Down-votes only tell me that somebody is panicking for no reason. Most indie titles might not need the kind of performance DX12 is offering, but that doesn't mean you should run away from it (nor was the question limited to indie/niche markets).

And finally, this is something you can postpone for another 4 good years, at least. So stop worrying.

I don't know why I have to keep repeating myself.

It's because you're wrong and folks here have already explained why.

Down-votes only tell me that somebody is panicking for no reason.

I am as panicked as I am worried that motorcycles will take over from cars because they're faster and consume less fuel (conveniently ignoring that cars can transport more people, are more comfortable, can withstand adverse weather conditions better, and have a higher chance of survival in the event of an accident).

Most indie titles might not need the kind of performance DX12 is offering, but that doesn't mean you should run away from it (nor was the question limited to indie/niche markets).

Indies don't get scared away because of the higher performance. They get scared away because of the high maintenance cost.
Not only is it more complex to code, you have to keep one codepath for each vendor (that means three), because the optimization strategies are different. Not to mention you have to update your codepaths as new hardware is released, and if you did something illegal by spec that just happened to work everywhere, you may find yourself fixing your code when it breaks in four years because it's suddenly incompatible with just-released hardware.

Edit: Just an example:

  1. On NVIDIA you should place CBVs, SRVs and UAVs in the root signature much more often. Also avoid interleaving compute and graphics workloads as much as possible.
  2. On AMD you should do exactly the opposite. Avoid using the root signature (except for a few parameters, especially the ones that change every draw). Also interleave compute & graphics as much as possible.
  3. On Intel (and AMD APUs) you should follow AMD's rules, but also avoid staging buffers, because host-only memory lives in system RAM. The memory upload strategies are different.

Have fun dealing with all three without messing up. Also keep up with new hardware: these recommendations may change in the future.
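To make that concrete, here's a rough C++ sketch of how one of those per-vendor codepaths usually starts: branch on the DXGI adapter's VendorId and pick a strategy. The GpuStrategy enum and the comments are just illustrative assumptions based on the recommendations above, not anything the API mandates:

```cpp
// Sketch: choose a vendor-specific D3D12 optimization strategy from the adapter description.
#include <dxgi.h>

enum class GpuStrategy { NvidiaLike, AmdLike, IntelLike };

GpuStrategy PickStrategy(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId)
    {
    case 0x10DE: return GpuStrategy::NvidiaLike; // NVIDIA: lean on root CBVs/SRVs/UAVs,
                                                 // keep compute and graphics work apart
    case 0x1002: return GpuStrategy::AmdLike;    // AMD: small root signature,
                                                 // interleave compute & graphics
    case 0x8086: return GpuStrategy::IntelLike;  // Intel/APUs: AMD-like rules, but mind that
                                                 // "upload" and "default" heaps share system RAM
    default:     return GpuStrategy::AmdLike;    // conservative fallback for unknown vendors
    }
}
```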

D3D11 is safe for the foreseeable future -- there might come a time when the GPU vendors lose interest in developing D3D11 drivers for new hardware, but their old hardware will continue to work; some time later, they might lose interest in maintaining D3D11 support in their active driver base, but the old drivers will continue to work; some time later, that hardware will effectively age out of the mainstream computer ecosystem; finally, some time after that, Microsoft will lose interest in continuing to support that driver model and the OS components that support it -- that is when D3D11 goes away. Market forces will dictate when the APIs become obsolete for new development some time before then. That's about all you can say for certain.

D3D11 and 12 are really two different things -- D3D11 is a model in which you opt into leaving some performance on the table in exchange for having the driver take over the responsibility of keeping the GPU moving along, while D3D12 is a model in which all the performance is yours for the taking, but also where you bear all the responsibility of keeping the GPU moving along as quickly as possible without tripping over itself.

In the D3D11 model, the driver can't know about every piece of software, so it usually has to take a conservative approach to keeping the GPU going -- indeed, some of the vendor extensions are effectively just hints a developer puts into their code that signal to the driver that they've thought it through already, and free the driver to be less conservative. Likewise, for popular games, drivers will go so far as having particular code-paths and optimized shader code written by the hardware vendor -- all to have the best performance possible to make their products look good and attract more buyers.

In the D3D12 model the driver does pretty much exactly what the developer says, even if it's a bad idea for performance or flat-out crash-inducing -- in this world the driver becomes very thin because its only job is to translate between the vocabulary of D3D12 and the native tongue of the GPU hardware -- it does a little more than that in truth, but it doesn't take broad responsibility for whether your app performs correctly or well, and it certainly doesn't jump through hoops optimizing your game for you. D3D12 also explicitly allows you to speak threading to the GPU, which D3D11 only speaks very coarsely -- one of the biggest performance drawbacks of D3D11 is that it has to conservatively put nearly everything on a single thread, because it lacks the information to fully understand dependencies and therefore can't re-order operations in significant ways.
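To illustrate the threading point, here's a minimal, heavily abbreviated C++ sketch of the D3D12 pattern: each worker thread records into its own command allocator/list, and the main thread submits them in one go. Device, queue and PSO creation are assumed to have happened elsewhere, and error handling and the actual draw recording are omitted:

```cpp
// Sketch: multi-threaded command-list recording, single submission on the main thread.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso, IID_PPV_ARGS(&lists[i]));

        // Each thread records its own slice of the frame's work into its own list.
        workers.emplace_back([&, i]
        {
            // ... set root signature, descriptor heaps, issue draws for this slice ...
            lists[i]->Close(); // recording done; the list is now ready for submission
        });
    }
    for (auto& t : workers) t.join();

    // One submission on the main thread; the driver does not reorder or "fix up" anything.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```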

I suspect what will happen in the market is that the number of people who need to write their own renderer (rather than buying one, or using an existing game engine like Unreal or Unity) but are incapable of grasping D3D12 is going to dry up pretty rapidly -- there simply won't be anyone who needs to develop against the D3D11 API directly anymore; you'll be a D3D12 developer, or an Unreal 4 developer, or a Unity developer, etc.

I also suspect that, for all the doomsayers, D3D12 really isn't so much hard to *use* as it is hard to *learn*. Once good learning materials are available and best practices are disseminated through word-of-mouth and maybe some libraries, is D3D12 really harder to understand than memorizing all kinds of arcane D3D11 rituals invoked to ensure the driver doesn't put performance in the toilet by being even more over-conservative than usual? Will it be easier or harder to understand why your performance sucks in D3D12 than in D3D11? I suspect those answers will prove to fall in favor of D3D12 in the end, but we're not there yet -- the learning curve is steep, the boldest and most experienced people have only just mapped their own route up that hillside, and no one has put forth any plans to put in stairs yet.

throw table_exception("(╯°□°)╯︵ ┻━┻");

