Do game developers still have any reason to support Direct3D 10 cards?

I've lately been a little surprised that many new games (AAA and indie) now require a Direct3D 11 card. Some examples (these are just the games I first found or thought of):

Of course, that's 7 years of graphics cards right there. But why would game developers decide not to support Direct3D 10 cards anymore? Is it because most D3D10 cards don't have enough horsepower? Is it because it's not worth spending time just to support 10-year-old GPUs? I do know you'd need to replace some things to keep D3D10 support, but still.

 

Not supporting Direct3D 9 cards is pretty understandable. I mean, come on, just take a look at the 10Level9 ID3D11Device Methods page on MSDN, for example. Supporting D3D9 seems like a pain in the A to me.

 

Of course, don't get me wrong, I'm not mad at anyone. I'm also a game developer in my spare time. I'm just wondering what reasons they have for doing that. What would you do (assuming you're a game developer)? What reasons do you think they'd have to do such a thing? What are your thoughts about all this? Do game developers still have any reason to support Direct3D 10 cards?

Edited by LHLaurini

It's probably some combination of:

 

- The Xbox One and PS4 use D3D11 hardware, so for a cross platform game it makes sense to use the same feature set on all platforms.

 

- It would add more work to target D3D10 cards (which are missing some handy features that D3D11 has, especially if they are 10.0 instead of 10.1). Compute shader limitations spring to mind as an obvious example.

 

- D3D10 cards are older and therefore slower; for some games they wouldn't be fast enough even if they were supported.

 

- According to http://store.steampowered.com/hwsurvey only about 15% of Steam users are limited to a DX10-capable PC, while 85% have DX11 or better. Looking at the list of DX10 GPUs people actually have is also informative.

Since in Ogre 2.1 we aim to support both DX10 & DX11 hardware, I've gotta say DX10 hardware's DirectCompute limitations are currently a big PITA for me.

AMD's Radeon HD 2000-4000 hardware never even got a driver upgrade to support DirectCompute. So even if you limit yourself to the structured buffers available to DX10 hardware, these cards won't run those compute shaders at all (despite the hardware being completely capable of doing so). I don't know about Intel DX10 GPUs, but I suspect it's the same deal.

 

AFAIK only NVIDIA DX10 GPUs got the upgrade.
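
As an illustration, here's a minimal sketch (assuming an already-created ID3D11Device; the function name is illustrative, not from Ogre) of how an engine can detect at runtime whether a feature-level-10.x device actually exposes cs_4_x compute support. On the Radeon HD 2000-4000 parts described above, this check reportedly comes back negative.

#include <d3d11.h>

// Returns true if this device can run cs_4_x compute shaders with
// raw/structured buffers (the only compute path available to DX10-class GPUs).
bool SupportsCs4x(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS, &opts, sizeof(opts));
    // On feature level 11_0 and up this is always TRUE; on feature level
    // 10_0/10_1 it depends entirely on the driver.
    return SUCCEEDED(hr) &&
           opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x;
}

On feature level 11_0 this capability is guaranteed, so an engine only needs the check when it accepts feature-level-10 devices.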

Edited by Matias Goldberg

I'd say D3D9 lasted a good while because of the D3D9/GL2-level hardware in consoles. D3D11.x will do the same, for the same reason.

 

D3D10 was unfortunate enough not to match any hardware present in the console generation of its time.

As has been said, D3D10 kind of falls into a donut-hole of uselessness -- you want to support D3D11 for sure, you probably want to support new APIs like Vulkan and D3D12 for improved performance, but if your target market demands even more spartan hardware support, you probably want to go all the way back to D3D9.

In some parts of the world there are still a lot of Windows XP machines, which don't support D3D10. 10's only claims to fame are that it was the best Windows Vista could do, and that its hardware class eventually got compute shader support through D3D11, but IIRC that support was pretty limited. Between precious little advantage over 9 and an even smaller OS market share, there's just no point.


For me, I'm looking at D3D12 for Windows UWP-packaged versions, and Mantle/Metal for versions packaged for the traditional desktop environments on Windows/Linux/OSX, with probably D3D10-ish-level OpenGL as a fallback. I'm not even sure whether supporting D3D12 in the desktop-style version, or D3D11 at all, is worth adding them to the support and testing matrix -- I'm inclined to leave them out.

Are compute shaders supported on DX10 cards?

Yes, but... with a massive amount of restrictions, which make them about as powerful as pixel shaders running on a quad.

Real compute shader support comes with feature-level-11.

 

So yep, if newer games/engines implement a lot of their game/renderer using compute, then that code will run fine on PS4/Xbone/D3D11-level PCs... but won't run on D3D10-level PCs. The dev then has to ask themselves: "Is writing a whole new renderer just to support D3D10-level GPUs worth the extra market share that we'll get?", and I'm guessing a lot of the time, the answer is "no".

Thank you very much to everyone who answered.
 

As has been said, D3D10 kind of falls into a donut-hole of uselessness

Yeah, that's the problem. D3D10 didn't introduce many useful features, and D3D11 came only three years later.

 

Honestly I don't recall there being much developer support for Direct3D 10 when it was NEW, much less after Direct3D 11 came out.

When it was new, there were some games that used it for fancier features, like BioShock (these are the settings I use; BioShock is pretty lightweight):

[attachment=30635:2016-02-14_00001.jpg]

 

Then D3D11 came out, but devs still wanted to support D3D9, so they required D3D10 only for some features. Here's Saints Row: The Third (these are not my settings):

[attachment=30636:2016-02-14_00001 (2).jpg]

 

Are compute shaders supported on DX10 cards?

I'm pretty sure compute shader support is optional on them, which really sucks.

The D3D9-vs-D3D10/11 situation and the D3D10-vs-D3D11 situation are not exactly the same.
 
Supporting multiple DX versions means we need to aim for the lowest common denominator. This cripples performance, because optimizations get ruled out by the oldest path (unless we spend a disproportionate amount of resources maintaining two completely different code paths).
 
This means a game designed around D3D11 will be significantly more efficient than a game that aims to run on D3D11, 10 & 9.
 
The D3D9-to-D3D10 shift was a very peculiar one. It wasn't just performance: D3D10 introduced a few improvements that were very handy and easy to use for adding "extras" (note: these extras could have easily been backported to D3D9, but MS just wasn't interested).
For example: Z-buffer access, a Z-out semantic, sending to multiple RTTs via the geometry shader, access to individual MSAA samples, separate alpha blending, dynamic indexing in the pixel shader, real dynamic branching in the pixel shader.
All stuff that made certain postprocessing FX much easier. Therefore it was possible to offer DX10-only effects, like the ones you see in BioShock, that could be turned on and off, just to "spice up" the experience when you had a recent GPU running on Vista.
 
But moving from D3D10 to D3D11... not many features were introduced, but those few features... oh boy, they were critical. Take Assassin's Creed Unity, for example: its frustum culling and dispatching of draws live in a compute shader! We're not talking about an effect you can turn on and off. We're talking about the bare bones of its rendering infrastructure depending on a feature unavailable to D3D10 cards. Supporting D3D10 cards might well mean rewriting 70% or more of the entire rendering engine, which would also likely affect the asset pipeline and the map layout.
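
As a rough illustration of that kind of GPU-driven pipeline (a hedged sketch, not Ubisoft's actual code; the resource names are hypothetical), on feature level 11_0 a compute shader can cull objects and write the draw arguments that an indirect draw then consumes, with no CPU round trip:

#include <d3d11.h>

// Create the 5-UINT argument buffer that DrawIndexedInstancedIndirect reads:
// IndexCountPerInstance, InstanceCount, StartIndexLocation,
// BaseVertexLocation, StartInstanceLocation.
void CreateIndirectArgsBuffer(ID3D11Device* device, ID3D11Buffer** argsBuffer)
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = 5 * sizeof(UINT);
    bd.Usage = D3D11_USAGE_DEFAULT;
    bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;
    device->CreateBuffer(&bd, nullptr, argsBuffer);
}

// Cull on the GPU, then draw whatever survived -- no CPU readback involved.
void CullAndDraw(ID3D11DeviceContext* ctx,
                 ID3D11ComputeShader* cullingCS,      // hypothetical culling shader
                 ID3D11UnorderedAccessView* argsUAV,  // UAV over the args buffer
                 ID3D11Buffer* argsBuffer,
                 UINT objectCount)
{
    // Compute pass: test each object against the frustum and write the
    // surviving instance count / draw arguments into the args buffer.
    ctx->CSSetShader(cullingCS, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &argsUAV, nullptr);
    ctx->Dispatch((objectCount + 63) / 64, 1, 1);

    // Unbind the UAV so the buffer can be consumed as indirect arguments.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

    // Draw pass: the GPU-written arguments drive the draw call.
    ctx->DrawIndexedInstancedIndirect(argsBuffer, 0);
}

Nothing equivalent exists at feature level 10_x, which is why stripping this out means restructuring the renderer rather than just disabling an effect.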

 

There are only a few D3D11-only things that can be used to spice up the graphics while still allowing them to be turned off for D3D10; tessellation comes to mind.

The D3D9 to D3D10 shift was a very peculiar one. It wasn't just performance, D3D10 introduced a few improvements that were very handy and easy to support for adding "extras" (note: these extras could've been easily backported to D3D9, but MS just wasn't interested).
For example:
1) Z Buffer access, 2) Z Out semantic, 3) sending to multiple RTTs via the geometry shader, 4) access to individual MSAA samples, 5) separate alpha blending, 6) dynamic indexing in the pixel shader, 7) real dynamic branching in the pixel shader.

You're misremembering how groundbreaking D3D10 was. :)
Uptake of D3D10 was excruciatingly slow, as it didn't really introduce any killer new feature (geometry shaders did not live up to the hype), and it came with the lack of XP compatibility, which was a big deal at the time. In my experience, a lot of people seem to have stuck with D3D9 until D3D11 came out. Most of the stuff you mention is accessible from the D3D9 API:
1) D3D9-era GPUs had multiple different vendor-specific extensions for this, which was painful. D3D10-era GPUs all support the "INTZ" extension (see the sketch at the end of this post). Side note: you see a LOT of games that use the D3D9 API but list the GeForce 8800 as their min spec, which is the first NV D3D10 GPU -- my guess is that being able to reliably read depth values is kinda important.
2/5/7) Are in the D3D9 core API.
3) Is a D3D10 API feature, but performance wasn't that great...
4) Is a D3D10.1 API feature (and needs a compatible GPU), but wasn't put to great use until proper compute shaders appeared in D3D11. :)
6) Is emulatable in D3D9, but requires you to use a tbuffer instead of a cbuffer (as cbuffers weren't buffers in D3D9).

I've actually been toying with the idea of doing a D3D9/WinXP build of my current game, as a 10-years-too-late argument against all the D3D10 hype of the time. :D
You can actually do a damn lot with that API, albeit a lot less efficiently than the D3D11 API does it! I'd be able to implement a lot of the D3D11-era techniques... but with quite significant RAM and shader-efficiency overheads. Still, it would be fun to see all those modern techniques running badly on Windows XP (or in an XP app emulated on Wine!).
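
A minimal sketch of the "INTZ" trick mentioned in point 1 (hedged: it assumes an already-created IDirect3DDevice9, and the names are illustrative): create the depth buffer as a texture with the INTZ FOURCC format, render with it bound as the depth-stencil surface, then bind it as an ordinary texture to read depth in a later pass.

#include <d3d9.h>

// "INTZ" is a FOURCC format exposed by D3D10-class GPUs through the D3D9 API.
const D3DFORMAT FOURCC_INTZ = (D3DFORMAT)MAKEFOURCC('I', 'N', 'T', 'Z');

bool CreateReadableDepth(IDirect3DDevice9* device, UINT width, UINT height,
                         IDirect3DTexture9** depthTex,
                         IDirect3DSurface9** depthSurf)
{
    // Create the depth buffer as a texture so it can later be sampled.
    if (FAILED(device->CreateTexture(width, height, 1, D3DUSAGE_DEPTHSTENCIL,
                                     FOURCC_INTZ, D3DPOOL_DEFAULT,
                                     depthTex, nullptr)))
        return false;  // driver doesn't expose the hack

    // Surface 0 is what gets bound via SetDepthStencilSurface while rendering.
    (*depthTex)->GetSurfaceLevel(0, depthSurf);
    return true;
}

// Usage: device->SetDepthStencilSurface(*depthSurf) while laying down the
// scene, then device->SetTexture(samplerIndex, *depthTex) in a later pass to
// read the depth values in the pixel shader.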

Edited by Hodgman

 

You're misremembering how groundbreaking D3D10 was. Most of the stuff you mention is accessible from the D3D9 API.

 

Actually, I did. I was going by those screenshots of the Saints Row & BioShock DX10-only features he posted:

  • Reflections: I guess they used the GS to write to multiple RTTs at once; otherwise it doesn't make sense (from a technical point of view) for this to be DX10-only. While geometry shaders didn't live up to their hype at all, that doesn't mean people didn't try; they probably didn't gain performance. But porting it to DX9 would mean creating two code paths (one single-pass using a GS, another multi-pass without it). Note, however, that hybrids did actually improve performance: a hybrid would use instancing to multiply the geometry and a geometry shader to output to multiple RTTs while still being multi-pass. Instead of writing to all 6 faces in one pass, write to 3 faces in 2 passes, or 2 faces in 3 passes. This kind of flexibility allowed finding a sweet spot for performance. (See the sketch at the end of this post.)
  • Ambient occlusion: Clearly they're doing a dynamic loop in the pixel shader, which would explain why they'd need DX10. Or maybe they wanted Z-buffer access and didn't bother with the INTZ hack.
  • DirectX 10 detail surfaces: I suspect they mean multiple diffuse/normal textures overlaid on top of each other, taking advantage of texture arrays. Or maybe they enabled a geometry shader somewhere for extra effects, like on a wall or something.

All of these features can definitely be done on DX9. But being on the lazy side, you have to admit they're much easier to implement on DX10 (or, like you said, doing it in DX9 would require more RAM or some other kind of overhead).

Like you said, DX10 wasn't that groundbreaking; but the features that were added (and could have easily been backported to DX9 but weren't, save for vendor hacks like INTZ) allowed games to include "turn on / turn off" kinds of effects when running in DX10 mode.
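
For the reflections bullet above, here's a hedged sketch (names are illustrative, not taken from any of the games mentioned) of the D3D10/11-style setup that makes single-pass cube map rendering possible: all six faces are exposed through one render target view, and a geometry shader routes each triangle to a face via SV_RenderTargetArrayIndex.

#include <d3d11.h>

void CreateCubeRenderTarget(ID3D11Device* device, UINT size,
                            ID3D11Texture2D** cubeTex,
                            ID3D11RenderTargetView** cubeRTV)
{
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = td.Height = size;
    td.MipLevels = 1;
    td.ArraySize = 6;                       // one slice per cube face
    td.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_DEFAULT;
    td.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    td.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
    device->CreateTexture2D(&td, nullptr, cubeTex);

    // One RTV spanning all 6 array slices; the GS picks the face per primitive
    // by writing SV_RenderTargetArrayIndex.
    D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
    rtvd.Format = td.Format;
    rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
    rtvd.Texture2DArray.MipSlice = 0;
    rtvd.Texture2DArray.FirstArraySlice = 0;
    rtvd.Texture2DArray.ArraySize = 6;
    device->CreateRenderTargetView(*cubeTex, &rtvd, cubeRTV);
}

The hybrid approach described above would simply create the RTV over fewer slices per pass (e.g. 3 faces at a time) and render in two or three passes instead of one.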

Hi!

 

An interesting thread.

I use the Direct3D 11 API, but with a minimum feature level of 9_3.

One comment: feature levels 9_1 through 9_3 are the maximum available on some tablets, so for such devices it is a good idea to be backward compatible.
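
A minimal sketch of that approach (hedged: the function name is illustrative): ask the D3D11 runtime for the best feature level available, but accept anything down to 9_3, then branch on the level actually granted.

#include <d3d11.h>

// Create a device on the best available feature level, accepting down to 9_3.
bool CreateDeviceDownTo93(ID3D11Device** device,
                          ID3D11DeviceContext** context,
                          D3D_FEATURE_LEVEL* obtained)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3
    };
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        requested, sizeof(requested) / sizeof(requested[0]),
        D3D11_SDK_VERSION, device, obtained, context);
    // "obtained" reports the feature level actually granted, so features like
    // compute or tessellation can be enabled or disabled per machine.
    return SUCCEEDED(hr);
}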
