Directx 11, 11.1, 11.2 Or Directx 12

6 replies to this topic

#1   Members   -  Reputation: 130


Posted 22 July 2016 - 04:55 PM



I recently moved my DirectX context creation into a DLL, and in the process I also wanted to upgrade my DirectX version. But I have some questions regarding the versions and how to use them properly.


1. Which version should I use for development, 11.3 (I need conservative rasterization) or 12, in terms of driver robustness and cleanness of the API? And does 11.3 hardware support 12 as well?

2. If I want to use DirectX 11.3 and it is not supported, so that I partially fall back to DirectX 11.2, how do I usefully call my DirectX functions? I mean, I end up with either an ID3D11Device2 or an ID3D11Device3.

3. If I use DirectX 11.3, can I combine/mix calls like PSSetConstantBuffers and PSSetConstantBuffers1?


#2   Members   -  Reputation: 4338


Posted 23 July 2016 - 04:55 AM

Hi. Can you explain a bit more what you want to achieve?
Are you developing your own engine on D3D9 or 10 right now, and do you want to upgrade/refactor?


#3   Moderators   -  Reputation: 17739


Posted 23 July 2016 - 03:36 PM


So there are two separate concepts here that you need to be aware of: the API, and the supported feature set. The API determines the set of possible D3D interfaces you can use, and the functions on those interfaces. Which API you can use is primarily dictated by the version of Windows that your program is running on, but it can also be dependent on the driver. The feature set tells you which functionality is actually supported by the GPU and its driver. In general, the API version dictates the maximum feature set that can be available to your app. So if you use D3D11.3 instead of D3D11.0, there are more functions and therefore more potential functionality available to you. However, using a newer API doesn't guarantee that the functionality will actually be supported by the hardware. As an example, take GPUs that run on Nvidia's Kepler architecture: their drivers support D3D12 if you run on Windows 10, but if you query the feature level it will report as FEATURE_LEVEL_11_0. This means that you can't use features like conservative rasterization, even though the API exposes them.


So to answer your questions in order:


1. You should probably choose your minimum API based on the OS support. If you're okay with Windows 10 only, then you can just target D3D11.3 or D3D12 and that's fine. If you want to run on Windows 7, then you'll need to support D3D11.0 as your minimum. However you can still support different rendering paths by querying the supported API and feature set at runtime. Either way you'll probably need fallback paths if you want to use new functionality like conservative rasterization, because the API doesn't guarantee that the functionality is supported. You need to query for it at runtime to ensure that your GPU can do it. This is true even in D3D12.
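As a rough sketch of that runtime query (assuming you already have an ID3D11Device* from D3D11CreateDevice, and that you're targeting the 11.3 feature options; this is a hypothetical helper, not from the thread):

```cpp
// Sketch: ask the driver at runtime whether conservative rasterization is
// actually supported, rather than assuming it from the API version.
#include <d3d11_3.h>

bool SupportsConservativeRaster(ID3D11Device* device)
{
    // D3D11_FEATURE_D3D11_OPTIONS2 carries the 11.3-era capability bits.
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts2 = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D11_OPTIONS2, &opts2, sizeof(opts2));

    // Tier 0 (NOT_SUPPORTED) means the feature is unavailable even if the
    // 11.3 interfaces themselves can be obtained.
    return SUCCEEDED(hr) &&
           opts2.ConservativeRasterizationTier !=
               D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED;
}
```

If this returns false, you'd take your fallback rendering path instead.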


Regarding 11.3 vs 12: D3D12 is very, very different from D3D11, and generally much harder to use even for relatively simple tasks. I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs. And to answer your follow-up question "does 11.3 hardware support 12 as well": there really isn't any such thing as "11.3 hardware". Like I mentioned earlier, 11.3 is just an API, not a mandated feature set. So you can use D3D11.3 to target hardware with FEATURE_LEVEL_11_0; you'll just get runtime failures if you try to use functionality that's not supported.


2. You can QueryInterface at runtime to get one interface version from another. You can either do it in advance and store separate pointers for each version, or you can call it as-needed.
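A minimal sketch of the as-needed approach (assuming `device` is an existing ID3D11Device* created against the base interface):

```cpp
// Sketch: promote a base ID3D11Device to the 11.3 interface when available.
#include <d3d11_3.h>

ID3D11Device3* device3 = nullptr;
if (SUCCEEDED(device->QueryInterface(__uuidof(ID3D11Device3),
                                     reinterpret_cast<void**>(&device3))))
{
    // 11.3-only calls go through device3. QueryInterface AddRef'd it,
    // so Release() it when you're done.
}
else
{
    // OS/driver doesn't expose 11.3: fall back to the 11.0/11.2 path
    // using the original device pointer.
}
```

Storing the promoted pointer once at startup avoids repeating the query every frame.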


3. Yes, you can still call the old version of those functions. Just remember that the new functionality may not be supported by the hardware/driver, so you need to query for support. In the case of the constant buffer functionality added for 11.1, you can query by calling CheckFeatureSupport with D3D11_FEATURE_D3D11_OPTIONS, and then checking the appropriate members of the returned D3D11_FEATURE_DATA_D3D11_OPTIONS structure.
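That query might look like this (again assuming an existing ID3D11Device* named `device`):

```cpp
// Sketch: check whether the large-constant-buffer offsetting behind
// PSSetConstantBuffers1 (added in 11.1) is supported by this driver.
#include <d3d11_1.h>

D3D11_FEATURE_DATA_D3D11_OPTIONS opts = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D11_FEATURE_D3D11_OPTIONS, &opts, sizeof(opts))) &&
    opts.ConstantBufferOffsetting)
{
    // Safe to use the *SetConstantBuffers1 variants with
    // pFirstConstant/pNumConstants; otherwise stick to the old calls.
}
```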

Edited by MJP, 26 July 2016 - 07:40 PM.

#4   Members   -  Reputation: 140


Posted 24 July 2016 - 12:42 AM

I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs


So is that how DirectX Development will work these days? DirectX 11 to get shit done relatively quickly and DirectX 12 to have multithreading capabilities? 

#5   Members   -  Reputation: 438


Posted 24 July 2016 - 08:30 AM



I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs


So is that how DirectX Development will work these days? DirectX 11 to get shit done relatively quickly and DirectX 12 to have multithreading capabilities? 



I suppose you could say that. Technically, you can "get stuff done" in DX12 relatively quickly if you've mastered it already. The "slow" part is learning DX12.


"That's progress for ya". DX10 & 11 are more difficult to learn than DX9. DX12 is more difficult to learn than DX11.


It's even more annoying for me because I'm trying to move from XNA, and I went to DX11 in order to get a foundation for OpenGL 4.5 and I'm currently rewriting my "engine" in OGL 4.5. And OGL 4.5 would probably be just fine for me and I could just stick with that. Except: progress. If I don't want to get behind the curve, it's time to start learning Vulkan soon. That's basically the OGL equivalent of DX12. It's so radically different that it's not even called OpenGL anymore but Vulkan. And to learn it, I figure I'm going to have to go backwards and learn DX12 in order to learn Vulkan. I expect there will be better instruction and more books on DX12 than on Vulkan much like DX11 compared to OGL 4.


OGL 4 is a lot easier than DX11, but you would never know that by reading the OGL books. They make it about 1,000 times more complex than it needs to be, and I wouldn't have a clue what they were talking about if I didn't already know it from DX11. Learning DX11 in order to learn OGL was the right decision: you're learning something far easier, but there is very little good instruction for OGL in book form. Although I have my complaints about the DX books, they do tend to be more informative and better written, and most importantly far more numerous.


So, in order to learn Vulkan, I'm probably going to read Frank Luna's DX12 book. I already have it, just haven't had time to read it and more importantly I'm on Win 7 and need to build out an entirely new computer and install Win10 in order to do DX12. (Real shame, because I already compiled the code for a Vulkan tutorial on my old machine and Vulkan runs fine on my 3 year old graphics card in Win7. Microsoft is pretty aggressive in trying to force you to stop using Windows 7 in pretty much every way they can be.)


Another issue is that there's a lot more books out for DX11 than there are DX12. As far as I know, there's pretty much only Frank Luna's book for DX12, which is probably the one you would want, if you were going to pick authors of DX11 books to write a DX12 book. But there are about 4 times that many books for DX11. (Yes, that means there are 4 DX11 books. Ok. Maybe you can find 1 or 2 more, but 4 give or take a few.) It's a daunting task to learn DX just because there's so little info. I'm not even really that crazy about Frank Luna's stuff. He kind of assumes you're already intermediate level with DX. It's more like a text book where you are expected to get most of the knowledge from the instructor rather than the book. By itself, I never could learn this stuff from Frank's books. And I think his are the best. So, that says a lot. Fortunately, I already mostly know DX11, so I expect to get a lot out of his DX12 book.


But anyway, yep. DX11 is a bit easier to learn and if you already know it you can immediately crank stuff out. There's a lot more instruction out there to learn DX11. And you don't get into all the multi-threading and stuff which many programmers are terrified of. I haven't done any DX12 yet, but my understanding is there's a little more code for your engine in DX12, but day to day DX12 coding isn't going to be any different than DX11 speedwise or anything else. Your engine will have to know how to manage threads and stuff which will probably require several extra pages of code, but I don't expect DX12 to be more time consuming to work with than DX11 once you've mastered it and got your engine code written that you will reuse for every project you do. And even the engine code I would expect won't be that much more. Just more stuff to manage and a few more steps in getting things to render.


I noticed Frank's DX12 book really is not thicker or different from his DX11 book. You do things a bit differently. But if you already mastered DX11, DX12 shouldn't be that much different.


To "get stuff done relatively quickly" I would either use C# and MonoGame (or a game engine like Unity or Unreal) or C++ and OGL 4.5. But it seems to me that once you get your engine written it's pretty much all the same. MonoGame basically allows you to skip the part about writing your own game engine/framework and you're ready to get to work. OGL uses libraries for everything to the point of making it pretty quick to throw your game engine together. With DX11 and especially DX12, there's a lot more work putting it together without libraries. There is the DX Toolkit and whatnot, but my DX approach (especially for learning purposes) has always been to do pretty much everything from scratch and use nothing but standard C++ and DX itself. OGL lends itself to using libraries more than DX I think. You kind of have to use libraries with OGL to make it cross platform unless you want to learn the deep stuff on every platform in existence.  DX doesn't do cross-platform, so there's a lot less reason to use libraries and you only have to support Windows, making it far more practical to learn Win32 or whatever to actually do it as opposed to OGL where you call GLFW in order to manage processes and windows and such.


So, no. I would not recommend DX11 for "Getting stuff done quickly". But it's probably the best way to start learning DX12 since there's more info about it out there and it's slightly less complex. But even DX11 is kind of jumping off into the deep end of the pool. Most people stuck with DX9 for about a decade after DX10 came out because they were terrified to go to DX10 because of shaders and the increase in difficulty using it. (DX10 and DX11 are pretty much the same thing.) I think you can still find some people coding in DX9 out of fear to going beyond that.


To sum it up: DX11 is probably the way to go if you already know how to use vertex and index buffers and have a basic familiarity with HLSL. If you're not at that level of experience yet, you probably need to go get that experience somewhere before even trying to learn DirectX of any flavor. I used XNA to get there. Now that's been replaced by MonoGame. Once you know that stuff, you can learn DX11 relatively quickly (I pretty much taught myself at that point). And it's a good stepping stone to learn DX12. If there were an easy way to jump straight into DX12, I would just do that. But I don't think there's instruction out there that makes that easy. Learning DX11 first gives you the prior knowledge to be prepared to take the next step into DX12. But I think if you know DX12, you won't find DX11 code any faster to write, or all that much easier really.

Edited by BBeck, 24 July 2016 - 08:49 AM.

#6   Members   -  Reputation: 915


Posted 25 July 2016 - 11:46 AM

Use Processing or three.js to get stuff done quickly. Use DX12 to make your GPU do what it was fab'd to do.

#7   Crossbones+   -  Reputation: 4502


Posted 25 July 2016 - 12:42 PM

but day to day DX12 coding isn't going to be any different than DX11 speedwise or anything else.

But if you already mastered DX11, DX12 shouldn't be that much different.


DX12 actually is quite different. Knowing DX11 is pretty much a requirement for starting off with 12 as certain concepts carry over, but all the handy higher level tools are stripped away so you have more fine-grained control.


One area I always like to bring up is resource binding; in 11 it's simply a matter of binding the shaders you need and calling ID3D11DeviceContext::XSSetShaderResources/SetUnorderedAccessViews/SetConstantBuffers/SetSamplers/etc, and you're good to go.


In DX12 it becomes a lot more complicated. First of all you start off with constructing a root signature, which raises the question of how you want to do root signature layouts. Want to do direct root parameters for constant buffers and structured buffers? Want to set up descriptor tables? Do you want constants directly embedded into the root signature? Static samplers? How many parameters can you fit into your root signature before it spills into slow memory? What are the recommendations for the hardware architecture you're trying to target (hint: they can differ quite drastically)? How do you bundle your descriptor tables in such a way that it adheres to the resource binding tier you're targeting? How fine-grained is your root signature going to be? Are you creating a handful of large root signatures as a catch-all solution, or are you going with small specialized root signatures?


There's no general best practice here which applies to all cases, so you're going to want answers to those questions above. 
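To make the jump in complexity concrete, here is one hypothetical answer to those questions, sketched with the raw D3D12 structures (assuming an existing ID3D12Device* named `device`; register counts and layout are illustrative, not a recommendation):

```cpp
// Sketch: a small root signature with one descriptor table (SRVs t0-t3)
// and one root CBV (b0). Real apps tune this per hardware/binding tier.
#include <d3d12.h>

D3D12_DESCRIPTOR_RANGE srvRange = {};
srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
srvRange.NumDescriptors = 4;            // t0..t3
srvRange.BaseShaderRegister = 0;

D3D12_ROOT_PARAMETER params[2] = {};
params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
params[0].DescriptorTable.NumDescriptorRanges = 1;
params[0].DescriptorTable.pDescriptorRanges = &srvRange;
params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV; // root descriptor, b0
params[1].Descriptor.ShaderRegister = 0;

D3D12_ROOT_SIGNATURE_DESC desc = {};
desc.NumParameters = 2;
desc.pParameters = params;
desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

ID3DBlob* blob = nullptr;
D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, nullptr);
ID3D12RootSignature* rootSig = nullptr;
device->CreateRootSignature(0, blob->GetBufferPointer(),
                            blob->GetBufferSize(), IID_PPV_ARGS(&rootSig));
```

Note that this is the work DX11 did for you implicitly every time you called a Set*Buffers function.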


Once you have a root signature you get to choose how to deal with descriptor heaps. How are you dealing with descriptor allocation? How do you deal with descriptors which have different lifetimes (e.g. single frame vs multiple frames)? Are you going to use CPU-side staging before copying to a GPU descriptor heap?  What's your strategy for potentially carrying across bound resources when your root signature or PSO changes (if you even want this feature at all)?
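The raw material those allocation strategies manage is just a descriptor heap; a minimal, hypothetical setup might look like this (again assuming an ID3D12Device* named `device`, with an arbitrary descriptor budget):

```cpp
// Sketch: a shader-visible CBV/SRV/UAV heap -- the starting point for any
// descriptor allocation / CPU-staging scheme you build on top of it.
#include <d3d12.h>

D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 1024;   // budget the app must choose up front
heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

ID3D12DescriptorHeap* heap = nullptr;
device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

// Descriptors are addressed by byte offset; the stride is device-specific,
// so every allocator has to query it rather than hard-code it.
UINT stride = device->GetDescriptorHandleIncrementSize(
    D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
```

Everything above this (lifetime tracking, staging, carrying bindings across PSO changes) is code you write yourself in DX12.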


Again, these questions will need answers before you can continue on. It's easy enough to find a tutorial somewhere and copy-paste code which does this for you, but then what's the point of using DX12 in the first place? If you need cookie-cutter solutions, then stick with DX11. No need to shoot yourself in the foot by using an API which is much more complex than what your application requires.


Have a look at this playlist to see how deep the root signature and resource binding rabbit hole can go.



This kind of stuff pretty much applies to every single aspect of DX12. Things which you could take for granted in 11 become very serious problems in 12. Things you didn't have to worry about like resource lifetime, explicit CPU-GPU synchronization, virtual memory management, resource state transitions, resource operation barriers, pipeline state pre-building, and a lot more become serious issues you really can't ignore.


If you're shipping an application, why go through the trouble of having to deal with all of this stuff when you know that an API like DX11 will suffice? As far as I'm aware, DX11.3 has feature parity with the highest available DX12 feature level, so it's not like you're missing out on any specific features, aside from potentially having more explicit control over multithreaded rendering (which is a massive can of worms in itself).


DirectX 12 is not something you need to use to write modern graphics applications. It's something you use when you know up front that you'll get some real gains out of it.
