MoeTM

DX12: DirectX 11, 11.1, 11.2 or DirectX 12?


Recommended Posts

Hello,

 

I recently moved my DirectX context creation into a DLL, and while doing so I also wanted to upgrade my DirectX version. But I have some questions regarding the version and how to use it properly.

 

1. Which version should I use for development, 11.3 (I need conservative rasterization) or 12, with regard to driver robustness and cleanness of the API? And does hardware that supports 11.3 also support 12?

2. If I want to use DirectX 11.3 but it is not supported, and I fall back to DirectX 11.2, how do I sensibly call my DirectX functions? I mean, I end up with either an ID3D11Device2 or an ID3D11Device3.

3. If I use DirectX 11.3, can I mix calls like PSSetConstantBuffers and PSSetConstantBuffers1?

 

Hi. Can you explain a bit more about what you want to achieve?
Are you currently developing your own engine on D3D9 or 10, and do you want to upgrade/refactor?


Quote: "I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs."

 

So is that how DirectX development works these days? DirectX 11 to get shit done relatively quickly, and DirectX 12 for the multithreading capabilities?


Quote: "...but day to day DX12 coding isn't going to be any different than DX11 speed-wise or anything else."

Quote: "But if you already mastered DX11, DX12 shouldn't be that much different."
 

 

DX12 actually is quite different. Knowing DX11 is pretty much a requirement for starting off with 12 as certain concepts carry over, but all the handy higher level tools are stripped away so you have more fine-grained control.

 

One area I always like to bring up is resource binding; in 11 it's simply a matter of binding the shaders you need and calling ID3D11DeviceContext::XSSetShaderResources/SetUnorderedAccessViews/SetConstantBuffers/SetSamplers/etc, and you're good to go.
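
To make that concrete, a D3D11 binding sequence really is just a handful of context calls. Here's a minimal sketch assuming an already-created device context, SRV, constant buffer and sampler; the wrapper function name is made up for illustration:

```cpp
#include <d3d11.h>

// Minimal D3D11-style binding: set everything the pixel shader needs on slot 0.
// Assumes the views/buffers were created elsewhere; error handling omitted.
void BindPixelShaderInputs(ID3D11DeviceContext* context,
                           ID3D11ShaderResourceView* srv,
                           ID3D11Buffer* constantBuffer,
                           ID3D11SamplerState* sampler)
{
    context->PSSetShaderResources(0, 1, &srv);
    context->PSSetConstantBuffers(0, 1, &constantBuffer);
    context->PSSetSamplers(0, 1, &sampler);
}
```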

 

In DX12 it becomes a lot more complicated. First of all you start off with constructing a root signature, which raises the question of how you want to do root signature layouts. Want to do direct root parameters for constant buffers and structured buffers? Want to set up descriptor tables? Do you want constants embedded directly into the root signature? Static samplers? How many parameters can you fit into your root signature before it kicks into slow memory? What are the recommendations for the hardware architecture you're trying to target (hint: they can differ quite drastically)? How do you bundle your descriptor tables in such a way that it adheres to the resource binding tier you're targeting? How fine-grained is your root signature going to be? Are you creating a handful of large root signatures as a catch-all solution, or are you going with small specialized root signatures?
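
To give a feel for what just this first step looks like, here is a minimal sketch of one arbitrary root signature layout (one root CBV at b0 plus a one-SRV descriptor table at t0). The layout, function name and omitted error handling are purely illustrative, not a recommendation:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// One possible root signature: parameter 0 is a root CBV (b0), parameter 1 is a
// descriptor table holding a single SRV range (t0) visible to the pixel shader.
ComPtr<ID3D12RootSignature> CreateExampleRootSignature(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_RANGE srvRange = {};
    srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    srvRange.NumDescriptors = 1;
    srvRange.BaseShaderRegister = 0; // t0

    D3D12_ROOT_PARAMETER params[2] = {};
    params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV;      // root descriptor, b0
    params[0].Descriptor.ShaderRegister = 0;
    params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[1].DescriptorTable.NumDescriptorRanges = 1;
    params[1].DescriptorTable.pDescriptorRanges = &srvRange;
    params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 2;
    desc.pParameters = params;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSignature;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSignature));
    return rootSignature;
}
```

Every field in there (root CBV vs. table, shader visibility, the flags) is one of the questions above answered in one particular way, and a different engine might reasonably answer all of them differently.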

 

There's no general best practice here which applies to all cases, so you're going to want answers to those questions above. 

 

Once you have a root signature you get to choose how to deal with descriptor heaps. How are you dealing with descriptor allocation? How do you deal with descriptors which have different lifetimes (e.g. single frame vs multiple frames)? Are you going to use CPU-side staging before copying to a GPU descriptor heap?  What's your strategy for potentially carrying across bound resources when your root signature or PSO changes (if you even want this feature at all)?
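
As a taste of the plumbing involved, here is a minimal sketch of a shader-visible heap with naive linear allocation and no freeing at all; it's one of the simplest possible answers to those questions, and certainly not a general solution:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// A shader-visible CBV/SRV/UAV heap with bump allocation; slots are never recycled,
// so descriptor lifetime management is effectively "leak until shutdown".
struct LinearDescriptorHeap
{
    ComPtr<ID3D12DescriptorHeap> heap;
    UINT increment = 0;
    UINT nextFree = 0;

    void Init(ID3D12Device* device, UINT capacity)
    {
        D3D12_DESCRIPTOR_HEAP_DESC desc = {};
        desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
        desc.NumDescriptors = capacity;
        desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
        device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
        increment = device->GetDescriptorHandleIncrementSize(desc.Type);
    }

    // Hand out the next CPU handle; descriptors written here are later referenced
    // through the matching GPU handle when setting descriptor tables.
    D3D12_CPU_DESCRIPTOR_HANDLE Allocate()
    {
        D3D12_CPU_DESCRIPTOR_HANDLE handle = heap->GetCPUDescriptorHandleForHeapStart();
        handle.ptr += SIZE_T(nextFree++) * increment;
        return handle;
    }
};
```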

 

Again, these questions will need answers before you can continue on. It's easy enough to find a tutorial somewhere and copy-paste code which does this for you, but then what's the point of using DX12 in the first place? If you need cookie-cutter solutions, then stick with DX11. No need to shoot yourself in the foot by using an API which is much more complex than what your application requires.

 

Have a look at this playlist to see how deep the root signature and resource binding rabbit hole can go.

 

 

This kind of stuff pretty much applies to every single aspect of DX12. Things which you could take for granted in 11 become very serious problems in 12. Things you didn't have to worry about like resource lifetime, explicit CPU-GPU synchronization, virtual memory management, resource state transitions, resource operation barriers, pipeline state pre-building, and a lot more become serious issues you really can't ignore.
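
For instance, two of the things on that list (state transitions and CPU-GPU synchronization, which DX11 drivers handle behind your back) look roughly like this when you spell them out yourself. A minimal sketch with made-up helper names, assuming the texture really is in the render-target state and that the fence and event were created beforehand:

```cpp
#include <d3d12.h>
#include <windows.h>

// Explicit state transition: the driver no longer tracks this for you in DX12.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = texture;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdList->ResourceBarrier(1, &barrier);
}

// Explicit CPU-GPU synchronization: block the CPU until the GPU reaches fenceValue.
void WaitForGpu(ID3D12CommandQueue* queue, ID3D12Fence* fence, UINT64 fenceValue, HANDLE fenceEvent)
{
    queue->Signal(fence, fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}
```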

 

If you're shipping an application, why go through the trouble of having to deal with all of this stuff when you know that an API like DX11 will suffice? As far as I'm aware, DX11.3 has feature parity with the highest available DX12 feature level, so it's not like you're missing out on any specific features, aside from potentially having more explicit control over multithreaded rendering (which is a massive can of worms in itself).

 

DirectX 12 is not something you need to use to write modern graphics applications. It's something you use when you know up front that you'll get some real gains out of it.
