DX12: DirectX 11, 11.1, 11.2, or DirectX 12


Hello,

 

I recently moved my DirectX context creation into a DLL, and while doing so I also wanted to upgrade my DirectX version. But I have some questions regarding the versions and how to use them properly.

 

1. Which version should I develop against, 11.3 (I need conservative rasterization) or 12, with regard to driver robustness and cleanness of the API? And does hardware that supports 11.3 also support 12?

2. If I want to use DirectX 11.3 but it isn't supported, so I partially fall back to DirectX 11.2, how do I sensibly call my DirectX functions? I mean, I end up with either an ID3D11Device2 or an ID3D11Device3 (roughly the situation sketched below).

3. If I use DirectX 11.3, can I mix calls like PSSetConstantBuffers and PSSetConstantBuffers1?
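
To make questions 1 and 2 concrete, this is roughly the fallback I have in mind (just a sketch, untested; the function name is mine):

#include <d3d11_3.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// pDevice is the base ID3D11Device returned by D3D11CreateDevice.
void QueryBestDevice(ID3D11Device* pDevice)
{
    ComPtr<ID3D11Device3> device3;
    ComPtr<ID3D11Device2> device2;

    if (SUCCEEDED(pDevice->QueryInterface(IID_PPV_ARGS(&device3))))
    {
        // 11.3 runtime available; now check the hardware feature.
        D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
        if (SUCCEEDED(device3->CheckFeatureSupport(
                D3D11_FEATURE_D3D11_OPTIONS2, &opts, sizeof(opts))) &&
            opts.ConservativeRasterizationTier !=
                D3D11_CONSERVATIVE_RASTERIZATION_NOT_SUPPORTED)
        {
            // Take the 11.3 path with conservative rasterization.
            return;
        }
    }
    if (SUCCEEDED(pDevice->QueryInterface(IID_PPV_ARGS(&device2))))
    {
        // Fall back to the 11.2 path.
    }
}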

 

Hi. Can you explain a bit more what you want to achieve?
Are you developing your own engine on D3D9 or 10 right now, and do you want to upgrade/refactor?


Quote: "I would only go down that route if you think you'll really benefit from the reduced CPU overhead and multithreading capabilities, or if you're looking for an educational experience in keeping up with the latest APIs."

 

So is that how DirectX development will work these days? DirectX 11 to get shit done relatively quickly, and DirectX 12 to have multithreading capabilities?


Use Processing or three.js to get stuff done quickly. Use DX12 to make your GPU do what it was fab'd to do.


Quote: "... but day-to-day DX12 coding isn't going to be any different than DX11 speed-wise or anything else. But if you already mastered DX11, DX12 shouldn't be that much different."

DX12 actually is quite different. Knowing DX11 is pretty much a requirement for starting off with 12, as certain concepts carry over, but all the handy higher-level tools are stripped away so you have more fine-grained control.

 

One area I always like to bring up is resource binding; in 11 it's simply a matter of binding the shaders you need and calling ID3D11DeviceContext::XSSetShaderResources/SetUnorderedAccessViews/SetConstantBuffers/SetSamplers/etc, and you're good to go.
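
For reference, the whole D3D11 flow fits in a few lines (a sketch; assumes the shader, views and buffer were created elsewhere, and the names are mine):

#include <d3d11.h>

void BindAndDraw(ID3D11DeviceContext* context,
                 ID3D11PixelShader* ps,
                 ID3D11ShaderResourceView* srv,
                 ID3D11Buffer* cbuffer,
                 ID3D11SamplerState* sampler,
                 UINT vertexCount)
{
    context->PSSetShader(ps, nullptr, 0);
    context->PSSetShaderResources(0, 1, &srv);     // t0
    context->PSSetConstantBuffers(0, 1, &cbuffer); // b0
    context->PSSetSamplers(0, 1, &sampler);        // s0
    context->Draw(vertexCount, 0);
}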

 

In DX12 it becomes a lot more complicated. First of all you start off with constructing a root signature, which raises the question of how you want to do root signature layouts. Want to use direct root parameters for constant buffers and structured buffers? Want to set up descriptor tables? Do you want constants embedded directly into the root signature? Static samplers? How many parameters can you fit into your root signature before it spills into slower memory? What are the recommendations for the hardware architecture you're trying to target (hint: they can differ quite drastically)? How do you bundle your descriptor tables in such a way that they adhere to the resource binding tier you're targeting? How fine-grained is your root signature going to be? Are you creating a handful of large root signatures as a catch-all solution, or are you going with small specialized root signatures?
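
Just to illustrate the kind of decisions involved, here's a sketch of one possible small root signature: a root CBV at b0, a table of four SRVs at t0-t3, and one static sampler. It uses the CD3DX12 helpers from the d3dx12.h header that ships with Microsoft's samples, error handling is omitted, and it's one layout among many, not a recommendation:

#include <d3d12.h>
#include "d3dx12.h" // helper structs from Microsoft's samples
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateExampleRootSignature(ID3D12Device* device)
{
    CD3DX12_DESCRIPTOR_RANGE srvRange;
    srvRange.Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0); // t0..t3

    CD3DX12_ROOT_PARAMETER params[2];
    params[0].InitAsConstantBufferView(0);         // b0 directly in the root
    params[1].InitAsDescriptorTable(1, &srvRange); // SRVs through a table

    CD3DX12_STATIC_SAMPLER_DESC sampler(0); // static sampler at s0

    CD3DX12_ROOT_SIGNATURE_DESC desc;
    desc.Init(2, params, 1, &sampler,
              D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1,
                                &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(),
                                blob->GetBufferSize(), IID_PPV_ARGS(&rootSig));
    return rootSig;
}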

 

There's no general best practice here which applies to all cases, so you're going to want answers to those questions above. 

 

Once you have a root signature you get to choose how to deal with descriptor heaps. How are you dealing with descriptor allocation? How do you deal with descriptors which have different lifetimes (e.g. single frame vs multiple frames)? Are you going to use CPU-side staging before copying to a GPU descriptor heap?  What's your strategy for potentially carrying across bound resources when your root signature or PSO changes (if you even want this feature at all)?
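
For reference, the bare mechanics of a shader-visible heap with a trivial linear allocator look roughly like this (a sketch, single-frame only; a real allocator needs lifetime tracking, per-frame rings and CPU staging, which is exactly the point):

#include <d3d12.h>
#include <utility>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Trivial bump allocator over one shader-visible CBV/SRV/UAV heap.
struct DescriptorAllocator
{
    ComPtr<ID3D12DescriptorHeap> heap;
    UINT increment = 0;
    UINT next = 0;

    void Init(ID3D12Device* device, UINT capacity)
    {
        D3D12_DESCRIPTOR_HEAP_DESC desc = {};
        desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
        desc.NumDescriptors = capacity;
        desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
        device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
        increment = device->GetDescriptorHandleIncrementSize(desc.Type);
    }

    // CPU handle to write the descriptor into, GPU handle to bind with.
    std::pair<D3D12_CPU_DESCRIPTOR_HANDLE, D3D12_GPU_DESCRIPTOR_HANDLE> Allocate()
    {
        D3D12_CPU_DESCRIPTOR_HANDLE cpu = heap->GetCPUDescriptorHandleForHeapStart();
        D3D12_GPU_DESCRIPTOR_HANDLE gpu = heap->GetGPUDescriptorHandleForHeapStart();
        cpu.ptr += SIZE_T(next) * increment;
        gpu.ptr += UINT64(next) * increment;
        ++next; // nothing is ever freed; reset per frame
        return { cpu, gpu };
    }
};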

 

Again, these questions will need answers before you can continue on. It's easy enough to find a tutorial somewhere and copy-paste code which does this for you, but then what's the point of using DX12 in the first place? If you need cookie-cutter solutions, then stick with DX11. No need to shoot yourself in the foot by using an API which is much more complex than what your application requires.

 

Have a look at this playlist to see how deep the root signature and resource binding rabbit hole can go.

 

 

This kind of stuff pretty much applies to every single aspect of DX12. Things which you could take for granted in 11 become very serious problems in 12. Things you didn't have to worry about like resource lifetime, explicit CPU-GPU synchronization, virtual memory management, resource state transitions, resource operation barriers, pipeline state pre-building, and a lot more become serious issues you really can't ignore.
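
To give a taste, here's what two of those explicit pieces look like in isolation: a state transition barrier and a CPU-GPU fence wait (a sketch; resource, queue, fence and event creation omitted, names are mine):

#include <windows.h>
#include <d3d12.h>

// Explicit state transition: render target -> pixel shader resource.
void TransitionToSRV(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* tex)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = tex;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

// Explicit CPU-GPU sync: block the CPU until the queue reaches fenceValue.
void WaitForGpu(ID3D12CommandQueue* queue, ID3D12Fence* fence,
                UINT64 fenceValue, HANDLE event)
{
    queue->Signal(fence, fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, event);
        WaitForSingleObject(event, INFINITE);
    }
}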

 

If you're shipping an application, why go through the trouble of having to deal with all of this stuff when you know that an API like DX11 will suffice? As far as I'm aware, DX11.3 has feature parity with the highest available DX12 feature level, so it's not like you're missing out on any specific features, aside from potentially having more explicit control over multithreaded rendering (which is a massive can of worms in itself).

 

DirectX 12 is not something you need to use to write modern graphics applications. It's something you use when you know up front that you'll get some real gains out of it.
