DX12: DirectX 11 For Small Scale And DirectX 12 For Large Scale?

arjansingh00    203

So I recently read a few posts on DirectX 11 vs 12, and based on everything I've read, I've come to the conclusion that DirectX 11 is good for almost all tasks, and you should only use DirectX 12 to get more control over optimizations if you think you'll benefit from it, e.g. if you're doing a large-scale game.

What are your thoughts, guys? Am I looking at this completely wrong? Should we all just use DirectX 12 and forget about 11?

rave3d    691

DirectX 11 is still good and state of the art.

Also, for beginners I would recommend D3D11 because it's much easier.

DirectX 12 and Vulkan you should only use if you are an experienced graphics programmer who knows what they're doing.

I believe 80% of hobby or indie game projects won't ever come to a stage where they really benefit from the advantages of DX12 or Vulkan.

Even AAA titles like Rise of the Tomb Raider don't get a real performance boost if you have a fairly modern CPU.

The optimizations of these APIs take the most effect if your game is CPU-limited, because of the lower driver overhead.

BBeck    826

I posted on this topic just the other day. But I think the decision of whether to use DX11 or DX12 is mostly a question of whether your users' computers can support DX12.

 

I mean, yes, DX12 is more difficult than DX11. DX11 is more difficult than DX10. DX10 is more difficult than DX9 and OpenGL. OpenGL is more difficult than MonoGame. MonoGame is more difficult than Unity. If you want easy: Unity. But DX11 is so difficult that I wouldn't say it's really "easier" than DX12. Depending on where you're coming from knowledge-wise, DX11 is an enormous learning curve. I say this never having written a DX12 program and only having compiled a Vulkan tutorial. But I think saying DX12 is more difficult than DX11 is like saying it's more difficult to walk to New York from LA than it is to walk from LA to Boston. By the time you've managed to walk to Boston, New York isn't that much further; you've already made it to Boston. If New York is where you really want to be, then why not just keep going?

 

I think there's more opportunity to mess up in a really big way and have your game crash in DX12. (But then again, you were already working with unmanaged code and COM in DX11.) Multithreading always scares people. So, it's definitely more difficult. But by the time you've learned HLSL just so you can draw something to the screen, figured out how to deal with the Windows OS (at least enough to get a process and control the window you are running in), and dealt with COM, "you've come a long ways baby" (and that doesn't even get into writing your own Python scripts to extract modeling data from Blender, writing your own modeling class, learning WinSock for Internet game play, etc.). If you can handle all that, I figure multithreading can't be that much more difficult.

 

From what I read, DX12 handles resources far better than DX11, but it means extra steps on your part to take responsibility for those resources. That's pretty much the same kind of difference as between DX9 and DX10/11, though.

 

But as DX becomes more complicated, it's a shame it also becomes more difficult to learn. Then again, I'm not sure DX has ever been easy. I tried for the better part of a decade to learn 3D programming in DX9 with no success to speak of. By the time I was actually ready, I ended up teaching myself DX11, and I found it reasonably easy. But that was only because I had over a decade of experience elsewhere. So, maybe it is good to learn in stepping stones.

 

Overall though, it seems to me that if you are going to learn DX12, you should probably use it for everything unless a simpler tool is called for. But if a simpler tool is called for, DX11 is probably too much too. For example, I needed a tool to read my model files and examine the data in a human-readable format, so I wrote a C# program. I might prototype something in Unity, though what I actually use for prototyping is XNA; several things that I was worried about tackling directly in DX11 I prototyped in XNA first.

 

Again though, I've never written a single DX12 program. The closest I've come is programming in DX11 and spending a Saturday compiling a Vulkan tutorial. I've also flipped through Frank Luna's DX12 book and found it to be remarkably similar to his DX11 book.

 

I would think that if you're going to do DX12, just do it and stick with it. At that point you've already dealt with the difficult part and you probably need the practice anyway even if it is a smaller project.

 

Another way of looking at it, though, is that most of your beginner projects would probably run just fine on DX9. I used XNA for years, and it was built on top of DX9. Any time I ran into any type of performance issue, it was because of the way I had coded it, not a real problem of exceeding the framework's limitations. So, you might learn DX11 before DX12 just because there's so much more information out there to help you learn DX11, and it's a pretty decent stepping stone to DX12.

rave3d    691

@BBeck:

 

I don't really agree with you. The one thing I'll grant: if you want it really simple, take Unity or Unreal Engine.

 

DX11 hides a lot of work from you, which you need to take care of yourself in DX12 or Vulkan. If you are fresh to any graphics API, I believe DX11 or OpenGL are far better choices than Vulkan/DX12. If you already know the other APIs and know where your bottlenecks are and why they are there, the new APIs will be the right choice. Otherwise, if you've never touched an API like DX or OpenGL, you should stick with the much simpler and well-documented DX11 or OpenGL APIs until you have a fair understanding of how the graphics card and all the related machinery works.

 

When someone starts programming, usually no one recommends starting with assembler to make a simple text-based console game.

 

Regarding hardware support: if you set the desired feature level, all DX11 hardware can run your DX12 programs, and you still get some of the advantages of DX12.
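Just to illustrate, a minimal sketch (from memory, not compiled; pass nullptr as the adapter to get the default one):

```cpp
// Sketch: create a D3D12 device on DX11-class hardware by asking for
// feature level 11_0 as the minimum. Still requires a D3D12-capable OS
// and driver; link against d3d12.lib.
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>

Microsoft::WRL::ComPtr<ID3D12Device> CreateDeviceOn11ClassHardware(IDXGIAdapter* adapter)
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    HRESULT hr = D3D12CreateDevice(adapter,                 // nullptr = default adapter
                                   D3D_FEATURE_LEVEL_11_0,  // minimum, not a target
                                   IID_PPV_ARGS(&device));
    return SUCCEEDED(hr) ? device : nullptr;
}
```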

arjansingh00    203

Even AAA titles like Rise of the Tomb Raider don't get a real performance boost if you have a fairly modern CPU.
Yeah, I kinda noticed that while playing, but it's an unoptimized title anyway. You're also kinda right, because if you've played the new Star Wars Battlefront by DICE, powered by the Frostbite engine, that's on DX11 and is probably the most optimized title I've played. The studio's next title, Battlefield 1 (which runs on either DX11 or DX12, whichever you choose), is in my opinion the best-looking game yet; it should be a good way to compare the performance difference between the two APIs and see whether there is a real increase or not.

arjansingh00    203

Don't just use it blindly because it's newer. It's an alternative to DX11, _not_ a replacement for it.

Never really thought about it that way. Seeing that there haven't been many games using DirectX 12 yet, what kind of performance increases (or maybe decreases) will there be on a well-coded and optimized DirectX 12 engine compared to a DirectX 11 engine? Is it really that beneficial?

iedoc    2525

Don't just use it blindly because it's newer. It's an alternative to DX11, _not_ a replacement for it.

Never really thought about it that way. Seeing that there haven't been many games using DirectX 12 yet, what kind of performance increases (or maybe decreases) will there be on a well-coded and optimized DirectX 12 engine compared to a DirectX 11 engine? Is it really that beneficial?
If you do it right, you can really get a lot of performance out of DX12 compared to DX11. If you have 3DMark, try the API comparison runs for DX11 and DX12. The test runs as many draw calls per frame as possible before FPS drops below 30. On my hardware, OpenGL and DX11 got almost 50k draw calls. DX12, no joke, got over 500k draw calls per frame.

I like what Hodgman said: it's too bad DX12 is so scary, since it really is a small API, with around 200 API calls total, and so much potential. The problem with DX12, and why it's more difficult to use than DX11, is that it's not just about knowing the API anymore; it's all about architecture and making your own assumptions about your application. This is where the huge potential for performance comes from in DX12: DX11 made a lot of assumptions and did almost everything for you, most prominently the memory management.
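To make that concrete, here's roughly what creating one GPU vertex buffer looks like in each API (a from-memory sketch, error handling omitted; d3dx12.h is the helper header Microsoft ships with its D3D12 samples):

```cpp
#include <d3d11.h>
#include <d3d12.h>
#include "d3dx12.h" // CD3DX12_* helpers from Microsoft's D3D12 samples

// DX11: the driver picks the memory, does the upload, and tracks hazards.
ID3D11Buffer* MakeVertexBuffer11(ID3D11Device* device, const void* data, UINT size)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = size;
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA init = { data };
    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer); // upload handled for you
    return buffer;
}

// DX12: you choose the heap, and the upload and state tracking are on you.
ID3D12Resource* MakeVertexBuffer12(ID3D12Device* device, UINT64 size)
{
    CD3DX12_HEAP_PROPERTIES heap(D3D12_HEAP_TYPE_DEFAULT);
    CD3DX12_RESOURCE_DESC   desc = CD3DX12_RESOURCE_DESC::Buffer(size);

    ID3D12Resource* buffer = nullptr;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                    IID_PPV_ARGS(&buffer));

    // Still your job from here: create a second buffer in an UPLOAD heap,
    // memcpy the vertex data into it, record a CopyBufferRegion on a command
    // list, execute it, wait on a fence, then barrier this buffer into the
    // VERTEX_AND_CONSTANT_BUFFER state before drawing with it.
    return buffer;
}
```

That second half is basically what the DX11 driver was quietly doing for you on every CreateBuffer call.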

Just something to think about: you could basically build DX11 on top of DX12. DX11 can almost be looked at as a wrapper around DX12.

I think the name DX12 is a little misleading, as others have said; it's not really bringing anything new to the table, but rather giving you much more control over the graphics hardware.

Radikalizm    4807

Just something to think about: you could basically build DX11 on top of DX12. DX11 can almost be looked at as a wrapper around DX12.

 

Microsoft actually did this as a porting aid for bringing DirectX 11 applications to DirectX 12; it's called D3D11On12. We briefly evaluated it as a tool for bringing a title over to DirectX 12 when it was still in its EAP stage. I don't know if it was kept up to date; to be honest with you, it doesn't seem like the right way to approach a DirectX 12 port in retrospect.

 

I think the name DX12 is a little misleading, as others have said; it's not really bringing anything new to the table, but rather giving you much more control over the graphics hardware.

 

Completely agreed. It seems to be creating a lot of confusion, especially in hobbyist scenes, where people now feel the need to move to this new API thinking that DirectX 11 is deprecated and outdated. It's really, really not! I can't stress this enough.

 

<rant>

 

I think this is also a problem with very low-level software technologies like these being exposed to the gaming community and being seen as must-haves for staying competitive in the gaming market, whether your game will benefit from them or not. DirectX 12 and Vulkan are the new edgy buzzwords which gaming enthusiasts can use to judge and compare games. There is this idea that having your game run on DirectX 12 will automatically make it faster and more graphically impressive, because 12 is a larger number than 11 and therefore it must be better.

 

I have lived and breathed DirectX 12 for a good year now, working on bringing an existing DirectX 11 engine over to DirectX 12 (because of reasons), and it has been an incredibly challenging endeavor. Only recently have we been able to actually get more out of 12 than we could get out of 11 within the architecture that was in place, and that was with a team of talented and experienced engineers.

 

I am all for people broadening their horizons and teaching themselves how to work with this genuinely exciting new tech, but for the love of all that is good, if you're actually trying to ship something on your own tech within a reasonable time frame and without an experienced team working on this full time you're just so much better off sticking with 11. Building a 12-based engine just takes up so many of your engineering resources while architecting, implementing and debugging your engine (and debugging in 12 can be ruthless) that it's just not worth the trouble if you don't have a very good reason to go for 12 in the first place.

 

AAA developers who know they can bring GPU drivers to their knees in DirectX 11 or companies doing heavy GPU simulations with lots of data throughput know up front they can benefit from 12, so it makes sense for them to use it if it helps them in the long run. It's exactly these companies who reached out to hardware vendors and companies like Microsoft to state that they would be interested in such an API, eventually resulting in the development of Mantle and subsequently DX12 and Vulkan. As an indie developer or hobbyist it seems very unlikely to me that you'd ever need or benefit from something like 12.

 

 

I know I can sound like a grump and a broken record by writing all of these DirectX 12 posts, and I also know that there are plenty of people who think I'm wrong and who don't want to hear this stuff, but I just really want to make the point that you don't have to use 12, and that it's usually a better idea to stick with 11 if you want to actually build cool and exciting graphics techniques. If there's anything I like to see from a community like this, it's people building cool new crazy exciting graphics stuff, and I feel like 11 will get you there much faster than 12 will.

 

</rant>

SeanMiddleditch    17565

Just something to think about: you could basically build DX11 on top of DX12. DX11 can almost be looked at as a wrapper around DX12.


Which is called D3D11On12. :P

https://msdn.microsoft.com/en-us/library/windows/desktop/dn913195(v=vs.85).aspx

I don't know if it's just a dumb wrapper or if it's actually an optimized implementation, though.
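From skimming that page, wiring it up looks roughly like this (an untested sketch based on the docs, error handling omitted):

```cpp
// Layer a D3D11 device/context on top of an existing D3D12 device and queue.
// D3D11 work recorded through the wrapped context gets translated into
// submissions on the D3D12 queue you hand in. Link against d3d11.lib.
#include <d3d11on12.h>
#include <wrl/client.h>

void CreateWrappedDevice(ID3D12Device* device12, ID3D12CommandQueue* queue)
{
    Microsoft::WRL::ComPtr<ID3D11Device>        device11;
    Microsoft::WRL::ComPtr<ID3D11DeviceContext> context11;

    IUnknown* queues[] = { queue };
    D3D11On12CreateDevice(device12,
                          D3D11_CREATE_DEVICE_BGRA_SUPPORT, // e.g. for D2D interop
                          nullptr, 0,                // default feature levels
                          queues, _countof(queues),  // queues the 11 work feeds into
                          0,                         // node mask (single GPU)
                          &device11, &context11,
                          nullptr);                  // chosen feature level (ignored)
}
```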

Mona2000    1967

Which is called D3D11On12. :P

https://msdn.microsoft.com/en-us/library/windows/desktop/dn913195(v=vs.85).aspx

I don't know if it's just a dumb wrapper or if it's actually an optimized implementation, though.

It's a dumb wrapper meant for using D3D11-compatible libraries in a D3D12 application. It can't really compete with the D3D11 driver optimizations.

SoldierOfLight    2150

Which is called D3D11On12. :P
https://msdn.microsoft.com/en-us/library/windows/desktop/dn913195(v=vs.85).aspx
I don't know if it's just a dumb wrapper or if it's actually an optimized implementation, though.

It's a dumb wrapper meant for using D3D11-compatible libraries in a D3D12 application. It can't really compete with the D3D11 driver optimizations.

D3D11On12 author here. It's not optimized yet, but should be able to compete with some DX11 driver implementations before long.

Mona2000    1967

Are we talking Intel implementations or NVIDIA implementations?  :P

Impressive nonetheless; I had no idea it was that advanced.

SeanMiddleditch    17565
Also, for anyone looking for "easy mode" DX12, at least look at DirectXTK12, which is a DX12 port of the DX11-oriented https://github.com/Microsoft/DirectXTK. It's unfortunate that they're separate projects since so much code is shared, but it is what it is.

The nice bit is that you can compare the implementation of common types like SpriteBatch to really get a feel for what the API and conceptual differences are between DX11 and DX12.
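For a quick taste of that difference, a from-memory sketch (check each repo's headers for the exact signatures; both toolkits reuse the same header and namespace names, so each half assumes it's built against its own library, and you'd normally create the SpriteBatch objects once at load time rather than per call):

```cpp
#include <memory>
#include <DirectXMath.h>
#include <SpriteBatch.h>

// DirectXTK (DX11): the device context hides state tracking and uploads.
void DrawSprite11(ID3D11DeviceContext* context, ID3D11ShaderResourceView* texture)
{
    auto sprites = std::make_unique<DirectX::SpriteBatch>(context);
    sprites->Begin();
    sprites->Draw(texture, DirectX::XMFLOAT2(100.f, 100.f));
    sprites->End();
}

// DirectXTK12: you describe the render target formats up front so it can
// build a PSO, run an explicit upload batch for its internal resources, and
// pass a command list plus a descriptor handle (not an SRV pointer) per draw.
#include <ResourceUploadBatch.h>
#include <RenderTargetState.h>

void DrawSprite12(ID3D12Device* device, ID3D12CommandQueue* queue,
                  ID3D12GraphicsCommandList* cmdList,
                  D3D12_GPU_DESCRIPTOR_HANDLE texture, DirectX::XMUINT2 texSize,
                  const D3D12_VIEWPORT& viewport)
{
    DirectX::RenderTargetState rtState(DXGI_FORMAT_R8G8B8A8_UNORM,
                                       DXGI_FORMAT_D32_FLOAT);
    DirectX::SpriteBatchPipelineStateDescription pd(rtState);

    DirectX::ResourceUploadBatch upload(device);
    upload.Begin();
    auto sprites = std::make_unique<DirectX::SpriteBatch>(device, upload, pd);
    upload.End(queue).wait(); // block until internal uploads finish

    sprites->SetViewport(viewport);
    sprites->Begin(cmdList);
    sprites->Draw(texture, texSize, DirectX::XMFLOAT2(100.f, 100.f));
    sprites->End();
}
```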

I don't think there's any particular advantage to using DirectXTK12 rather than just using DirectXTK (11), as far as I know. DXTK12 doesn't do all of the optimizations necessary to fully make up for the lack of DX11 driver magic that you'd want in a "real" graphics engine, but it does go a fair bit further than most of the sample/demo code I've seen to date.
