DX12: DirectX 11 for small scale and DirectX 12 for large scale



So I recently read a few posts on DirectX 11 vs. 12, and based on everything I've read I've come to the conclusion that DirectX 11 is good for most tasks, and that you should only use DirectX 12 to get more control over optimizations if you think you'll benefit from it, e.g. if you're doing a large-scale game.

What are your thoughts, guys? Am I looking at this completely wrong? Should we all just use DirectX 12 and forget about 11?


DirectX 11 is still good and state of the art.

For beginners I would also recommend D3D11, because it's much easier.

DirectX 12 and Vulkan should only be used if you are an experienced graphics programmer who knows what they're doing.

I believe 80% of hobby or indie game projects won't ever reach a stage where they really benefit from the advantages of DX12 or Vulkan.

Even AAA titles like Rise of the Tomb Raider don't get a real performance boost if you have a fairly modern CPU.

The optimizations of these APIs have the most effect when your game is CPU-limited, because of the lower driver overhead.

Edited by mgubisch


I posted on this topic just the other day, but I think the decision of whether to use DX11 or DX12 is mostly a question of whether your users' computers can support DX12.

I mean, yes, DX12 is more difficult than DX11. DX11 is more difficult than DX10. DX10 is more difficult than DX9 and OpenGL. OpenGL is more difficult than MonoGame. MonoGame is more difficult than Unity. If you want easy: Unity. But DX11 is so difficult that I wouldn't say it's really "easier" than DX12. Depending on where you're coming from knowledge-wise, DX11 is an enormous learning curve. I say this never having written a DX12 program and only having compiled a Vulkan tutorial. But I think saying DX12 is more difficult than DX11 is like saying it's more difficult to walk to New York from LA than it is to walk from LA to Boston. By the time you've managed to walk to Boston, New York isn't that much further. You've already made it to Boston. If New York is where you really want to be, then why not just keep going?

I think there's more opportunity to mess up in a really big way and have your game crash in DX12. (But then again, you were already working with unmanaged code and COM in DX11.) Multithreading always scares people, so it's definitely more difficult. But by the time you've learned HLSL just so you can draw something to the screen, figured out how to deal with the Windows OS (at least enough to get a process and control the window you are running in), and dealt with COM, "you've come a long way, baby" (and that doesn't even get into writing your own Python scripts to extract modeling data from Blender, writing your own model class, learning Winsock for Internet play, etc.). If you can handle all that, I figure multithreading can't be that much more difficult.

From what I've read, DX12 handles resources far better than DX11, but it means extra steps on your part to take responsibility for those resources. That's pretty much the same kind of difference as between DX9 and DX10/11.
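To give a flavor of those extra steps, here's a minimal sketch, assuming a commandList (ID3D12GraphicsCommandList*) and a swap-chain backBuffer already exist, of the explicit resource state tracking DX12 expects from you:

    // Transition a back buffer from "presentable" to "render target".
    // DX11's driver tracked resource states implicitly; in DX12 you record them yourself.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = backBuffer; // assumed ID3D12Resource* from the swap chain
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
    commandList->ResourceBarrier(1, &barrier);

Forget a transition like that and the debug layer complains, or you get corruption on real hardware; this is exactly the "mess up in a really big way" category.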

 

But as DX becomes more complicated, it's a shame it becomes more difficult to learn. Then again, I'm not sure DX has ever been easy. I tried for the better part of a decade to learn 3D programming in DX9 with no success to speak of. By the time I was actually ready, I ended up teaching myself DX11, and I found it reasonably easy. But that was only because I had over a decade of experience elsewhere. So maybe it is good to learn in stepping stones.

Overall though, it seems to me that if you are going to learn DX12, you should probably use it for everything unless a simpler tool is called for. But if a simpler tool is called for, DX11 is probably too much too. For example, I needed a tool to read my model files and examine the data in a human-readable format, so I wrote a C# program. I might prototype something in Unity, but what I actually use for prototyping is XNA. Several things that I was worried about tackling directly in DX11 I prototyped in XNA first.

Again though, I've never written a single DX12 program. The closest I've come is programming in DX11 and spending a Saturday compiling a Vulkan tutorial. I've also flipped through Frank Luna's DX12 book and found it remarkably similar to his DX11 book.

I would think that if you're going to do DX12, just do it and stick with it. At that point you've already dealt with the difficult part, and you probably need the practice anyway, even if it is a smaller project.

Another way of looking at it, though, is that most of your beginner projects would probably run just fine on DX9. I used XNA for years, and it was built on top of DX9. Any time I ran into any kind of performance issue, it was because of the way I had coded it, not a real problem of exceeding the framework's limitations. So you might learn DX11 before DX12 just because there's so much more information out there to help you learn DX11, and it's a pretty decent stepping stone to DX12.

Edited by BBeck


@BBeck:

 

I don't really agree with you. The one thing I will say: if you want it really simple, take Unity or Unreal Engine.

 

DX11 hides a lot of work from you which you need to take care of yourself in DX12 or Vulkan. If you are fresh to any graphics API, I believe DX11 or OpenGL are far better choices than Vulkan/DX12. If you already know the other APIs and know where your bottlenecks are and why they are there, the new APIs will be the right choice. Otherwise, someone who has never touched an API like DX or OpenGL should stick with the much simpler and well-documented DX11 or OpenGL until they have a fair understanding of how the graphics card and all the surrounding machinery work.
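One concrete piece of that hidden work is CPU/GPU synchronization. DX11 does it behind Map and Present; in DX12 you wait on a fence yourself. A minimal sketch, assuming device (ID3D12Device*) and commandQueue (ID3D12CommandQueue*) were created earlier:

    // Block the CPU until the GPU has finished all work submitted so far.
    ID3D12Fence *fence = NULL;
    UINT64 fenceValue = 1;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE fenceEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    commandQueue->Signal(fence, fenceValue);       // GPU writes fenceValue when it gets here
    if (fence->GetCompletedValue() < fenceValue)   // has the GPU reached the signal yet?
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE); // sleep until it has
    }

Get this wrong (for example, reuse a command allocator the GPU is still reading) and you crash or corrupt a frame; DX11 simply never let you make that mistake.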

 

When someone starts programming, usually nobody recommends starting with assembly to make a simple text-based console game.

 

Regarding hardware support: if you set the desired feature level, DX11-class hardware can run your DX12 programs (given a DX12-capable OS and driver), and you still get some of the advantages of DX12.
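For reference, that feature level is just a parameter at device creation; a sketch (error handling omitted):

    // Ask for a DX12 device that only requires DX11-class hardware.
    // The machine still needs a DX12-capable OS and driver (Windows 10 / WDDM 2.0).
    ID3D12Device *device = NULL;
    HRESULT hr = D3D12CreateDevice(
        NULL,                    // default adapter
        D3D_FEATURE_LEVEL_11_0,  // minimum feature level the app is written against
        IID_PPV_ARGS(&device));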


"Even AAA titles like Rise of the Tomb Raider don't get a real performance boost if you have a fairly modern CPU."
Yeah, I kinda noticed that while playing, but it's an unoptimized title anyway. You're also kinda right: if you've played the new Star Wars Battlefront by DICE, powered by the Frostbite engine, that's on DX11 and is probably the most optimized title I've played. The studio's next title, Battlefield 1 (which runs on either DX11 or DX12, whichever you choose), and which in my opinion is the best-looking game, should be a good way to compare the performance difference between the two APIs and see whether there is a real increase or not.


"Don't just use it blindly because it's newer. It's an alternative to DX11, _not_ a replacement for it."

I never really thought about it that way. Seeing that there haven't been many games using DirectX 12 yet, what kind of performance increases (or maybe decreases) will there be in a well-coded and optimized DirectX 12 engine compared to a DirectX 11 engine? Is it really that beneficial?


If you do it right, you can really get a lot of performance out of DX12 compared to DX11. If you have 3DMark, try the API Overhead feature test for DX11 and DX12: it issues as many draw calls per frame as it can before the frame rate drops below 30 FPS. On my hardware, OpenGL and DX11 got almost 50k draw calls. DX12, no joke, got over 500k draw calls per frame.

I like what Hodgman said, how it's too bad DX12 is so scary, since it really is a small API, with around 200 API calls in total, and so much potential. The problem with DX12, and why it's more difficult to use than DX11, is that it's not just about knowing the API anymore; it's all about architecture and making your own assumptions about your application. This is where the huge potential for performance in DX12 comes from: DX11 made a lot of assumptions and did almost everything for you, most prominently the memory management.

Just something to think about: you could basically build DX11 on top of DX12. DX11 can almost be looked at as a wrapper around DX12.

I think the name DX12 is a little misleading; as others have said, it's not really bringing anything new to the table, but rather giving you much more control over the graphics hardware.