  • Similar Content

    • By Jason Smith
      While working on a project using D3D12 I was getting an exception being thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is using plain C so it uses the COBJMACROS. The following application replicates the problem happening in the project.
      #define COBJMACROS
      #pragma warning(push, 3)
      #include <Windows.h>
      #include <d3d12.h>
      #include <dxgi1_4.h>
      #pragma warning(pop)

      IDXGIFactory4 *factory;
      ID3D12Device *device;
      ID3D12DescriptorHeap *rtv_heap;

      int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
      {
          (hinst), (pinst), (cline), (cshow);

          HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
          hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, &device);

          D3D12_DESCRIPTOR_HEAP_DESC desc;
          desc.NumDescriptors = 1;
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
          desc.NodeMask = 0;
          hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

          D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
          (rtv);
      }
      The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
      mov  qword ptr [rdx],rax
      which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.
       
    • By lubbe75
      As far as I understand, there is no real random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious? 
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport (see the short sketch after this list), and while it works on an Nvidia 1050, the Vulkan debug layer is throwing errors that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
    • By NikiTo
      Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.
       
      if (color.A < 0.1f)
      {
          //discard;
          clip(-1);
      }
      // tons of reads of textures following here
      // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      MSDN says:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the neighboring pixels, which took the approved cheap branch, stall for nothing? That would be crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save any workload at all, as the pixel has to stall until its long, huge neighbor shaders complete.
      Initially I wanted to use the stencil buffer to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding the stall of the pixel shader execution flow. I initially assumed that depth/stencil discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the Output Merger anyway? Handling of stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will the GPU already start using the freed-up resources to render another pixel?



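      Regarding the Vulkan clip-space question above: a minimal sketch of the flipped-viewport approach it mentions. A negative viewport height is only legal when VK_KHR_maintenance1 is enabled (it was promoted to core in Vulkan 1.1), which is likely why the validation layer complains without it. Here cmd and extent are placeholder names for an already-recording command buffer and the framebuffer size.

          #include <vulkan/vulkan.h>

          /* Flip the Y axis with a negative-height viewport
             (requires VK_KHR_maintenance1, core in Vulkan 1.1). */
          VkViewport viewport;
          viewport.x        = 0.0f;
          viewport.y        = (float)extent.height;   /* start at the bottom edge...        */
          viewport.width    = (float)extent.width;
          viewport.height   = -(float)extent.height;  /* ...and flip with a negative height */
          viewport.minDepth = 0.0f;
          viewport.maxDepth = 1.0f;
          vkCmdSetViewport(cmd, 0, 1, &viewport);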
       

DX12 GPU support for D3D12_CROSS_NODE_SHARING_TIER


Recommended Posts

Hi, I'm looking to start a new side project that will leverage the new node-sharing capabilities of DX12. I came across this in the documentation: https://msdn.microsoft.com/en-us/library/windows/desktop/dn914408(v=vs.85).aspx. I tried to do some Google research to see which GPU architectures support D3D12_CROSS_NODE_SHARING_TIER_2, but came up empty-handed. Is this feature even supported by current GPUs?

 

Thanks.
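
For reference, a minimal sketch of querying the tier at runtime through CheckFeatureSupport, in plain C with COBJMACROS (the helper name is made up; device creation and error handling are omitted):

    #define COBJMACROS
    #include <Windows.h>
    #include <d3d12.h>

    /* Returns the cross-node sharing tier reported by the device, or
       D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED if the query fails. */
    static D3D12_CROSS_NODE_SHARING_TIER query_cross_node_tier(ID3D12Device *device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {0};
        if (FAILED(ID3D12Device_CheckFeatureSupport(device, D3D12_FEATURE_D3D12_OPTIONS,
                                                    &options, sizeof(options))))
            return D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED;
        return options.CrossNodeSharingTier;
    }

A return value of D3D12_CROSS_NODE_SHARING_TIER_2 means the driver reports the Tier 2 behavior described on that MSDN page.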


Well... GPU manufacturers tend to be sketchy. "Full" support can mean supporting only the lower tiers. I'd say that a GPU that is only D3D12_CROSS_NODE_SHARING_TIER_1_EMULATED may still claim DirectX 12 support. It would be nice to get a clear list of supported tier levels for the D3D12_FEATURE_DATA_D3D12_OPTIONS structure (https://msdn.microsoft.com/en-us/library/windows/desktop/Dn770364(v=VS.85).aspx). I've made the mistake in the past of buying a GPU that claimed full DX11.2 support, only to find that the features I wanted were available only at higher tier levels.


Shouldn't every GPU that fully supports DirectX 12 support all of its features?

I want a pony as well. GPUs aren't like CPUs, which are all the same.

GPUs are extremely different; some have architectures superior to others, some are better at certain tasks, others are better at other tasks.

Especially when you want existing GPU hardware to be able to run DX12 right now.

If you don't like that, then you can get out of graphics development in games, because this heterogeneity has been driving innovation for the last two decades.

 

To the OP:

There is a chart with tiers broken down by GPU.

Don't be fooled by it, though. A GPU may end up in Tier 1 because it doesn't support the X & Y features required for the higher tiers, but it turns out that, if it weren't for those, it would be considered Tier 3 (e.g. it may have features or precision that not even Tier 2 GPUs have).

Tiers only guarantee a minimum, not a maximum. You should look at the capabilities you can query via the D3D12 API.

Edited by Matias Goldberg
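
To illustrate that last point, a rough sketch of the kind of runtime check you would do before committing to a multi-node design (again plain C with COBJMACROS; device is assumed to be an already-created ID3D12Device*, the function name is made up, and the comments paraphrase the MSDN descriptions of each tier):

    #define COBJMACROS
    #include <Windows.h>
    #include <d3d12.h>

    static void check_multi_node_caps(ID3D12Device *device)
    {
        /* Number of physical adapters (nodes) behind this device. */
        UINT node_count = ID3D12Device_GetNodeCount(device);

        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {0};
        ID3D12Device_CheckFeatureSupport(device, D3D12_FEATURE_D3D12_OPTIONS,
                                         &options, sizeof(options));

        switch (options.CrossNodeSharingTier)
        {
        case D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED:
            break; /* single node, or no cross-node sharing at all */
        case D3D12_CROSS_NODE_SHARING_TIER_1_EMULATED:
            break; /* Tier 1 semantics, but the driver stages cross-node copies through system memory */
        case D3D12_CROSS_NODE_SHARING_TIER_1:
            break; /* cross-node copy operations are supported natively */
        case D3D12_CROSS_NODE_SHARING_TIER_2:
            break; /* cross-node resources can be used in more operations than just copies */
        }

        (void)node_count; /* spread work across nodes based on this count */
    }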


I guess so, DirectX 12 is still considered an unfinished product by Microsoft.

DX12 is finished, but drivers are not perfect yet. New drivers may enable features that are not available yet, not to mention that there are a bunch of issues with current drivers (at least I have issues with Nvidia and Intel, though different ones; I haven't tried AMD in a while).

However, Direct3D in general has always had optional features (the exception was DX10, and even then some formats were optional). Also, there is no GPU on the market that fully supports all DX12 features, but I don't see this as a big issue for the near future.
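
As a small example of those optional features, per-format capabilities also have to be queried at runtime rather than assumed from the feature level. A sketch, under the same C/COBJMACROS assumptions as above (device is an existing ID3D12Device*):

    /* Typed UAV loads are optional for many formats; ask the device per format. */
    D3D12_FEATURE_DATA_FORMAT_SUPPORT fmt = {0};
    fmt.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
    if (SUCCEEDED(ID3D12Device_CheckFeatureSupport(device, D3D12_FEATURE_FORMAT_SUPPORT,
                                                   &fmt, sizeof(fmt))) &&
        (fmt.Support2 & D3D12_FORMAT_SUPPORT2_UAV_TYPED_LOAD))
    {
        /* This device can do typed UAV loads from R16G16B16A16_FLOAT. */
    }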

 

We can expect some small presentation-mode fixes/changes, as the video above states (~November 2015), and probably ASTC support in the near future (Windows 10 Mobile?), but yes, DirectX 12 has been finalized.
