• Similar Content

• While working on a project using D3D12, I was getting an exception thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS. The following application reproduces the problem from the project.
#define COBJMACROS
#pragma warning(push, 3)
#include <Windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#pragma warning(pop)

IDXGIFactory4 *factory;
ID3D12Device *device;
ID3D12DescriptorHeap *rtv_heap;

int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
{
    (hinst), (pinst), (cline), (cshow);

    HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
    hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

    D3D12_DESCRIPTOR_HEAP_DESC desc;
    desc.NumDescriptors = 1;
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    desc.NodeMask = 0;
    hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

    D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
    (rtv);

    return 0;
}

The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
mov  qword ptr [rdx],rax
which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.

• By lubbe75
As far as I understand there is no real random or noise function in HLSL.
I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious?

• Hi,
I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer throws errors saying this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate in the vertex shader before writing it out, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that, because then I would need to track down every place in the engine where matrices are uploaded... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
• By NikiTo
Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.

if (color.A < 0.1f) {
    //discard;
    clip(-1);
}
// tons of reads of textures following here
// and loops too
Some people say that "discard" only masks out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

MSDN:
"discard: Do not output the result of the current pixel."

As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am most worried about are the texture fetches after discard/clip.

(What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? This is crazy.)
• By NikiTo
I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save any workload, as the discarded pixel has to stall until its long, huge neighbor shaders complete.
Initially I wanted to use the stencil buffer to discard pixels before the execution flow enters the shader, even before the GPU distributes/allocates resources for the shader, avoiding the stall of the pixel shader execution flow. I had assumed that the depth/stencil test discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render, say, a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the Output Merger anyway? Handling of stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it immediately start using the freed-up resources to render another pixel?

DX12: Multi Threading / Low-latency presentation



Hi,

Does anyone know if it is safe to call IDXGISwapChain3::Present() on one thread, while at the same time calling ID3D12CommandQueue::ExecuteCommandLists() on another thread?

I have an application that creates two windows, one for each monitor. I have two threads that render to these two windows simultaneously. Each thread has its own ID3D12Device, ID3D12CommandQueue, IDXGISwapChain3, everything... they are completely independent.

Yet the application hangs randomly. Below are the call stacks of the two threads when they hang:

ntdll.dll!NtWaitForAlertByThreadId()
ntdll.dll!RtlpWaitOnCriticalSection()
ntdll.dll!RtlpEnterCriticalSectionContended()
D3D12.dll!CCommandQueue<0>::ExecuteCommandLists(unsigned int,struct ID3D12CommandList * const *)
dxgi.dll!CD3D12Device::CloseAndSubmitCommandList(unsigned int,enum CD3D12Device::QueueType)
dxgi.dll!CD3D12Device::PresentExtended(struct DXGI_PRESENTSURFACE const *,struct IDXGIResource * const *,unsigned int,struct IDXGIResource *,void *,unsigned int,unsigned int,int *,unsigned int *)
dxgi.dll!CDXGISwapChain::FlipPresentToDWM(struct SPresentArgs const *,unsigned int,unsigned int,unsigned int &,unsigned int,struct tagRECT const *,struct DXGI_SCROLL_RECT const *,struct DXGI_INTERNAL_CONTENT_PROTECTION const &)
dxgi.dll!CDXGISwapChain::PresentImplCore(struct SPresentArgs const *,unsigned int,unsigned int,unsigned int,struct tagRECT const *,unsigned int,struct DXGI_SCROLL_RECT const *,struct IDXGIResource *,bool &,bool &,bool &)
dxgi.dll!CDXGISwapChain::Present(unsigned int,unsigned int)
MyApp.exe!gui::CDXProc::CJob::Present() Line 963    C++

ntdll.dll!RtlpWaitOnCriticalSection()
ntdll.dll!RtlpEnterCriticalSectionContended()
dxgi.dll!CDXGISwapChain::GetCurrentBackBufferIndex(void)
dxgi.dll!CDXGISwapChain::GetCurrentCommandQueue(struct _GUID const &,void * *)
D3D12.dll!CCommandQueue<0>::ExecuteCommandLists(unsigned int,struct ID3D12CommandList * const *)
MyApp.exe!gui::CDXProc::CJob::ExecuteCommandList(ID3D12CommandList * iCommandList) Line 1172    C++


As you can see, one thread is stuck in the Present() call, while the other is stuck inside ExecuteCommandLists().

I can get around the problem by putting a critical section around all calls to Present() and ExecuteCommandLists(), but I do not understand why this is necessary. Any ideas?

Edit: Changed the thread title to reflect the direction things are going.

The first argument to CreateSwapChain is your main commandQueue. I guess that Present makes use of this queue internally, which means that no other thread should be using that queue during a Present call.


The first argument to CreateSwapChain is your main commandQueue. I guess that Present makes use of this queue internally, which means that no other thread should be using that queue during a Present call.

Is ID3D12CommandQueue not free threaded? I thought the general idea is that multiple threads can create multiple ID3D12GraphicsCommandLists in parallel, and submit them in parallel to a single ID3D12CommandQueue?

But anyway, that is not what I am doing. I created two separate ID3D12CommandQueues. Or are they perhaps one and the same internally?


Is ID3D12CommandQueue not free threaded? I thought the general idea is that multiple threads can create multiple ID3D12GraphicsCommandLists in parallel, and submit them in parallel to a single ID3D12CommandQueue?

You're right: "Any thread may submit a command list to any command queue at any time, and the runtime will automatically serialize submission of the command list in the command queue while preserving the submission order." -- that sounds like the queue has an internal mutex that's acquired for you...

Perhaps there's a bug and Present fails to acquire this mutex? Hopefully someone with deeper knowledge of D3D12 can shed light on this...


That particular deadlock was discovered and fixed a while back, if I remember correctly. Make sure you're on the latest version of Windows 10.


That particular deadlock was discovered and fixed a while back, if I remember correctly. Make sure you're on the latest version of Windows 10.

I think this is something else - I am on Build 10586.494. Windows Update says: "Your device is up to date. Last checked: 2016/08/09, 00:35"

I narrowed the deadlock down to a ResourceBarrier I have that straddles VSync. I set a barrier from PRESENT to RENDER_TARGET directly *after* Present(), followed by a Signal()+SetEventOnCompletion()+WaitForSingleObject().

This is the only way I have been able to achieve "Direct Flip" latency. The usual method of waiting on a WAITABLE_OBJECT after Present() does not seem to work, because it does not matter if the window covers only a portion of the monitor or if it covers the entire monitor, I always get the same latency of about 34ms:
(The picture below is from an oscilloscope that I trigger when I start to render a new frame, and then measure how long it takes to see the change on the screen as picked up by a photo diode that I taped to the monitor.)

If instead I remove the wait on the WAITABLE_OBJECT and replace it with the wait on the resource barrier, I get the expected "Direct Flip" behaviour, with latency going down to 18ms (a 16ms reduction) when the window covers the entire screen:

This worked, but it is now causing the deadlock when I do the same thing on two screens simultaneously. I suppose I could go back to using waitable objects, but then I won't get the lower latency of "Direct Flip". Is there some other way of doing the timing?


Well, the Anniversary update was just released as 14393, so I'd recommend giving that one a shot first to see what's going on.

You can also try out PresentMon as a software technique for measuring latency. It'll also tell you whether you're in independent flip or getting composed. You might just be using the waitable object incorrectly while trying to get low latency, but that is absolutely our recommended way of controlling your latency, even in D3D12. For example, are you waiting on the object before your first frame? If not, you'll end up with latency that's one frame higher than you'd want, even in independent flip mode.


Well, the Anniversary update was just released as 14393, so I'd recommend giving that one a shot first to see what's going on.

You can also try out PresentMon as a software technique for measuring latency. It'll also tell you whether you're in independent flip or getting composed. You might just be using the waitable object incorrectly while trying to get low latency, but that is absolutely our recommended way of controlling your latency, even in D3D12. For example, are you waiting on the object before your first frame? If not, you'll end up with latency that's one frame higher than you'd want, even in independent flip mode.

According to PresentMon everything is fine, but in reality (when measuring the light coming out of the screen), all is not as it seems. I did the following four tests:
Test 1: Using a "waitable object" on a non-fullscreen window.
Test 2: Using a "waitable object" on a fullscreen window.
Test 3: Using a "wait on barrier" on a non-fullscreen window.
Test 4: Using a "wait on barrier" on a fullscreen window.

Below is the output from PresentMon: (I added a column on the far right with the actual latency as measured using an oscilloscope.)

      Runtime SyncInterval AllowsTearing PresentFlags PresentMode                Dropped TimeInSeconds MsBetweenPresents MsBetweenDisplayChange MsInPresentAPI MsUntilRenderComplete MsUntilDisplayed Measured Latency
------- ------------ ------------- ------------ -----------                ------- ------------- ----------------- ---------------------- -------------- --------------------- ---------------- ----------------
Test 1:
DXGI    1            0             64           Composed: Flip             0       4.134419      16.581            16.756                 0.488          0.429                 32.617           35
DXGI    1            0             64           Composed: Flip             0       4.151078      16.659            16.605                 0.506          0.5                   32.563           35
DXGI    1            0             64           Composed: Flip             0       4.167767      16.689            16.673                 0.39           0.512                 32.547           35
Test 2:
DXGI    1            0             64           Hardware: Independent Flip 0       4.396671      16.611            16.717                 0.466          0.426                 16.311           35
DXGI    1            0             64           Hardware: Independent Flip 0       4.413443      16.772            16.648                 0.382          0.396                 16.187           35
DXGI    1            0             64           Hardware: Independent Flip 0       4.430011      16.568            16.734                 0.397          0.41                  16.353           35
Test 3:
DXGI    1            0             64           Composed: Flip             0       2.242991      16.301            16.689                 0.371          0.431                 32.319           35
DXGI    1            0             64           Composed: Flip             0       2.259456      16.465            16.67                  0.376          0.347                 32.524           35
DXGI    1            0             64           Composed: Flip             0       2.276224      16.768            16.694                 0.359          0.434                 32.45            35
Test 4:
DXGI    1            0             64           Hardware: Independent Flip 0       3.005195      16.478            16.696                 0.43           0.447                 15.927           19
DXGI    1            0             64           Hardware: Independent Flip 0       3.021999      16.804            16.679                 0.387          0.394                 15.802           19
DXGI    1            0             64           Hardware: Independent Flip 0       3.038641      16.642            16.72                  0.383          0.391                 15.88            19


Note that in Test 2 (i.e. using a waitable object on a fullscreen window) there is a 16ms discrepancy between what PresentMon says the latency is and what is measured in hardware.

It might be that I am doing something wrong with the waitable object - I will keep looking... I will also try the Windows 10 upgrade.


Well, the Anniversary update was just released as 14393, so I'd recommend giving that one a shot first to see what's going on.

You can also try out PresentMon as a software technique for measuring latency. It'll also tell you whether you're in independent flip or getting composed. You might just be using the waitable object incorrectly while trying to get low latency, but that is absolutely our recommended way of controlling your latency, even in D3D12. For example, are you waiting on the object before your first frame? If not, you'll end up with latency that's one frame higher than you'd want, even in independent flip mode.

Sorry, one more thing... I have watched your "Presentation Modes" video about 101 times, but I still don't understand the difference between "Independent Flip" and "True Immediate Independent Flip". Can you perhaps point me to some additional information on the "True Immediate Independent Flip" mode, and on the conditions under which it becomes active?

Also, you mentioned in the video that with a DXGI_SWAP_EFFECT_FLIP_DISCARD backbuffer, DXGI will render things like the volume control directly on my backbuffer while staying in Independent Flip mode. I have not been able to reproduce this behaviour: as soon as the volume control comes up, I can see that DXGI adds an extra frame of latency... or am I missing something?


True immediate independent flip is engaged either by calling SetFullscreenState with TRUE (Win32 only, not recommended), or using the new DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING and DXGI_PRESENT_ALLOW_TEARING. When independent flip is entered and sync interval is 0, the flip will happen as soon as rendering is complete.

The FLIP_SEQUENTIAL and FLIP_DISCARD swap effects allow seamless transitions between independent flip and composition. It is also possible that, on systems with hardware composition support (e.g. multiple hardware overlay planes), things like the volume control can be rendered without dropping back to software composition and adding back the latency.

The PresentMon data looks like what I'd expect. Are you sure that case 2 has data that looks like that at the same time as your monitor latency test? Note that it's possible that independent flip didn't properly engage 100% of the time, but if your results were consistent then it's probably not that.