lubbe75

DX12 Problem rendering to multiple windows (SharpDX / DX12)


I am having a problem rendering to multiple (two) windows using SharpDX and DX12. I am setting up two swap chains etc., and it almost works. The symptom is that both windows show sporadic flickering: one window's transformation appears to leak into the other every now and then (it goes both ways, though not necessarily at the same time). Rendering is triggered by user input, not a render loop, and the flickering happens mostly when there is lots of input (like panning quickly). When I deliberately slow things down (for instance by printing debug info to Output) it looks fine.

At the moment I have this level of separation:

The application has: one device, the bundles, and all the resources such as vertex, texture and constant buffers.

Each view has: a swap chain, render target, viewport, fence, command list, command allocator and command queue.

(I have also tried using a single command list, allocator, queue and fence, but it makes no difference to the flickering.)

The rendering process is quite straightforward. One of the windows requests a re-render, and the other window's requests are ignored until the first is done:

  1. Update the transformation matrix based on the window's parameters (size, center position, etc.) and write it to the constant buffer:

     IntPtr pointer = constantBuffer.Map(0);
     Utilities.Write<Transform>(pointer, ref transform);
     constantBuffer.Unmap(0);

  2. Reset, populate and close the window's command list (populating here means setting its render target and viewport, executing the bundles, etc.).

  3. Execute the window's command list on its command queue.

  4. Present the window's swap chain.

  5. Wait for the window's command queue to finish rendering (using its fence).

  6. Reset the window's command allocator.

I really believe that since both windows write and read the same constant buffer for the transformation, the first window's transformation is sometimes still there when the other window is rendering. But I don't understand why. The write to the buffer happens when the code in step 1 is executed... right? And the GPU reads the buffer when the command list is executed in step 3... right? At the very least it must have been read before reaching step 6. Am I right? Then how can it sometimes read the wrong transformation?
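If the shared buffer really is the problem, one way I can think of to rule it out is to give each window its own 256-byte-aligned slot in the upload buffer, so one window's write can never clobber the other's (a sketch; windowIndex and the slot layout are assumptions for illustration):

```csharp
// Sketch: one upload buffer holding a 256-byte-aligned slot per window
// (D3D12 requires constant buffer views to be 256-byte aligned).
const int SlotSize = 256;

IntPtr pointer = constantBuffer.Map(0);
Utilities.Write(IntPtr.Add(pointer, windowIndex * SlotSize), ref transform);
constantBuffer.Unmap(0);

// Each window's CBV (or root CBV GPU address) then points at its own slot:
//   constantBuffer.GPUVirtualAddress + windowIndex * SlotSize
```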

Has anyone tripped over a similar problem before? What was the cause and how did you fix it?


I think I finally got it working. My mistake was somewhere in step 5; I always mess up the fence-waiting part. I went back to the simple one allocator (per window) instead of two, following the hello-world examples... and it works. Which goes to show that I still don't fully understand the routine for working with two allocators.
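For reference, the fence wait in step 5 can be sketched like this (SharpDX.Direct3D12; the WaitForGpu name, the per-view fenceValue counter and the AutoResetEvent are my own choices, not from the samples):

```csharp
// Sketch of a per-window wait-for-GPU, assuming one fence, one event and one
// monotonically increasing fence value per view.
long fenceValue;                                  // last value signalled on this view's queue
AutoResetEvent fenceEvent = new AutoResetEvent(false);

void WaitForGpu(CommandQueue queue, Fence fence)
{
    long valueToWait = ++fenceValue;
    queue.Signal(fence, valueToWait);             // GPU sets the fence to this value when all prior work is done
    if (fence.CompletedValue < valueToWait)       // only block if the GPU hasn't got there yet
    {
        fence.SetEventOnCompletion(valueToWait, fenceEvent.SafeWaitHandle.DangerousGetHandle());
        fenceEvent.WaitOne();                     // safe to reset the allocator after this returns
    }
}
```

Skipping or misordering the Signal/SetEventOnCompletion pair means the allocator can be reset (and the constant buffer rewritten) while the GPU is still reading, which matches the flicker I was seeing.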

Anyway, I still wonder what makes the most sense when rendering to multiple windows. Should each window have its own allocator, queue and command list, or should these be central to the application? When porting a hello-world example to multiple windows, I guess it's easier to keep all of those things per view.


