LowLatencyGuy

Member
  • Content Count

    10
Community Reputation

117 Neutral

About LowLatencyGuy

  • Rank
    Member

Personal Information

  • Interests
    programmer


  1. Bottom line: we had good support from Microsoft over the last few weeks, but we eventually gave up on the DX12 waitable swap chain approach because it adds one extra frame of latency. See Jesse's comment in the previous post: "it does look like there's an off-by-one in the frame latency waitable object". For our multi-GPU / multi-head application we have started testing on DX11 instead, and the first results look good.
  2. The problem seems to be solved in Windows 10 version 10.0.15043 (Insider Preview, February 2017) with NVIDIA driver version 378.66.
  3. The 32 ms I was referring to on Windows 7 (full screen / DWM disabled) includes the latency added by the display itself. So it seems we get one additional frame of latency on Windows 10 compared to Windows 7. Has anybody succeeded in getting down to one frame of latency using a waitable swap chain on Windows 10?
  4. I tried GetFrameStatistics, but the struct it returns contains only zeroes, so the present-stats trick we use on DX9 does not work on DX12. I tried DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL as well as DXGI_SWAP_EFFECT_FLIP_DISCARD.

     I am using the CreateSwapChain API; maybe I should use CreateSwapChainForHwnd instead? (Sketches of the CreateSwapChainForHwnd creation path and of the DXGI present-stats check are included after this post list.)

     I also created a version of the program with v-sync disabled. It tears, but I would like to see what the latency is when measuring with the light sensor. I will pick this up tomorrow when I am back at the office.

     PS: What do you mean by "Unfortunately it looks like the waitable object may not be working properly"? A bug in my program, or in the OS/driver?
  5. Hi Jesse,

     I have some output from PresentMon below; my own measurements are done using a light sensor taped to the screen. The last column (MsUntilDisplayed) shows about 32 ms. Is that what to expect, or should it be on the order of 16 ms?

     I have not (yet) worked with GPUView. Is PresentMon up to the job, or should I invest in GPUView?

     Application,ProcessID,SwapChainAddress,Runtime,SyncInterval,AllowsTearing,PresentFlags,PresentMode,Dropped,TimeInSeconds,MsBetweenPresents,MsBetweenDisplayChange,MsInPresentAPI,MsUntilRenderComplete,MsUntilDisplayed
     Dx12LatencyTest1.exe,312,0x0000000006FB2D40,DXGI,1,0,0,Hardware: Independent Flip,0,0.025946,16.646,16.666,0.143,16.589,32.635
     Dx12LatencyTest1.exe,312,0x0000000006FB2D40,DXGI,1,0,0,Hardware: Independent Flip,0,0.042615,16.670,16.666,0.140,16.566,32.631
     Dx12LatencyTest1.exe,312,0x0000000006FB2D40,DXGI,1,0,0,Hardware: Independent Flip,0,0.059262,16.647,16.666,0.139,16.614,32.651
  6. Hi,

     Thanks for the quick response. About the SetMaximumFrameLatency API, MSDN states: "Sets the number of frames that the system is allowed to queue for rendering. ... The maximum number of back buffer frames that a driver can queue."

     If I have two buffers (a front and a back buffer in a full screen swap chain), with the swap chain's buffer count set to 2, how does this SetMaximumFrameLatency queue relate to those buffers? (A small sketch of how I read the two settings is included after this post list.)

     I am trying to understand where in the present chain we have queues, since queues add latency :-)

     Regards, TF
  7. Hi everyone,

     We are investigating how to port our application from Windows 7 to Windows 10. The requirements are: 60 fps, low latency, and no tearing on Windows 10 using DX12. Currently we use a DX9Ex full screen application on Windows 7 with DWM disabled. We have 2 GPUs with 2 heads each, but for this discussion I want to stick to a single GPU using one head.

     Configuration:
     - Windows 10 version 10.0.14393
     - NVIDIA driver 376.63
     - NVIDIA K600 graphics card

     Our current measurements (using a light sensor and a scope) show that we have one additional vsync of latency on Windows 10 / DX12 compared to our DX9 full screen solution. We measure around 50 ms (Windows 7: 32 ms), so one additional frame.

     The main question is of course what settings we should use to get the best result (full screen or windowed, etc.). Based on the information we have, our initial approach was a full screen waitable swap chain where we render each time the waitable object is signaled (so a buffer is free); a sketch of that render loop is included after this post list. Our measurements do not show the expected result. Why?

     We have been watching the video from Jesse Natalie about flip modes but still have some questions:

     Q1: In the video Jesse talks about windowed mode and full screen. At 13:20 into the video he states that the best option for low latency is a full screen swap chain OR a waitable swap chain. Does this imply that a waitable swap chain cannot be a full screen swap chain? In other words, can one not use a waitable object on a full screen swap chain to check whether a buffer is free (so that the next Present call does not block)?

     Q2: The video suggests (to my understanding) that there are two queues involved in a swap chain:
     a) The present queue (Present blocks when this queue is full). The size of this queue is determined by the buffer count of the swap chain. How does this relate to the SetMaximumFrameLatency setting?
     b) The number of frames completed on the GPU that still need to be displayed. How can one control this?
     I am not sure how this works; can anybody explain it in more detail?

     Q3: Some of the flip modes are not exposed by the API, and the system itself switches between flip, d-flip and i-flip (at 26:00). It is important for us to be in control; we do not want the latency to change due to some mechanism in the OS. Is it therefore better to use the full screen APIs?

     Q4: The video discusses all kinds of flip modes. At 35:22 it is mentioned that the system switches to windowed immediate iflip when using a DX12 swap chain in full screen. So is there still a difference, latency-wise, between a borderless window covering the whole screen and a swap chain set to full screen?

     Any feedback is welcome.
     Regards, TF
  8. Hi, I have stripped down the test app somewhat more and tested it again. I still have a repro, so if you can give it a shot that would be great. I attached the code to this post.

     I also tested on an older version of Windows 10 (version 10.0.10240 + NVIDIA driver 368.39). On that configuration the program does NOT work: I get a black screen when using 1 GPU with two heads. When I run it with 1 GPU / 1 head the program does work. I upgraded the driver to version 376.62, but this did not solve the problem.

     On Windows 10 version 10.0.14393 (see the initial post) the program does run using 1 or 2 heads, but the Present API gives a problem.

     PS: We are also working on a DX12 test app. We have it working, but it shows 16 ms of additional latency compared to DX9Ex. I will create a separate post for this today or tomorrow after some more testing and re-watching your videos on flip modes :-)

     Regards, TF
  9. Hi Jesse,

     I can try an older version of Windows 10; I will do so first thing tomorrow when I am back at the office. I do not have access to Windows 8(.1). I can provide you with the test program so you can take a look.

     Note that latency is critical for us because our application relies on hand-eye coordination: the user looks at the screen while positioning a device. We use 2 GPUs, each with 2 heads. I am measuring the latency using a light sensor (similar to what is described in this post).

     What we do is check, for each frame, whether the previous frame has already been presented to the screen; if not, we skip the frame. Our render loop does this for each display. By preventing frames from getting buffered we minimize the latency.

     Note that we are running full screen. Furthermore, on Windows 7 we disabled the compositor (DWM). On Windows 10 it is not possible to disable the DWM, correct?

         bool DX9Screen::isPresented() const
         {
             bool result = false;
             IDirect3DSwapChain9Ex *swap = nullptr;
             IDirect3DSwapChain9 *swapChain = nullptr;

             if (FAILED(m_device->GetSwapChain(m_screen, &swapChain)))
                 throw "error: doRender GetSwapChain failed";
             if (FAILED(swapChain->QueryInterface(__uuidof(IDirect3DSwapChain9Ex), (void**)&swap)))
                 throw "error: QueryInterface IDirect3DSwapChain9Ex";

             D3DPRESENTSTATS stat;
             UINT id;
             HRESULT hr = swap->GetPresentStats(&stat);
             checkHResult("GetPresentStats", hr);
             hr = swap->GetLastPresentCount(&id);
             checkHResult("GetLastPresentCount", hr);

             // We use present statistics to figure out whether the last frame has actually
             // been sent to the screen. This allows us to minimize latency; if we rendered
             // while the last frame is still in an internal queue, we would introduce latency.
             if ((stat.PresentCount == id) || (m_lastPresent == 0))
                 result = true;
             else if (stat.PresentCount == 0)
             {
                 trace("WARNING: isPresented stat.PresentCount not valid (or wrapped around)");
                 result = true;
             }

             if (swap)
                 swap->Release();
             if (swapChain)
                 swapChain->Release();

             return result;
         }

     If you need to see more of the code, that is not a problem. I have a relatively small test app which I use to measure.

     Thanks in advance,
     TF
  10. Hi,

     We have a DX9Ex application that uses multiple GPUs, each with 2 heads (NVIDIA K600). This application runs fine on Windows 7, but we are experiencing problems on Windows 10.

     The application renders to 4 displays. It uses the following DX9Ex methods to determine whether a frame (for a certain swap chain) has actually been displayed:

         IDirect3DSwapChain9Ex::GetPresentStats(&stat);
         IDirect3DSwapChain9Ex::GetLastPresentCount(&id);

     When (stat.PresentCount == id) we render the next frame, to prevent buffering from increasing the latency.

     The application does run on Windows 10 with one or two GPUs, but there is a problem when we use the second head on each of the GPUs. In other words, the APIs do work when running without the second heads (it works when using 2 GPUs with one head each). With the second heads enabled, the results returned by GetPresentStats and GetLastPresentCount no longer seem to make sense (note that the return values are not zero). The check (stat.PresentCount == id) fails and our rendering loop fails.

     Can anybody shed some light on this?

     Configuration:
     - Windows 10 version 10.0.14393
     - DX9Ex
     - NVIDIA driver 376.63
     - 2 x NVIDIA K600 graphics cards
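Sketch for post 4: a minimal, untested outline of creating the DX12 swap chain through CreateSwapChainForHwnd (instead of CreateSwapChain) with the frame-latency waitable object enabled. The factory, command queue and window handle are assumed to already exist; the names here are placeholders, not taken from the test app. As far as I can tell the waitable-object flag is meant for windowed / borderless-window swap chains and cannot be combined with exclusive full screen.

    #include <dxgi1_4.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Creates a flip-model swap chain with the frame-latency waitable object.
    // Returns nullptr on failure.
    ComPtr<IDXGISwapChain3> CreateWaitableSwapChain(IDXGIFactory4* factory,
                                                    ID3D12CommandQueue* queue,
                                                    HWND hwnd)
    {
        DXGI_SWAP_CHAIN_DESC1 desc = {};
        desc.Width = 0;                                  // 0 = take the size from the window
        desc.Height = 0;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount = 2;                            // flip model needs at least 2 buffers
        desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
        desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

        ComPtr<IDXGISwapChain1> swapChain1;
        // For D3D12 the first argument is the command queue, not the device.
        if (FAILED(factory->CreateSwapChainForHwnd(queue, hwnd, &desc,
                                                   nullptr, nullptr, &swapChain1)))
            return nullptr;

        ComPtr<IDXGISwapChain3> swapChain3;
        swapChain1.As(&swapChain3);
        return swapChain3;
    }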
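Also for post 4: what I expected the DX9 present-stats trick to look like on DXGI (this is the call pattern that returns zeroes for me). The comparison assumes the DXGI counters behave like the DX9Ex ones, so treat it as a sketch rather than a verified approach.

    #include <dxgi1_4.h>

    // Returns true when the last queued Present appears to have reached the screen,
    // mirroring the DX9Ex PresentCount / GetLastPresentCount check from the other thread.
    bool IsLastPresentDisplayed(IDXGISwapChain3* swapChain)
    {
        DXGI_FRAME_STATISTICS stats = {};
        UINT lastPresentId = 0;

        HRESULT hr = swapChain->GetFrameStatistics(&stats);
        if (hr == DXGI_ERROR_FRAME_STATISTICS_DISJOINT)
            return true;    // statistics were reset (mode change etc.); treat as "go ahead"
        if (FAILED(hr))
            return true;    // no statistics available in this mode

        if (FAILED(swapChain->GetLastPresentCount(&lastPresentId)))
            return true;

        // When the present that was displayed equals the last present we queued,
        // nothing is waiting in an internal queue and we can render the next frame.
        return (stats.PresentCount == lastPresentId) || (stats.PresentCount == 0);
    }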
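For post 6: how I currently read the two settings, written out as a sketch. BufferCount is fixed at swap chain creation and says how many back buffers the swap chain owns; SetMaximumFrameLatency limits how many presented-but-not-yet-displayed frames may pile up before DXGI throttles the CPU. With the waitable-object flag the per-swap-chain call below is the one that applies (without that flag, IDXGIDevice1::SetMaximumFrameLatency would be the DX11-era knob). This is my interpretation, not an authoritative answer.

    #include <dxgi1_3.h>

    // swapChain is assumed to have been created with BufferCount = 2 and
    // DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT.
    void ConfigureFrameLatency(IDXGISwapChain2* swapChain)
    {
        // Allow at most one queued frame: the CPU may prepare frame N+1
        // while frame N is still waiting to be scanned out, but no more.
        swapChain->SetMaximumFrameLatency(1);
    }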
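For post 7: the render loop we had in mind, written out as a sketch (untested, error handling and the actual D3D12 command-list work omitted). It assumes the waitable flip-model swap chain from the creation sketch above, with the maximum frame latency set to 1.

    #include <dxgi1_3.h>
    #include <windows.h>

    void RenderLoop(IDXGISwapChain2* swapChain, volatile bool& keepRunning)
    {
        HANDLE frameWaitable = swapChain->GetFrameLatencyWaitableObject();

        while (keepRunning)
        {
            // Block until the swap chain is ready for a new frame; rendering any earlier
            // would only let frames queue up and add latency.
            WaitForSingleObjectEx(frameWaitable, 1000, TRUE);

            // ... build and submit the command lists for this frame here ...

            // Sync interval 1: present on the next vblank, no tearing at 60 Hz.
            swapChain->Present(1, 0);
        }
    }

The idea is that, with two buffers and a frame-latency limit of 1, the wait keeps the CPU from ever queueing more than one frame ahead, which is the behaviour we are trying to verify with the light sensor.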