DX12 and threading

Recommended Posts

Being new to DirectX 12, I am looking for examples of how to use threading. I have done lots of OpenGL in the past and some DirectX, but with DX12 the threading magic is gone, and I understand that threading is crucial to get good performance. In my project I currently have one thread doing it all. I have one command list, one command allocator, one bundle and one bundle allocator. I also have a million triangles, so it's about time that I start doing this.

How do I split things up? How many threads should I use? How many command lists and allocators?

I realize this is a beginner's question, but I have to begin somewhere. I would be grateful if someone could point me in a direction where I could find a simple code sample, tutorial or something similar. Thanks!

5 hours ago, lubbe75 said:

I also have a million triangles, so it's about time that I start doing this.

The number of triangles is irrelevant to the CPU; what matters is how many draw calls you have. If it's thousands, you may get some benefit from using multiple threads to record the draw commands. In my experience, with less than around a thousand draws, there's not much benefit in threaded draw submission.

5 hours ago, lubbe75 said:

How many threads should I use?

Most engines these days make a pool of one thread per CPU core, and then split all of their workloads up amongst that pool. So on a quad core, I'd use a max of 4 threads, and as above, also no more than around (draws/1000)+1 threads. 
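That heuristic can be written down in a few lines. This is just a sketch of the rule of thumb above (the function name and the exact constants are illustrative, not from any particular engine):

```cpp
#include <algorithm>

// Rule of thumb from the advice above: use one recording thread per CPU core,
// but no more than roughly (drawCalls / 1000) + 1 threads, since below ~1000
// draws per thread the overhead outweighs the benefit.
int RecordingThreadCount(int drawCalls, int coreCount)
{
    int byDraws = drawCalls / 1000 + 1;
    return std::min(coreCount, byDraws);
}
```

For example, on a quad core with 2500 draw calls this gives min(4, 3) = 3 recording threads; with only 500 draws it falls back to a single thread.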


We have a work-stealing task scheduler that spawns one thread for every core on the CPU (minus one for the main thread). Then we create a bunch of tasks for groups of draw calls and throw them at the task scheduler. We've tried both one thread per logical core (Intel CPUs with hyperthreading have two logical cores for every physical core) as well as one thread per physical core, and we've generally found that running our task scheduler threads on both logical cores is somewhat counterproductive. But your mileage may vary. AMD has some code here that can show you how to query the relevant CPU information.

Writing your own task scheduler can be quite a bit of work (especially fixing all of the bugs!), but it can also be very educational. There's a pretty good series of articles here that can get you started. There are also third-party libraries like Intel's Threading Building Blocks (which is very comprehensive, but also a bit complex and very heavyweight), or Doug Binks' enkiTS (which is simple and lightweight, but doesn't have fancier high-level features). Windows also has a built-in thread pool API, but I've never used it myself so I can't really vouch for its effectiveness in a game engine scenario.

My general advice for starting on multithreaded programming is to carefully plan out which data will be touched by each separate task. IMO the easiest (and fastest!) way to have multiple threads work effectively is to make sure that they never touch the same data, or at least do so as infrequently as possible. If you have lots of shared data, things can get messy, slow, and error-prone very quickly once you have to manually wrap everything in critical sections. Also keep in mind that *reading* data from multiple threads is generally fine; it's *writing* to the same data that usually gets you in trouble. So it can help to figure out exactly which data is immutable during a particular phase of execution, and perhaps also enforce that through judicious use of the "const" keyword.
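A minimal sketch of that "never write to the same data" idea, with std::thread standing in for a task scheduler and a plain integer transform standing in for command recording (all names here are illustrative): each worker writes only to its own disjoint slice of the output array, so no locks or critical sections are needed.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Split the draw list into disjoint chunks, one per worker. Every worker
// reads the shared (immutable) input and writes only its own output range,
// so the threads never write to the same data and need no synchronization.
void RecordAll(const std::vector<int>& draws, std::vector<int>& out, int numThreads)
{
    out.resize(draws.size());
    std::vector<std::thread> workers;
    size_t chunk = (draws.size() + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t)
    {
        size_t begin = t * chunk;
        size_t end = std::min(draws.size(), begin + chunk);
        workers.emplace_back([&, begin, end]() {
            for (size_t i = begin; i < end; ++i)
                out[i] = draws[i] * 2; // stand-in for "record draw call i"
        });
    }
    for (auto& w : workers)
        w.join();
}
```

In a real renderer each slice would be recorded into its own command list with its own allocator, since command allocators are not thread-safe either.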


Thanks for the tips and the links! 

After reading a bit more I get the idea that threading is mainly for recording command lists. Is this correct? Would this also include executing command lists?

Before adding threads, will I benefit at all from using multiple command lists, command allocators or command queues?

I have read somewhere that using multiple command allocators can increase performance since I may not have to wait as often before recording the next frame. I guess it's a matter of experimenting with the number of allocators that would be needed in my case.

Would using multiple command lists or multiple command queues have the same effect as using multiple allocators, or will this only make sense with multi-threading? 

I'm currently at a stage where my DX9 renderer is about 20 times faster than my DX12 renderer, so I'm guessing it's mainly multi-threading that is missing. Do you know of any other obvious and common beginner mistakes when starting with DX12?



Before messing around with threading, one thing you'll want to do is make sure that the CPU and GPU are working in parallel. When starting out with DX12, you'll probably have things set up like this:

Record command list for frame 0 -> submit command list for frame 0 -> wait for GPU to process frame 0 (by waiting on a fence) -> record command list for frame 1

If you do it this way the GPU will be idle while the CPU is doing work, and the CPU will be idle while the GPU is doing work. To make sure that the CPU and GPU are pipelined (both working at the same time), you need to do it like this:

Record command list for frame 0 -> submit command list for frame 0 -> record command list for frame 1 -> submit command list for frame 1 -> wait for the GPU to finish frame 0 -> record command list for frame 2

With this setup the GPU will effectively be a frame behind the CPU, but your overall throughput (framerate) will be higher since the CPU and GPU will be working concurrently instead of in lockstep. The big catch is that since the CPU is preparing the next frame while the GPU is actively processing commands, you need to be careful not to modify things that the GPU is reading from. This is where the "multiple command allocators" thing comes in: if you switch back and forth between two allocators, you'll always be modifying one command allocator while the GPU is reading from the other one. The same concept applies to things like constant buffers that are written to by the CPU.
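The allocator ping-pong can be modeled with nothing but a frame counter. The following is a minimal sketch of that bookkeeping (names like kNumAllocators are mine, and plain integers stand in for real allocators and fence values):

```cpp
#include <cstdint>

// With N allocators, frame F records into allocator F % N. With N == 2 the
// CPU always writes one allocator while the GPU reads from the other, so the
// CPU can run one frame ahead without stomping commands the GPU is executing.
constexpr uint32_t kNumAllocators = 2;

uint32_t AllocatorIndexForFrame(uint64_t cpuFrame)
{
    return static_cast<uint32_t>(cpuFrame % kNumAllocators);
}

// The fence rule described above: before recording frame F (0-based), the
// GPU must already have completed frame F - N. The first N frames have no
// predecessor using their allocator, so no wait is needed.
bool MustWaitBeforeRecording(uint64_t frame, uint64_t gpuFramesCompleted)
{
    if (frame < kNumAllocators)
        return false;
    return gpuFramesCompleted < frame - kNumAllocators + 1;
}
```

So with two allocators, recording frame 2 requires frame 0 to be finished, frame 3 requires frame 1, and so on; the CPU only stalls when it gets more than one full frame ahead.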

Once you've got that working, you can look into splitting things up into multiple command lists that are recorded by multiple threads. Without multiple threads there's no reason to have more than one command list unless you're also submitting to multiple queues. Multi-queue is quite complicated, and is definitely an advanced topic. COPY queues are generally useful for initializing resources like textures. COMPUTE queues can be useful for GPUs that support concurrently processing compute commands alongside graphics commands, which can result in higher overall throughput in certain scenarios. They can also be useful for cases where the compute work is completely independent of your graphics work, and therefore doesn't need to be synchronized with your graphics commands.

On 12/8/2017 at 5:13 AM, lubbe75 said:

After reading a bit more I get the idea that threading is mainly for recording command lists. Is this correct? Would this also include executing command lists?

Before adding threads, will I benefit anything from using multiple command lists, command allocators or command queues?

Read through this document; it should answer your questions.



Thanks for that link, Infinisearch!

MJP, I have tried what you suggested, but I got poorer results compared to the straightforward one-allocator method. Here is what I tried:

After initializing, setting frameIndex to 0, and resetting commandList with allocator 0, I run the following loop (pseudo-code):

populate commandList;
execute commandList;
reset commandList (using allocator[frameIndex]);
present the frame;
frameIndex = swapChain.CurrentBackBufferIndex; // 0 -> 1, 1 -> 0
if (frameIndex == 1)
{
    // set the fence after frames 0, 2, 4, 6, 8, ...
    commandQueue.Signal(fence, fenceValue);
    // wait for the fence after frames 1, 3, 5, 7, 9, ...
    int currentFence = fenceValue;
    if (fence.CompletedValue < currentFence)
        fence.SetEventOnCompletion(currentFence, fenceEvent.SafeWaitHandle.DangerousGetHandle());
}

Have I understood the idea correctly (I think I have)? Or perhaps something here gets done in the wrong order?




That's not quite what I meant. You'll still want to signal your fence and wait on it every frame, you just need to wait on the value one frame later. The first frame you don't need to wait because there was no "previous" frame, but you do need to wait for every frame after that. Here's what my code looks like, minus a few things that aren't relevant:

void EndFrame(IDXGISwapChain4* swapChain, uint32 syncIntervals)
{
    ID3D12CommandList* commandLists[] = { CmdList };
    GfxQueue->ExecuteCommandLists(ArraySize_(commandLists), commandLists);

    // Present the frame.
    DXCall(swapChain->Present(syncIntervals, syncIntervals == 0 ? DXGI_PRESENT_ALLOW_TEARING : 0));

    // Signal the fence with the current frame number, so that we can check back on it
    FrameFence.Signal(GfxQueue, CurrentCPUFrame);

    // Wait for the GPU to catch up before we stomp an executing command buffer
    const uint64 gpuLag = DX12::CurrentCPUFrame - DX12::CurrentGPUFrame;
    Assert_(gpuLag <= DX12::RenderLatency);
    if (gpuLag >= DX12::RenderLatency)
    {
        // Make sure that the previous frame is finished
        FrameFence.Wait(DX12::CurrentGPUFrame + 1);
    }

    CurrFrameIdx = DX12::CurrentCPUFrame % NumCmdAllocators;

    // Prepare the command buffers to be used for the next frame
    DXCall(CmdList->Reset(CmdAllocators[CurrFrameIdx], nullptr));
}


13 hours ago, MJP said:

That's not quite what I meant. You'll still want to signal your fence and wait on it every frame, you just need to wait on the value one frame later. The first frame you don't need to wait because there was no "previous" frame, but you do need to wait for every frame after that. Here's what my code looks like, minus a few things that aren't relevant:

MJP, I didn't look at the linked code, but do you do anything for frame pacing in the full code? I see that gamers on the internet complain about frame pacing quite a lot when they perceive issues with it. Your code snippet above would render a certain number of frames on the CPU as fast as possible and then wait for the GPU to catch up. Wouldn't this lead to jerkiness in the input sampling and simulation? Would you just add some timer code to the above to delay the next iteration of the game loop if necessary? Or is it more complex?


The code that I posted will let the CPU get no more than 1 frame ahead of the GPU. After the CPU submits command lists to the direct queue, it waits for the previous GPU frame to finish. So if the GPU is taking more time to complete a frame than the CPU is (or if VSYNC is enabled), the CPU will be effectively throttled by the fence and will stay tied to the GPU's effective framerate.

In my experience, frame pacing issues usually come from situations where the time delta being used for updating the game's simulation doesn't match the rate at which frames are actually presented on the screen. This can happen very easily if you use the length of the previous frame as your delta for the next frame. When you do this, you're basically saying "I expect the next frame to take just as long to update and render as the previous frame". This assumption will hold when you're locked at a steady framerate (usually due to VSYNC), but if your framerate is erratic then you will likely have mismatches between your simulation time delta and the actual frame time. It can be especially bad when missing VSYNC, since your frame times may go from 16.6ms up to 33.3ms, and perhaps oscillate back and forth.

I would probably suggest the following for mitigating this issue:

  1. Enable VSYNC, and never miss a frame! This will give you 100% smooth results, but obviously it's much easier said than done.
  2. Detect when you're not making VSYNC, and increase the sync interval to 2. This will effectively halve your framerate (for instance, you'll go from 60Hz to 30Hz on a 60Hz display), but that may be preferable to "mostly" making full framerate with frequent dips.
  3. Alternatively, disable VSYNC when you're not quite making it. This is common on consoles, where you have the ability to do this much better than you do on PC. It's good for when you're just barely missing your VSYNC rate, since in that case most of the screen will still get updated at full rate (however there will be a horizontal tear line). It will also keep you from dropping to half the VSYNC rate, which will reduce the error in your time delta assumption.
  4. Triple buffering can also give you similar results to disabling VSYNC, but it also prevents tearing (note that non-fullscreen D3D apps on Windows are effectively triple-buffered by default, since they go through the desktop compositor).
  5. You could also try filtering your time deltas a bit to keep them from getting too erratic when you don't make VSYNC. I've never tried this myself, but it's possible that having more consistent but smaller errors in your time delta is better than less frequent but larger errors.
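Point 5 could be as simple as a moving average over the last few frame times. This is only a sketch of the idea (the class name and window size are illustrative, and as noted above this approach is untested in practice):

```cpp
#include <cstddef>

// Smooth erratic frame times by averaging the last few samples, so that a
// single 33ms hiccup doesn't get fed directly into the simulation as the
// next frame's time delta.
class FrameTimeFilter
{
public:
    double Filter(double rawDeltaSeconds)
    {
        history[next] = rawDeltaSeconds;
        next = (next + 1) % kWindow;
        if (count < kWindow)
            ++count;
        double sum = 0.0;
        for (size_t i = 0; i < count; ++i)
            sum += history[i];
        return sum / count; // average over the samples seen so far
    }

private:
    static constexpr size_t kWindow = 4;
    double history[kWindow] = {};
    size_t next = 0;
    size_t count = 0;
};
```

The trade-off is exactly the one described above: the filtered delta is always slightly wrong, but its error is smaller and more consistent than feeding raw, oscillating frame times into the simulation.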

Hopefully someone else can chime in with more thoughts if they have experience with this. I haven't really done any specific research or experimentation with this issue outside of making games feel good when they ship, so don't consider me an authority on this issue. :)


