DX11 Engine design questions...

Hey guys,

Porting my old code base over to SlimDX (using DX11 exclusively) has been going quite well, but I'm not liking the way some of my old, naive code looks. So much is radically different going from DX9/XNA to DX11 that it often makes more sense to throw the old junk away and write completely new code. That has left me with a few questions I'd like some peer feedback on.

For context: with this engine I'm trying my best to abstract away a lot of the pain involved in writing straight DX applications, but not so much that those who WANT lots of nit-picky features can't get to them (not an easy feat, and sometimes not possible).

1) Graphics Settings :

Any engine worth its salt has a fairly intuitive way of changing graphics settings on the fly, often via configuration files, which lets programmers build nice little graphics settings/options menus for their games. I'm brand new to the DX11/SlimDX API, so I'm probably missing a lot of stuff and just not seeing it; I'm from the days of "PresentationParameters" and all that jazz. :) So I'm wondering, to begin with: what range of graphics settings/options is vital for a SlimDX/DX11-based engine? What do I need to look up and learn about to make an effective graphics settings system? Are there particular classes in the API that can help me not reinvent the wheel (as I often do when I'm ignorant of an API's features)? And which things are rarely used and not very important to include?

One other question that's been bugging me is about the ModeDescription in a SwapChainDescription, specifically the refresh rate. Obviously, when windowed, your application can use any resolution that fits within the display bounds and matches the window size. But what about the refresh rate? Do we have to match the display mode's refresh rate exactly in windowed mode, or is that only a requirement in full-screen? I remember the DX11 docs saying it's vital to get the refresh rate right in full-screen. For reference, the sketch below shows roughly how I'm filling these structures in at the moment.
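Here's roughly what I have now (SlimDX types; a sketch rather than verified code, the 60/1 refresh rate is just a guess on my part, and "form" is my render window):

using SlimDX.DXGI;

// Back-buffer mode: 1280x720 at 60 Hz (expressed as the rational 60/1), 32-bit color.
var mode = new ModeDescription(1280, 720, new Rational(60, 1), Format.R8G8B8A8_UNorm);

var swapDesc = new SwapChainDescription
{
    ModeDescription = mode,
    BufferCount = 1,
    Usage = Usage.RenderTargetOutput,
    OutputHandle = form.Handle,                      // the window we present into
    SampleDescription = new SampleDescription(1, 0), // no MSAA
    IsWindowed = true,
    SwapEffect = SwapEffect.Discard,
};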

Also, are there any examples of the proper way to switch graphics (Device/SwapChain) settings at runtime? I'm not really sure the way I switch from windowed to full-screen is a "good" way of doing things lol. The limited samples and incomplete docs have left me guessing on a lot of things. :P

2) "Main Loop" and timing:

I've already implemented a new internal clock/timer class which is very similar to the one buried in the XNA framework, although it's lighter and cleaner imho. I've tested it, and it's as accurate as it gets on my system. In XNA, the "Game" class is fixed-timestep by default, with a target update interval of 1/60th of a second (roughly 16.67 ms). The question is: what's the best way to ensure you stay within a reasonable tolerance of your update interval? What if the loop completes super fast and only a tiny bit of time has elapsed between updates? Do you intentionally wait a bit and try to adjust the speed? Or should you just rock out the Update(...) calls as fast as they can go?

I'm also planning to add the ability to run parallel update and rendering loops on separate threads, each keeping its own time and handling its own problems. So I'm wondering if anyone has some advice for me on this front as well.

Wow, big post. Any input/tips/advice will be greatly appreciated. I'm just trying to figure out where I'm going with my ideas and what I should be aware of before I make big, time-consuming mistakes!

Quote:
Original post by keinmann
What things do I need to look up and learn about to make an effective graphics settings system?
I'm not particularly familiar with the API, but DX11 has "feature levels" that describe what the graphics card is capable of. If you're using new features added in DX10/11 but want fallbacks for older cards, make sure you check the card's feature level.
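In SlimDX I believe it's something along these lines (a sketch; I haven't verified the exact names):

using SlimDX.Direct3D11;

// Ask the runtime what the default adapter actually supports.
FeatureLevel level = Device.GetSupportedFeatureLevel();

if (level < FeatureLevel.Level_10_0)
{
    // DX9-class hardware: take the fallback path and skip DX10/11-only effects.
}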
Quote:
Question is, what is the best way to ensure you stay within a reasonable tolerance of your update interval? What if the loop completes super fast and you have only a tiny bit of elapsed time between updates? Do you intentionally wait a bit and try to adjust the speed? Or should you just rock out the Update(...) calls as fast as they can go?
A lot of games use a fixed timestep, where Update is only called once 16.67 ms have elapsed. If 33.3 ms have elapsed, you call Update twice, etc...
If less than 16.67 ms have elapsed, you hand your time-slice back to the OS. On Windows there are functions like YieldProcessor, SwitchToThread, Sleep, etc. for this.
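In C# the skeleton might look like this (Thread.Sleep(0) standing in for the Win32 calls; "running" and Update() are placeholders):

using System.Diagnostics;
using System.Threading;

const double StepMs = 1000.0 / 60.0;  // ~16.67 ms per fixed step
var timer = Stopwatch.StartNew();
double next = StepMs;

while (running)
{
    if (timer.Elapsed.TotalMilliseconds >= next)
    {
        next += StepMs;  // schedule the following step; loops again if we're behind
        Update();        // fixed-step game logic
    }
    else
    {
        Thread.Sleep(0); // ahead of schedule: yield the rest of our time-slice
    }
}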
Quote:
I'm also planning to add the ability to have parallel update and rendering loops on separate threads; each keeping their own time and handling their own problems. So I'm wondering if anyone has some advice for me on this front as well.
If you go down this path, I'd keep them separated and communicate via double-buffered state. To do this, you have two 'communication' structures. At any one time, the Update thread is writing to one, and the Render thread is reading from the other. At the end of each frame, the threads synch up and swap structures with each other.
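Sketched out, with .NET 4's Barrier doing the frame sync (FrameState, running, and the fill/draw calls are made-up names):

using System.Threading;

// Two snapshots of everything the renderer consumes from the simulation.
FrameState[] states = { new FrameState(), new FrameState() };
int write = 0; // update thread owns states[write]; render thread owns the other

// The post-phase action runs exactly once per frame, while both threads are parked.
var frameBarrier = new Barrier(2, b => write = 1 - write);

void UpdateThread()
{
    while (running)
    {
        FillFrameState(states[write]); // simulate and publish this frame's results
        frameBarrier.SignalAndWait();  // wait for the renderer, then swap
    }
}

void RenderThread()
{
    while (running)
    {
        Draw(states[1 - write]);       // render the previous frame's snapshot
        frameBarrier.SignalAndWait();
    }
}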

However, this isn't the most scalable approach to threading -- it only really targets a dual-core processor. Quad-core is standard on PCs now, and consoles have had ~6 hardware threads for years.
Another approach is the task-pool model, where you make one worker thread per core (or per hardware thread), and then break your update and render code into lots of isolated tasks that communicate via asynchronous message passing.
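A bare-bones version of the frame-barrier part of that idea, using the built-in .NET thread pool (the job list is a stand-in; a real task scheduler would also handle dependencies and per-thread queues):

using System;
using System.Collections.Generic;
using System.Threading;

// Fire off this frame's jobs, then wait for all of them at the frame boundary.
static void RunFrameJobs(List<Action> jobs)
{
    using (var pending = new CountdownEvent(jobs.Count))
    {
        foreach (Action job in jobs)
        {
            Action j = job; // copy: avoid capturing the loop variable (pre-C# 5)
            ThreadPool.QueueUserWorkItem(_ => { j(); pending.Signal(); });
        }
        pending.Wait(); // every task finished; safe to start the next frame
    }
}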

Quote:
Original post by keinmann
Question is, what is the best way to ensure you stay within a reasonable tolerance of your update interval? What if the loop completes super fast and you have only a tiny bit of elapsed time between updates? Do you intentionally wait a bit and try to adjust the speed? Or should you just rock out the Update(...) calls as fast as they can go?


A common way is to use an accumulator.

float accumulator;
const float TimeStep = 0.2f; // 0.2 s for illustration; a 60 Hz game would use 1f / 60f

public void Frame(float dt)
{
    accumulator += dt;               // bank the real time that has passed
    while (accumulator >= TimeStep)  // run as many fixed steps as we owe
    {
        accumulator -= TimeStep;
        Update(TimeStep);            // fixed-timestep update
    }
}
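One caveat: clamp dt (or cap the number of loop iterations) so a long hitch can't queue up a huge burst of updates, and consider interpolating the rendered state between the last two fixed steps so motion stays smooth.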

To add to the last post, here's a full article on how to manage fixed timesteps well. It's pretty good:

http://gafferongames.com/game-physics/fix-your-timestep/

Thanks for the answers, everyone. And I like the article, Dranith. In the past I've just used a variable timestep and let it run as fast as possible, but now I know exactly how to implement a reliable fixed-timestep system. I think that will be important given the accuracy I demand of the physics, especially the aerodynamics.
