

Possibility of 'frame drop' style triple buffering in D3D



#1 RichieSams   Members   -  Reputation: 105


Posted 14 December 2012 - 12:31 AM

I have been doing some research on triple buffering. One article (http://www.anandtech.com/show/2794) described the differences between 'true' triple buffering and render ahead. However, it didn't quite answer all my questions, and my searches on gamedev didn't really help either. So my questions are:
  • Graphics drivers have the option to buffer a certain number of frames before enforcing a sync during a Present() call. This can usually be changed by the user with tools such as the NVIDIA control panel. Can this be set/forced by the application itself, or is it strictly done by the driver? Also, is this buffering in addition to any back-buffering done by DX?
  • DX 'triple buffering' is just a circular-queue render ahead. Is there any way to implement manual back buffering in order to allow frame dropping? That is: one buffer is the front buffer currently being blitted to the screen. While we wait for vsync, the renderer flip-flops between two backbuffers. Whenever the vertical retrace happens, it swaps the front buffer with the backbuffer currently not being rendered to (see the sketch just below this list). It doesn't look like you can access the internal pointers of DX, and I think doing an actual memory copy of the buffers would be hugely inefficient, so my initial guess is that this cannot be done efficiently, or at all.
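To be clear about what I mean, here is a purely conceptual sketch of the 'frame drop' scheme, using plain buffer indices rather than real D3D surfaces (which, as noted, don't seem to be accessible this way), with thread synchronization omitted:

```cpp
#include <utility>

// Hypothetical placeholder for whatever fills a buffer with the newest game state.
void RenderFrameInto(int bufferIndex) { /* ... */ }

int front   = 0;  // buffer currently being scanned out to the display
int pending = 1;  // most recently completed frame, waiting for the vblank
int drawing = 2;  // buffer the renderer is filling right now

// Renderer: runs as fast as it can, always into the buffer that is neither on
// screen nor pending. An older pending frame is silently dropped, never shown.
void RenderLoop()
{
    for (;;)
    {
        RenderFrameInto(drawing);
        std::swap(pending, drawing);   // newest finished frame becomes 'pending'
    }
}

// Called at every vertical retrace: flip to the newest complete frame.
void OnVBlank()
{
    std::swap(front, pending);
}
```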
My current render schema is as follows:

I have two logic pipelines processing frame n and n+1 in a partial overlap. They produce DX command lists to be consumed by the renderer. Whenever a pipeline finishes, it atomically swaps its command list buffer with the "Free command list". The pipelines are both task-set based, run on worker threads and keep each other running.

The main thread does the necessary window-message handling, as in any normal loop, then atomically swaps the pointer to the "In use command list" with the "Free command list". It plays back the command list, rendering to a single DX backbuffer, and then finally tries to Present using the D3DPRESENT_DONOTWAIT flag. Then it loops back to the start and starts over.

Obviously, some logic could be added to check whether the command buffer actually changed and skip the render if it hasn't (sketched below).
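Roughly, the hand-off I have in mind looks like this (reduced to a single producer for brevity; CommandList and the buffer count are hypothetical stand-ins for whatever the pipelines actually record):

```cpp
#include <atomic>

struct CommandList { /* recorded draw/state commands */ };

CommandList storage[3];
std::atomic<CommandList*> freeList { &storage[0] };   // most recent completed frame
std::atomic<bool>         dirty    { false };         // did the free list change?

// Logic pipeline (worker thread): publish a finished frame, reclaim the old one.
CommandList* PublishFrame(CommandList* justFinished)
{
    CommandList* reclaimed = freeList.exchange(justFinished);
    dirty.store(true, std::memory_order_release);
    return reclaimed;                                  // record the next frame into this
}

// Main thread: only swap (and re-render) if something new was published.
CommandList* AcquireLatest(CommandList* currentlyInUse)
{
    if (!dirty.exchange(false, std::memory_order_acquire))
        return nullptr;                                // nothing new -- skip the render
    return freeList.exchange(currentlyInUse);
}
```

The pipeline would start out owning storage[1] and the main thread storage[2]; each call hands over the newest list and reclaims whichever buffer was sitting in the free slot before.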

With this model, the renderer is only rendering the latest game state, and while there is still a 'render ahead' queue, the queue only receives the most up-to-date renders.

I would love to hear your opinions on the model. I'm quite new to the 3D rendering scene, so feel free to point out any errors.

I'm also curious as to whether the DONOTWAIT flag is even necessary. There have been some comments on the forums that DONOTWAIT can be ignored by the driver, so it shouldn't be relied on. With vsync on, the 'extra time' gained by using DONOTWAIT would be spent processing window messages and the occasional new frame. The extra message processing would be nice, and I wouldn't have to worry about missing the v-blank, since Present is just pushing to a queue, not actually swapping the buffers. (Correct me if I'm wrong.) However, I don't know whether all the extra CPU cycles spent would actually show a visible difference. I guess the only true test is to try it and see, but let me know what you guys think.
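For reference, the Present step of the loop would look roughly like this; I'm assuming a D3D9Ex device pointer named device and the usual Win32 message pump:

```cpp
// Try to queue the frame without blocking; if the driver honours DONOTWAIT and
// the present queue is full, keep pumping window messages and retry.
HRESULT hr = device->PresentEx(nullptr, nullptr, nullptr, nullptr,
                               D3DPRESENT_DONOTWAIT);
while (hr == D3DERR_WASSTILLDRAWING)
{
    MSG msg;
    while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    hr = device->PresentEx(nullptr, nullptr, nullptr, nullptr,
                           D3DPRESENT_DONOTWAIT);
}
```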

Thank you for reading and let me know if you need any more information from me.


#2 mhagain   Crossbones+   -  Reputation: 8005


Posted 14 December 2012 - 04:47 AM

For 1: under D3D11 you can use IDXGIDevice1::SetMaximumFrameLatency, using QueryInterface to obtain the IDXGIDevice1 interface, like so: "if (SUCCEEDED (device->QueryInterface (__uuidof (IDXGIDevice1), (void **) &pDXGIDevice)))" (and remembering to Release it when done). No idea how well (or otherwise) it interoperates with forced settings via driver control panels, though.
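Spelled out a little more fully, and assuming a D3D11 device pointer named device (the latency value of 1 is just an example; error handling trimmed):

```cpp
IDXGIDevice1 *pDXGIDevice = nullptr;

if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1), (void **) &pDXGIDevice)))
{
    // Cap the render-ahead queue: the driver may buffer at most this many
    // frames before Present blocks.
    pDXGIDevice->SetMaximumFrameLatency(1);
    pDXGIDevice->Release();
}
```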

For 2, have a read of http://tomsdxfaq.blogspot.com/2006_04_01_archive.html#114482869432550076#114482869432550076 - there's a useful discussion of the DO_NOT_WAIT flag in it.

In general I prefer to just blast everything to the screen every frame - Clear/Draw*/Present. Tricksy schemes that try to skip rendering when you detect it isn't needed just pile up code complexity and cause headaches later on.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.




