Possibility of 'frame drop' style triple buffering in DX3D

RichieSams
I have been doing some research on triple buffering. One article ([url="http://www.anandtech.com/show/2794"]http://www.anandtech.com/show/2794[/url]) describes the differences between 'true' triple buffering and render-ahead. However, it didn't answer all of my questions, and my searches on gamedev didn't really help either. So my questions are:[list=1]
[*]Graphics drivers have the option to buffer a certain number of frames before enforcing a sync during a Present() call. The user can usually change this with something like the nVidia control panel. Can this be set/forced by the application itself, or is it strictly controlled by the driver? Also, is this buffering in addition to any back-buffering done by DX?
[*]DX 'triple buffering' is just a render-ahead circular queue. Is there any way to implement manual back buffering in order to allow frame dropping? That is: one buffer is the front buffer currently being scanned out to the screen. While we wait for vsync, the renderer flip-flops between two backbuffers. Whenever the vertical retrace happens, it swaps the front buffer with the backbuffer that is not currently being rendered to (see the sketch after this list). It doesn't look like you can access DX's internal buffer pointers, and I think doing an actual memory copy of the buffers would be hugely inefficient, so my initial guess is that this cannot be done efficiently, or at all.
[/list]
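To make point 2 concrete, here's a rough sketch of the buffer rotation I have in mind. It's purely illustrative C++ with made-up names (Buffer, OnVBlank, etc.); as far as I can tell, DX doesn't actually expose anything like this:

[code]
// Illustrative only: frame-drop triple buffering with three app-owned buffers.
#include <array>
#include <utility>

struct Buffer { /* pixel storage would live here */ };

struct TripleBuffer {
    std::array<Buffer, 3> buffers;
    int front = 0;         // buffer currently being scanned out
    int pending = 1;       // most recently completed back buffer
    int drawing = 2;       // back buffer the renderer is writing to now
    bool hasNewFrame = false;

    // Renderer side: always draw into the buffer that is neither on screen
    // nor waiting to be flipped, then mark it as the newest completed frame.
    Buffer& BeginFrame() { return buffers[drawing]; }
    void EndFrame()      { std::swap(pending, drawing); hasNewFrame = true; }

    // Display side: at the vertical retrace, flip the front buffer with the
    // newest completed back buffer. Older frames are simply overwritten
    // (dropped) instead of being queued up.
    void OnVBlank()
    {
        if (hasNewFrame) { std::swap(front, pending); hasNewFrame = false; }
    }
};
[/code]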
My current render schema is as follows:

I have two logic pipelines processing frame n and n+1 in a partial overlap. They produce DX command lists to be consumed by the renderer. Whenever a pipeline finishes, it atomically swaps its command list buffer with the "Free command list". The pipelines are both task-set based, run on worker threads and keep each other running.

The main thread does the necessary winmessage handling like in any normal loop, then atomically swaps the pointers to the "In use command list" and the "Free command list". It plays back the command list, rendering to a single DX backbuffer, and finally tries to Present using the D3DPRESENT_DONOTWAIT flag. Then it loops back to the start.

Obviously some logic could be added to check whether the command buffer actually changed, so that no new render needs to happen if it didn't.

[img]http://i.imgur.com/eKTb1.png[/img]
With this model, the renderer only renders the latest game state, and while there is still a 'render ahead' queue, the queue only receives the most up-to-date renders.
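As a rough sketch of the pointer swap (std::atomic here; CommandList and the playback/Present steps are placeholders for my real code):

[code]
#include <atomic>

struct CommandList { /* recorded DX commands for one frame */ };

// In the real setup these would be initialized with allocated command lists.
std::atomic<CommandList*> g_freeList{nullptr};   // newest complete frame from the pipelines
CommandList*              g_inUseList = nullptr; // frame the renderer is playing back

// Called by a logic pipeline when it finishes recording a frame.
void PublishFrame(CommandList* finished)
{
    // The previous 'free' list goes back to the pipeline's pool for reuse;
    // if the renderer never consumed it, that frame is effectively dropped.
    CommandList* recycled = g_freeList.exchange(finished);
    // ... return 'recycled' to the pipeline's buffer pool ...
    (void)recycled;
}

// Main thread, once per loop iteration, after winmessage handling.
void RenderLatest()
{
    // Take the newest published frame and give back the one just played.
    g_inUseList = g_freeList.exchange(g_inUseList);
    // ... play back g_inUseList into the DX backbuffer ...
    // ... Present(..., D3DPRESENT_DONOTWAIT) ...
}
[/code]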

I would love to hear your opinions on the model. I'm quite new to the 3D rendering scene, so feel free to point out any errors.

I'm also curious whether DONOTWAIT is necessary. There have been some comments on the forums that the DONOTWAIT flag can be ignored by the driver, so it shouldn't be relied upon. With vsync on, the 'extra time' gained by using DONOTWAIT would be spent processing winmessages and the occasional new frame. The extra winmessage processing would be nice, and I wouldn't have to worry about missing the v-blank, since Present just pushes to a queue rather than actually swapping the buffers. (Correct me if I'm wrong.) However, I don't know if all the extra CPU cycles spent would actually show a visible difference. I guess the only true test would be to try it and see, but let me know what you think.
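For reference, this is roughly how I'd expect the Present call to look (a D3D9 swap chain is assumed; the wrapper function is just a placeholder for my main loop):

[code]
#include <d3d9.h>

// Rough sketch of one iteration of the present step with DONOTWAIT.
void TryPresent(IDirect3DSwapChain9* pSwapChain)
{
    // ... command list playback into the backbuffer happens before this ...

    HRESULT hr = pSwapChain->Present(nullptr, nullptr, nullptr, nullptr,
                                     D3DPRESENT_DONOTWAIT);
    if (hr == D3DERR_WASSTILLDRAWING)
    {
        // The previous present hasn't been consumed yet; instead of blocking,
        // spend the time on winmessages and try again next loop iteration.
    }
}
[/code]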

Thank you for reading and let me know if you need any more information from me.

mhagain
For 1, and under D3D11, you can use IDXGIDevice1::SetMaximumFrameLatency, using QueryInterface to obtain the IDXGIDevice1 interface (and remembering to Release it when done) - see the sketch below. No idea how well (or otherwise) it interoperates with forced settings via driver control panels, though.
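Something along these lines (the wrapper function and the latency value of 1 are just for illustration):

[code]
#include <d3d11.h>
#include <dxgi.h>

void CapFrameLatency(ID3D11Device* device)
{
    IDXGIDevice1* pDXGIDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         (void**)&pDXGIDevice)))
    {
        pDXGIDevice->SetMaximumFrameLatency(1);  // allow at most 1 queued frame
        pDXGIDevice->Release();
    }
}
[/code]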

For 2, have a read of http://tomsdxfaq.blogspot.com/2006_04_01_archive.html#114482869432550076#114482869432550076 - there's a useful discussion of the DO_NOT_WAIT flag in it.

In general I prefer to just blast everything to the screen every frame - Clear/Draw*/Present. Tricksy schemes that try to avoid rendering when you detect it's not needed just pile up code complexity and cause headaches later on.
