RichieSams posted a topic in Graphics and GPU Programming

I have been doing some research on triple buffering. One article ([url="http://www.anandtech.com/show/2794"]http://www.anandtech.com/show/2794[/url]) described the differences between 'true' triple buffering and render-ahead. However, it didn't quite answer all my questions, and my searches on gamedev didn't really help either. So my questions are:
[list=1]
[*]Graphics drivers have the option to buffer a certain number of frames before enforcing a sync during a Present() call. This can (usually) be changed by the user with tools such as the nVidia control panel. Can it be set/forced by the application itself, or is it strictly done by the driver? Also, is this buffering in addition to any back-buffering done by DX?
[*]DX 'triple buffering' is just a circular render-ahead queue. Is there any way to implement manual back buffering in order to allow frame dropping? That is: one buffer is the front buffer currently being blitted to the screen. While we wait for vsync, the renderer flip-flops between two back buffers. Whenever the vertical retrace happens, it swaps the front buffer with the back buffer currently not being rendered to. It doesn't look like you can access the internal pointers of DX, and I think doing an actual memory copy of the buffers would be hugely inefficient, so my initial guess is that this cannot be done efficiently, or at all.
[/list]
My current render scheme is as follows: I have two logic pipelines processing frames n and n+1 in partial overlap. They produce DX command lists to be consumed by the renderer. Whenever a pipeline finishes, it atomically swaps its command list buffer with the "Free command list". The pipelines are both task-set based, run on worker threads, and keep each other running. The main thread does the necessary winmessage handling like in any normal loop, then atomically swaps the pointer to the "In use command list" and the "Free command list".
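For reference, here is a minimal sketch of the lock-free buffer exchange I mean, using std::atomic::exchange. The names (CommandList, g_freeList, pipelinePublish, rendererAcquire) are my own placeholders, not the engine's actual types:

```cpp
#include <atomic>
#include <vector>

// Hypothetical stand-in for a recorded DX command list.
struct CommandList {
    std::vector<int> commands;  // placeholder payload
    unsigned frameId = 0;
};

// Shared "Free command list" slot: always holds the most recently
// completed list that the renderer has not yet picked up.
std::atomic<CommandList*> g_freeList{nullptr};

// Called by a pipeline when it finishes recording `mine`.
// Returns the buffer it should record into next (the older,
// now-stale list that was sitting in the free slot).
CommandList* pipelinePublish(CommandList* mine) {
    return g_freeList.exchange(mine, std::memory_order_acq_rel);
}

// Called by the main thread before playback: swap the in-use list
// with the free list. If nothing new was published, we may get back
// the frame we just rendered (caller can detect that via frameId).
CommandList* rendererAcquire(CommandList* inUse) {
    return g_freeList.exchange(inUse, std::memory_order_acq_rel);
}
```

Because both sides use a single atomic exchange on the same slot, no lock is needed and the renderer always sees either the newest completed list or the one it already has.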
It plays back the command list, rendering to a single DX back buffer, and then tries to Present using the D3DPRESENT_DONOTWAIT flag. Then it loops back to the start and starts over. Obviously, some logic could be added to check whether the command buffer actually changed, in which case no new render needs to happen.

[img]http://i.imgur.com/eKTb1.png[/img]

With this model, the renderer only renders the latest game state, and while there is still a render-ahead queue, the queue only receives the most up-to-date renders. I would love to hear your opinions of the model. I'm quite new to the 3D rendering scene, so feel free to point out any errors.

I'm also curious whether the DONOTWAIT is necessary. There have been comments on the forums that the DONOTWAIT flag can be ignored by the driver, so it shouldn't be relied on. With vsync on, the 'extra time' gained by using DONOTWAIT would be spent processing winmessages and the occasional new frame. The extra winmessage processing would be nice, and I wouldn't have to worry about missing the v-blank, since Present just pushes to a queue rather than actually swapping the buffers. (Correct me if I'm wrong.) However, I don't know if all the extra CPU cycles spent would actually show a visible difference. I guess the only true test would be to try it and see, but let me know what you guys think.

Thank you for reading, and let me know if you need any more information from me.
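For what it's worth, the 'true' triple buffering from question 2 can be modeled purely as index bookkeeping, independent of whether DX exposes the buffer pointers. This is only a sketch of the scheme (all names are hypothetical, and it's single-threaded), not something D3D9 actually lets you do:

```cpp
// Indices into three hypothetical buffers. The renderer flip-flops
// between the two buffers that are not on screen; on vertical
// retrace, the most recently completed back buffer becomes the
// front buffer.
struct TripleBuffer {
    int front = 0;     // buffer currently scanned out
    int drawing = 1;   // buffer the renderer is filling now
    int ready = -1;    // last completed buffer, -1 if none yet

    // Renderer finished a frame: mark it ready and start drawing
    // into the remaining buffer, overwriting any stale 'ready'
    // frame -- this is the frame drop that render-ahead never does.
    void finishFrame() {
        int next = 3 - front - drawing;  // index of the third buffer
        ready = drawing;
        drawing = next;
    }

    // Vertical retrace: present the latest completed frame, if any.
    void vblank() {
        if (ready >= 0) {
            front = ready;   // swap front with the newest ready buffer
            ready = -1;
        }
    }
};
```

If the renderer calls finishFrame() twice between vblanks, the first frame is silently overwritten, which is exactly the latest-state behavior the render-ahead queue can't give you.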