Emulate FAST triple buffering in Direct3D 9


There's no reason you can't have vsync and high FPS.

Can you explain your current swap-chain setup (present mode, buffer count, etc.), what you expect to happen, and what is actually happening?

Triple buffering doesn't improve framerates in general; it just smooths out a jittery framerate and absorbs occasional frame-time spikes.

Do you know your current GPU time per frame, and your CPU time per frame excluding Present?


Can you also explain what you mean by emulating triple buffering? Either your rendering occasionally halts to wait for the previous frame to be presented in full before reusing its buffer, or it doesn't wait because it writes to a different buffer; there's no middle ground.


If your GPU time is normally 1.66 ms and you turn on vsync with a 60 Hz display, your GPU frame time gets rounded up to 16.6 ms (about 10x more), so the CPU will start getting too far ahead of the GPU, and D3D will block the CPU inside Present for roughly 15 ms longer than it used to.

What you're describing could be normal/expected. 

Can you post some timings, your swap chain configuration, and what you expect to happen? 


I expect double-buffering behavior from vsync, where it just blocks. The code acts exactly as it's supposed to. That's why I want a workaround for the normal, broken-by-design behavior.

I want true triple-buffered vsync, where it doesn't block but copies to another buffer, so I retain high FPS. For example, Windows 10 via DWM borderless fullscreen gives triple-buffered vsync with high FPS, but it's still not as fast as true triple buffering would be in exclusive mode.
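For reference, the closest D3D9 itself gets is a sketch like the following (field values are assumptions, not taken from the original post). Even with two back buffers, D3D9's present queue is FIFO, so Present still blocks once the queue fills; it never drops stale frames the way the behavior described above would require:

```cpp
// Hypothetical D3D9 exclusive-mode swap-chain setup. BackBufferCount = 2
// plus the front buffer gives "triple buffering", but the flip queue is
// first-in-first-out: with D3DPRESENT_INTERVAL_ONE, Present() blocks when
// the queue is full instead of discarding the oldest queued frame.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed             = FALSE;                    // exclusive fullscreen
pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
pp.BackBufferCount      = 2;                        // two back buffers
pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // vsync on
```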

Edited by scragglypoo


The solution you're looking for is difficult to build. What you want is that at every VSync you decide which frame to scan out, based on which frame completed most recently. The way DWM accomplishes this is that every VSync, it wakes up, looks at what most recently completed, copies/composes it into another surface, and schedules that to be scanned out on the next VSync. This adds an extra copy and an extra frame of latency.

Trying to remove the extra frame of latency is possible if you wake up *before* the VSync instead of after, with enough time buffered to schedule the copy and have it complete right before the VSync. As it turns out, this is pretty difficult. Now that we've published some implementation details of Windows Mixed Reality via PresentMon, I can tell you that this is pretty much how it works, and it's very complicated.

Trying to remove the copy is also very difficult, because now not only do you need to decide what to flip based on what's completed, but now you need to decide what to render to based on when previous rendering completed, which means that you can't get any CPU/GPU parallelism or frame queueing. If you just render to the resources in-order, eventually rendering will block because you'll be trying to render to the on-screen surface. Using a copy here prevents this.
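To make the buffer rotation above concrete, here is a minimal sketch of the bookkeeping involved (hypothetical names, no real GPU work): three buffers rotate roles, the renderer never blocks, and each VSync flips straight to the newest completed frame.

```cpp
// Fast-sync style buffer rotation, bookkeeping only. One buffer is on
// screen, one holds the most recently completed frame, and one is being
// rendered into; frames completed between VSyncs are simply never shown.
struct TripleBuffer {
    int onScreen  = 0;   // buffer currently being scanned out
    int lastDone  = -1;  // most recently completed frame (-1: none yet)
    int rendering = 1;   // buffer the renderer is writing into

    // Renderer finished a frame: publish it, then pick the one buffer
    // that is neither on screen nor the newest completed frame.
    // Never blocks, so the renderer keeps running at full speed.
    void frameCompleted() {
        lastDone = rendering;
        for (int i = 0; i < 3; ++i)
            if (i != onScreen && i != lastDone) { rendering = i; break; }
    }

    // At each VSync: flip straight to the newest completed frame.
    void vsync() {
        if (lastDone != -1) onScreen = lastDone;
    }
};
```

Note that `frameCompleted` has to know which buffer is on screen at that instant, which is exactly the cross-thread coordination (and the reason for the copy in DWM's approach) that makes this hard to do in user code.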

Note that I think NVIDIA does have an implementation of this, called Fast Sync, that they've implemented in hardware and their driver. I don't really have any technical details on how they made it work, but I have to imagine it's pretty complicated as well.

