# [DX11] Vsync causes lag (little jumps from time to time)


## Recommended Posts

Hi,

I've always had this issue, but I kept putting it off in favor of learning/working on other things.
The thing is, if I enable vsync (Present(1, 0)), I notice some jumps; it's especially noticeable with camera movement. This doesn't happen if I let the app run its frames as fast as it wants.
It's also independent of draw calls or anything like that (it happens even when drawing a single textured quad, and with the same intensity when I draw a 10 MB model).
It happens in debug (D3D11_CREATE_DEVICE_DEBUG) and release, x32 or x64.

So today I decided to investigate, since I'm logging all frames and their delta times. I'm not really sure I found anything, but I did notice that from time to time I get a frame running at 0.03xx, which is a 30 fps value, while my vsynced app normally runs at 0.016 per frame. It's just one or two frames, so I'm not sure those are really the ones causing the bumps; would a single dropped frame be so noticeable?

Here are some of my frame delta times.
Running normally:

 64: 0.016220 65: 0.016843 66: 0.016673 67: 0.016786 68: 0.016673 69: 0.016692 70: 0.016741 71: 0.016875 72: 0.016600 73: 0.016785 74: 0.016709 75: 0.016894 76: 0.016762 77: 0.016567 78: 0.016564 79: 0.017078 80: 0.016464 81: 0.016792 82: 0.016649 83: 0.016995 84: 0.016552 85: 0.016650 86: 0.016693 87: 0.016982

Some lag detected:
 815: 0.016958 816: 0.017235 817: 0.016220 818: 0.016716 819: 0.016682 820: 0.017191 821: 0.031545 822: 0.001276 823: 0.016970 824: 0.016868 825: 0.016427 ... 1092: 0.030895 1093: 0.000968 1094: 0.015951 1095: 0.016976 1096: 0.017013 1097: 0.017220 1098: 0.016718 1099: 0.016997 ... 1103: 0.016959 1104: 0.017129 1105: 0.016124 1106: 0.016790 1107: 0.016914 1108: 0.023826 1109: 0.027350 1110: 0.016475 1111: 0.016714 1112: 0.016725 1113: 0.016943 1114: 0.016587 1115: 0.016668 ... 1626: 0.016876 1627: 0.016475 WM WM 1628: 0.016998 1629: 0.016402 WM WM 1630: 0.031143 1631: 0.000997 1632: 0.015713 WM WM 1633: 0.017343 1634: 0.016669 ... WM WM 1728: 0.016917 1729: 0.016425 WM WM 1730: 0.037844 1731: 0.000678 1732: 0.011250 WM WM 1733: 0.016637 1734: 0.016494 WM WM 1735: 0.017359 WM WM 1736: 0.016343 1737: 0.017384 

"WM" marks frames where Windows messages were processed.
What the hell can be causing this? It's weird: when I searched for something like this (bumps), people had solved it by turning vsync ON, not by vsync causing it...
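Here is a simplified sketch of the kind of timing loop I mean (placeholder names, not my actual code):

```cpp
#include <d3d11.h>
#include <dxgi.h>
#include <windows.h>
#include <cstdio>

// Created elsewhere during device/swap chain setup; placeholder name.
extern IDXGISwapChain* g_swapChain;

void RunFrameLoop()
{
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (unsigned frame = 0; ; ++frame) // loop until the app quits
    {
        // ... update and draw ...

        // Present with vsync: block until the next vertical blank.
        g_swapChain->Present(1, 0);

        QueryPerformanceCounter(&now);
        const double delta = double(now.QuadPart - prev.QuadPart) / double(freq.QuadPart);
        prev = now;

        // On a 60 Hz display this hovers around 0.016; the occasional ~0.03x
        // value is the suspicious "30 fps" frame described above.
        printf("%u: %f\n", frame, delta);
    }
}
```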

##### Share on other sites
> It's just one or two frames, so I'm not sure those are really the ones causing the bumps; would a single dropped frame be so noticeable?

Dropping from 60 FPS to 30 FPS, even for a single render frame, will cause a noticeable 'jump' if an object on the screen (or the camera) is translated using a time delta.

> What the hell can be causing this? It's weird: when I searched for something like this (bumps), people had solved it by turning vsync ON, not by vsync causing it...

Glitches/bumps/jumps shouldn't need vsync to be resolved unless the transformations are being updated on a per-frame basis rather than by time (you should almost always be using time for this). Vsync exists to reduce screen tearing, which looks like a horizontal rip across the screen.

> I did notice that from time to time I get a frame running at 0.03xx, which is a 30 fps value

This is most likely because your monitor refreshes at ~60 Hz, which means that with vsync enabled, Present() blocks until the monitor is in the vertical blank, throttling your game to 60 FPS. It drops to 30 FPS when one of your updates takes longer than one refresh interval (~16.7 ms) and misses a vblank, so the next frame can't be shown until the vblank after that.
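To make the point concrete, here is a small sketch (the speed values are made up) contrasting a fixed per-frame step with a time-based one:

```cpp
// Frame-rate-dependent update: moves a fixed amount per frame. During a
// 0.033 s frame the object still moves only 'step' units, so on-screen
// motion momentarily runs at half speed, which makes the spike much worse.
void UpdateFixedStep(float& positionX)
{
    const float step = 0.3f; // units per frame (arbitrary example value)
    positionX += step;
}

// Time-based update: moves at a constant speed in units per second. A longer
// frame produces a proportionally larger step, so the object ends up where it
// should be; the only artifact left is the skipped refresh itself.
void UpdateTimeBased(float& positionX, float deltaSeconds)
{
    const float speed = 18.0f; // units per second (arbitrary example value)
    positionX += speed * deltaSeconds;
}
```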

##### Share on other sites
[color="#1C2837"] [color="#1C2837"]if an object on the screen (or camera) is translating using a time delta.[/quote]
witch is true

[color="#1C2837"] if one of the updates takes longer than 60Hz[/quote]
when I turn vsync off, I dont get frames above 0.01, every frame is like 0.000615...Wich means(I believe) the slow down is related to the vsync causing it.

[color="#1C2837"]thus the next update won't get around until two vBlanks.[/quote]but in that case, every time a frame gets more than 0.016, i would get 0.03...? I mean, I never would get frames with delta between [0.016 .. 0.03]... Witch seems very unrealistic..what am I missing on this?

I tottaly understand vsync, (at least I though I did), but I saw present as a function that will block till certain vblanks number, in the case of 1, it waits for one, if its not before this one, it will present, not wait to the next...
The sdk says: "1,2,3,4 - Synchronize presentation after the n'th vertical blank." See, are you sure that if it gets more than 60hz it will wait for another? Cause if previous blank didnt presented anything, no tearing would occur anyway, so why wait for the next...

-_o my brain got a little blurried now
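For reference, the SyncInterval argument I'm talking about (a small illustration with a placeholder swap chain pointer; the comments just restate the documented meanings):

```cpp
#include <dxgi.h>

// Placeholder; created elsewhere.
extern IDXGISwapChain* g_swapChain;

void PresentExamples()
{
    // SyncInterval = 0: present immediately, without waiting for a vertical blank
    // (tearing is possible; this is the "let it fly" case with tiny deltas).
    g_swapChain->Present(0, 0);

    // SyncInterval = 1: synchronize presentation with a vertical blank
    // (the Present(1, 0) case this whole thread is about).
    g_swapChain->Present(1, 0);
}
```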

##### Share on other sites
Is this in fullscreen or windowed mode?

##### Share on other sites

> Is this in fullscreen or windowed mode?

Windowed mode.
I just tested in fullscreen: same thing.

##### Share on other sites

> So today I decided to investigate, since I'm logging all frames and their delta times. I'm not really sure I found anything, but I did notice that from time to time I get a frame running at 0.03xx, which is a 30 fps value, while my vsynced app normally runs at 0.016 per frame. It's just one or two frames, so I'm not sure those are really the ones causing the bumps; would a single dropped frame be so noticeable?
>
> <...snip...>
>
> What the hell can be causing this? It's weird: when I searched for something like this (bumps), people had solved it by turning vsync ON, not by vsync causing it...

Yes, it will most likely be noticeable. If you have constant values in your update loops, it will be significantly worse, as your objects will move/update at half speed for a frame. So if every frame you do something like object.position += 0.3, then when one of these spikes occurs you're effectively moving half as fast.

You might want to try enabling triple buffering in combination with vsync. If you enable vsync with a single back buffer in your swap chain, small hiccups like this are going to happen. If you add a second back buffer to the chain, you can completely remove those hiccups with very little effort on your part (see the sketch below).

Anandtech has a really good breakdown of why these hiccups happen and how triple buffering addresses the problem:
http://www.anandtech.com/show/2794/3
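Something along these lines at swap chain creation time (just a sketch; every value other than BufferCount is an arbitrary example, not taken from your code):

```cpp
#include <windows.h>
#include <dxgi.h>

// Illustrative swap chain description with an extra back buffer.
// hWnd, width and height are assumed to come from your window setup.
DXGI_SWAP_CHAIN_DESC MakeTripleBufferedDesc(HWND hWnd, UINT width, UINT height)
{
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount                        = 3;  // one more buffer than the usual 2
    desc.BufferDesc.Width                   = width;
    desc.BufferDesc.Height                  = height;
    desc.BufferDesc.Format                  = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferDesc.RefreshRate.Numerator   = 60;
    desc.BufferDesc.RefreshRate.Denominator = 1;
    desc.BufferUsage                        = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow                       = hWnd;
    desc.SampleDesc.Count                   = 1;
    desc.Windowed                           = TRUE;
    desc.SwapEffect                         = DXGI_SWAP_EFFECT_DISCARD;
    return desc;
}
```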

##### Share on other sites

> You might want to try enabling triple buffering in combination with vsync. <...snip...>

I've been reading about triple buffering, and I'm kind of like "wtf? Triple buffering is just magically enabled by the driver?" I mean, as a programmer I thought triple buffering (which I was saving for later) was something I, as a graphics programmer, would have to manage myself: setting the swap chain to have two back buffers and then choosing (in my algorithm) when to render to the first and when to render to the second back buffer, probably involving multithreading. Damn, how can triple buffering be that automatic? (That's what I'm guessing from a quick read of the articles.)

I mean, in D3D you're the one who says which buffer is your render target; how can that be changed from outside?

##### Share on other sites
Off topic: in Frank Luna's DX10 book:

> BufferCount: The number of back buffers to use in the swap chain; we usually only use one back buffer for double buffering, although you could use two for triple buffering.

In the DX SDK:

> A value that describes the number of buffers in the swap chain, including the front buffer.

I always put 2 in this field, meaning front and back only... so passing 1 should mean an error, but it works. It makes me wonder whether the SDK description is wrong.

##### Share on other sites

> I've been reading about triple buffering, and I'm kind of like "wtf? Triple buffering is just magically enabled by the driver?" <...snip...> I mean, in D3D you're the one who says which buffer is your render target; how can that be changed from outside?

Usually that is only for the OpenGL driver settings, which are a little looser than DirectX about what you need to set up yourself. Even in DirectX, though, the driver is free to do all kinds of things with your rendering options under the hood (and it does). My video card has options to force a bunch of bells and whistles (such as anisotropic texture filtering) even in games that were made before those features existed.

> Off topic: in Frank Luna's DX10 book: "BufferCount: The number of back buffers to use in the swap chain; we usually only use one back buffer for double buffering, although you could use two for triple buffering." In the DX SDK: "A value that describes the number of buffers in the swap chain, including the front buffer." I always put 2 in this field, meaning front and back only... so passing 1 should mean an error, but it works. It makes me wonder whether the SDK description is wrong.

According to the SDK remarks: "In full-screen mode, there is a dedicated front buffer; in windowed mode, the desktop is the front buffer."
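To illustrate the two readings (a sketch built from the SDK remark above, not from anything verified against your setup):

```cpp
#include <dxgi.h>

// Two ways of filling in BufferCount for a blt-model (DISCARD) swap chain,
// following the SDK remark quoted above.
void FillBufferCount(DXGI_SWAP_CHAIN_DESC& desc, bool extraBackBuffer)
{
    if (!extraBackBuffer)
    {
        // In windowed mode the desktop acts as the front buffer, so a single
        // swap-chain buffer already behaves like classic double buffering.
        desc.BufferCount = 1;
    }
    else
    {
        // Two swap-chain buffers: in windowed mode that is one extra back
        // buffer on top of the desktop front buffer; in full-screen mode it
        // is a dedicated front buffer plus one back buffer.
        desc.BufferCount = 2;
    }
    desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
}
```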

##### Share on other sites

> I've been reading about triple buffering, and I'm kind of like "wtf? Triple buffering is just magically enabled by the driver?" <...snip...> I mean, in D3D you're the one who says which buffer is your render target; how can that be changed from outside?

Triple buffering is fairly simple for a driver to do; you can also force it in D3D using DXTweaker.

To use triple buffering in D3D you only need to set up a three-buffer swap chain with swapChainDesc.BufferCount = 3; after that you can forget about it. You don't need to manage extra render targets for this (see the sketch below).

When you swap the buffers, the driver shouldn't have to copy the contents of the back buffer to the front buffer; it should simply switch them, so the back buffer becomes the front buffer and rendering continues in the old front buffer. (If you run in windowed mode there does have to be a copy, though.)
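That's also why your own code doesn't change: you keep asking the swap chain for buffer 0, and which physical buffer that corresponds to is rotated for you when you Present. A minimal sketch (error handling omitted, names are placeholders):

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Create a render target view for the swap chain's back buffer. Regardless
// of BufferCount, the application only ever asks for buffer 0 here; the
// runtime/driver handles which buffer is actually displayed on each Present.
ID3D11RenderTargetView* CreateBackBufferRTV(ID3D11Device* device,
                                            IDXGISwapChain* swapChain)
{
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                         reinterpret_cast<void**>(&backBuffer));

    ID3D11RenderTargetView* rtv = nullptr;
    device->CreateRenderTargetView(backBuffer, nullptr, &rtv);
    backBuffer->Release();
    return rtv;
}
```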

Basically, with double buffering you have:

Buffer1 (front)
Buffer2 (back)

You render your first frame to Buffer2, and when you call Present it will stall until rendering is complete and then wait for a vblank. When the vblank hits, the buffers are switched: Buffer2 becomes the front buffer and is displayed, while Buffer1 becomes the new back buffer and your rendering continues there.

With triple buffering you instead have:

Buffer1 (front)
Buffer2 (back)
Buffer3 (back)

There are basically two ways these three buffers can be used:

1) A swap chain where rendering is done to the three buffers in a fixed order, and swaps stall if both current back buffers contain undisplayed frames. (Lower GPU usage but higher perceived input latency: since you're displaying the frame you rendered two swaps ago instead of one, it takes ~33.3 ms for the player to see the result of their actions instead of ~16.6 ms. On the other hand, you shouldn't get the spikes you can get with double buffering, since a frame should always be ready, and dropping below 60 fps won't automatically snap you to 30 fps; you can sit at a stable 55 fps instead.)
2) Instant swaps between the two back buffers, overwriting undisplayed frames if necessary; the inactive back buffer (which holds the latest completed frame) is swapped with the front buffer when a vblank occurs. (Very high GPU usage unless you're CPU bound, no extra input lag and no freezes; frame rates can exceed the monitor's refresh rate even with vsync enabled, although the extra frames are simply thrown away.)
