Tick timing becomes more inaccurate as more time passes

[font="Arial Black"]solved[/font]


Hello again.

The timing of my game events (mostly interp/extrap and movement tracking) relies on a steady tick rate/heartbeat. I am struggling to find an approach that doesn't drift further from accuracy the longer the program runs.

At the moment, I am using the HPET through the Windows API functions QueryPerformanceCounter and QueryPerformanceFrequency. I have also tried a similar method using clock().

I have added code to print how many seconds (nearly exactly) the application has been running for; I then compare this to the number of ticks passed and the time one tick takes. I understand that rounding issues will exist, but something odd seems to be happening.

Here is a pseudo-code version of my server's tick timing:

query performance counter value at app start
calculate counts per tick (tickrate 30, theoretical counter resolution of 200000: 200000/30 = 6667 rounded)
calculate performance counter value for when the next tick should start (current + counts per tick)
set current tick to 1
set minutes passed to 0
while(running){
    // do something to take up time like Sleep(1), or don't... it doesn't change the outcome either way from my findings

    get current counter value
    calculate gap = current - when next tick should start
    if(gap >= 0){ // a new tick is due
        increment the tick
        calculate counter value for when the next tick should start (current - gap + counts per tick)
        if(tickrate ticks have passed since this last resolved to true){
            increment minutes passed
            print exact seconds passed: (current counter - starting counter - gap) / counter frequency
        }
    }
}
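
For concreteness, here is a minimal C++ sketch of that loop on Windows. The variable names, the rounded counts-per-tick calculation, and the once-per-minute print are my own illustration of the description above, not the original code.

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER frequency, start, now;
    QueryPerformanceFrequency(&frequency);   // counts per second
    QueryPerformanceCounter(&start);         // counter value at app start

    const int tickRate = 30;
    // Counts per tick, rounded to the nearest whole count as described above.
    const LONGLONG countsPerTick = (frequency.QuadPart + tickRate / 2) / tickRate;

    LONGLONG nextTickCount = start.QuadPart + countsPerTick;
    long long tick = 1;
    bool running = true;

    while (running)
    {
        Sleep(1);  // take up a little time; per the post it doesn't change the outcome

        QueryPerformanceCounter(&now);
        LONGLONG gap = now.QuadPart - nextTickCount;
        if (gap >= 0)  // a new tick is due
        {
            ++tick;
            nextTickCount = now.QuadPart - gap + countsPerTick;

            if (tick % (tickRate * 60) == 0)  // roughly once per minute of ticks
            {
                double seconds = double(now.QuadPart - start.QuadPart - gap)
                                 / double(frequency.QuadPart);
                std::printf("ticks: %lld, measured seconds: %f\n", tick, seconds);
            }
        }
    }
}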


Every time a minute has passed, the printed value in seconds drifts upwards by 0.09, which shows that the tick count doesn't accurately represent the time passed; each tick is in fact taking a tiny bit less time than it should.

Here is another strange thing that happened. I originally wrote a class to calculate ticks on the client app using clock(), and used the performance timer on the server. Tick signals reaching the client would be behind the ticks the client was on (when it should have been the opposite). In an attempt to debug the server, I started using that class on the server too; the inaccuracy is the same as reported by my printed information, but the synchronization with the client started behaving as I had expected in the first place (the client is always about 1 or 2 ticks behind the server). That seems quite paradoxical to me, but I won't worry about it just yet.

Here is a screenshot (using clock() on the server, but the command window shows the same results as I got using the High Precision Event Timer):
[screenshot: tickTiming.png]
The green underlined text is printed based on the number of ticks passed; in the same conditional statement it prints the calculated number of seconds passed. As you can see, this drifts upwards.

Essentially, I need to fix my code so that the inaccuracy doesn't keep growing the longer the program runs.


Have I done something horribly wrong with my logic, or do I need to measure the inaccuracy at runtime and account for it? I understand that rounding errors will occur, but I can't think of a way to do this that doesn't suffer from them.
You're supposed to target a time to update.

void Run()
{
    // Round the starting time down to a 100 ms boundary so updates land on
    // 100 ms multiples.
    boost::posix_time::ptime targetTime = boost::posix_time::microsec_clock::local_time();
    boost::posix_time::time_duration targetTimeTD = targetTime.time_of_day();
    targetTimeTD = boost::posix_time::microseconds(targetTimeTD.total_microseconds() / 100000 * 100000);
    targetTime = boost::posix_time::ptime(targetTime.date(), targetTimeTD);

    while(true)
    {
        //boost::posix_time::ptime currentTime = boost::posix_time::microsec_clock::local_time();
        //Log[Logger::Debug] << "Current Time: " << boost::posix_time::to_simple_string(currentTime);
        Update();
        boost::posix_time::ptime timeAfterUpdate = boost::posix_time::microsec_clock::local_time();

        // Sleep only for whatever remains until the absolute target time, then
        // advance the target by exactly one period, so no error accumulates.
        long long int delayTime = std::max<long long int>(0, (targetTime - timeAfterUpdate).total_microseconds());
        boost::this_thread::sleep(boost::posix_time::microseconds(delayTime));
        targetTime += boost::posix_time::milliseconds(100);
    }
}

In that code I'm updating every 100 ms. So it looks like:
100, 200, 300, 400, 500.
Now you have updates that each take a few ms, and you sleep just enough to reach the next update time. So let's say you're at 100 ms and your update took 5 ms: you'd sleep for 95 ms, since 200 ms - (100 ms + 5 ms) = 95 ms. Generically, you sleep targetTime - timeAfterUpdate.



It seems as if you're effectively working with minutes that are 60.0362 seconds long; each tick has the same length in your code.

With your HPET frequency of 2343808 and 30 ticks per second you should get one tick every 78126.93 HPET counts, but you use the value 78126, cutting off the trailing 0.93 (integer division). When you calculate the elapsed time you use floating-point division, which means you don't treat time equally in the two cases.
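
One way to keep that truncation from accumulating (a sketch, not the poster's code; the names are illustrative) is to derive each tick's target counter value directly from the baseline instead of repeatedly adding a rounded counts-per-tick value:

#include <cstdint>

// Target counter value for tick number tickIndex, measured from the baseline.
// The integer division truncates at most once per call, so the error never
// grows beyond a single counter increment no matter how long the program runs.
int64_t TickTargetCount(int64_t baselineCount, int64_t counterFrequency,
                        int tickRate, int64_t tickIndex)
{
    return baselineCount + (tickIndex * counterFrequency) / tickRate;
}
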
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
[screenshot: tickTiming2.png]

I am not sure what I changed, but it seems to work very nicely now. Of course there is going to be some inaccuracy in the "counts per tick" value, but I can either leave it as it is or add some self-detection of the inaccuracy, so that it adjusts counts-per-tick to sway the error in the other direction once it has drifted by a certain amount, and then repeats in the opposite direction.

As long as it is accurate on average over long time periods, it should serve its purpose extremely well.




The main thing is to make sure that you call QueryPerformanceCounter() *once* at the beginning of your program, and use this as the "baseline", stored as an int64.
You also want to call QueryPerformanceFrequency() at that time, and save 1.0 / that-value-as-double as a conversion factor.
At some further point in time, when you want to know "what time is it," then do the following:
- QueryPerformanceCounter() (as int64)
- subtract your original baseline (as int64)
- multiply by conversion factor (as double)
This will give you seconds elapsed, as a double.

This may truncate time by one granule of the frequency, but only for that measurement. Because the next measurement is again relative to the first baseline, that error is not propagated.
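
A minimal sketch of that scheme, assuming Windows; the Clock name and layout are illustrative, not from the post:

#include <windows.h>

struct Clock
{
    LONGLONG baseline;        // QueryPerformanceCounter value at startup
    double   secondsPerCount; // 1.0 / QueryPerformanceFrequency

    Clock()
    {
        LARGE_INTEGER frequency, start;
        QueryPerformanceFrequency(&frequency);
        QueryPerformanceCounter(&start);
        baseline = start.QuadPart;
        secondsPerCount = 1.0 / double(frequency.QuadPart);
    }

    // Seconds elapsed since construction, as a double. Every call is measured
    // against the same baseline, so any truncation stays within one call and
    // never accumulates.
    double Seconds() const
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return double(now.QuadPart - baseline) * secondsPerCount;
    }
};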

Also, the main loop typically looks like:

last sim time = 0
baseline = 0
forever
- get current time relative to baseline
- render graphics based on current time (perhaps forward extrapolated from last sim time)
- n = (current time - last sim time) / sim duration
- while n >= 1
-- simulate 1 step
-- subtract 1 from n
-- add sim duration to last sim time
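
A rough C++ rendering of that loop, again assuming Windows; Render() and SimulateOneStep() are placeholder stubs for the game's own code, not anything from the thread:

#include <windows.h>

static void Render(double /*currentTime*/) {}  // stand-in for drawing a frame
static void SimulateOneStep() {}               // stand-in for one fixed sim step

int main()
{
    LARGE_INTEGER frequency, baseline, now;
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&baseline);

    const double simDuration = 1.0 / 30.0;  // one simulation step per 1/30 s
    double lastSimTime = 0.0;

    for (;;)
    {
        // Current time relative to the baseline, in seconds.
        QueryPerformanceCounter(&now);
        double currentTime = double(now.QuadPart - baseline.QuadPart)
                             / double(frequency.QuadPart);

        Render(currentTime);  // perhaps forward extrapolated from lastSimTime

        // Run every fixed step that has become due since the last frame.
        while (currentTime - lastSimTime >= simDuration)
        {
            SimulateOneStep();
            lastSimTime += simDuration;
        }
    }
}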




Repeating what hplus hinted: make sure you use the proper types.

A float's precision is 6 significant decimal digits. If your simulation has been running for over 1000 seconds you can't accurately see milliseconds any more.

A double will give you 15 significant decimal digits. With that you can still see milliseconds accurately for a very long time.

Keep your time as an int64 so you don't experience floating point drift over time.
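
As a tiny illustration of that precision difference (the numbers are purely illustrative, not from the thread): adding one millisecond to a timestamp of 1000 seconds is already lossy in a 32-bit float but not in a double.

#include <cstdio>

int main()
{
    float  f = 1000.0f;
    double d = 1000.0;
    f += 0.001f;  // nearest representable float is about 1000.000977
    d += 0.001;   // a double keeps 1000.001 comfortably
    std::printf("float:  %.6f\n", f);
    std::printf("double: %.6f\n", d);
}
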
OK, it turns out my timing was always fine; the problem was with how I was reporting the time passed.

hplus, that is how I am doing it; my pseudo-code was a bit over-simplified. The error in my code is only caused by my initial recording of the baseline (it is taken during init instead of when the first tick starts), so there is/was an overall error of about 0.003 seconds. I have since edited my ticker class to account for that and am now using the HPET on the client too.

Thanks for the help, guys. I still think there will be a long-term inaccuracy due to the calculation of countsPerTick (HPET frequency / tick rate), but I can make the code detect its own inaccuracy and adjust countsPerTick by +/- 1 so that it drifts in the opposite direction; over the long term it then stays as accurate as the HPET on the machine. It starts off with each tick taking slightly more than 1/30th of a second; after detecting that it has drifted too far in that direction I can make each tick take slightly less than 1/30th of a second, and vice versa.
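
A sketch of that compensation idea (all names and the tolerance are my own illustration, not the actual ticker class): periodically compare the counter value the tick count should correspond to against the counter value actually reached, and flip countsPerTick between the floor and the ceiling of frequency / tickRate.

#include <cstdint>

void AdjustCountsPerTick(int64_t counterFrequency, int tickRate,
                         int64_t ticksElapsed, int64_t countsElapsed,
                         int64_t& countsPerTick)
{
    // Counter counts that *should* have elapsed after ticksElapsed ticks.
    int64_t idealCounts = (ticksElapsed * counterFrequency) / tickRate;
    int64_t tolerance   = counterFrequency / 1000;  // allow ~1 ms of drift

    if (countsElapsed - idealCounts > tolerance)
        countsPerTick = counterFrequency / tickRate;      // running long: shorten ticks
    else if (idealCounts - countsElapsed > tolerance)
        countsPerTick = counterFrequency / tickRate + 1;  // running short: lengthen ticks
}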

I now have the issue of dealing with sudden delays in TCP. It can be very smooth for ages and then a burst of data arrives an extra 200-600 ms late (very easy to force by sending something at maximum bandwidth over the LAN from the host machine with ftp), which is to be expected of TCP, of course. It's fine on localhost though, which is all I need to demonstrate with this; I will build something else from the ground up with a UDP system. Hmmm... now to send floats...
