Should I use timeBeginPeriod/timeEndPeriod?

9 comments, last by szecs 12 years, 8 months ago
I'm using timeGetTime and I just found those two commands and it sounds like some systems may have different defaults for the precision. Is it a good idea to use timeBeginPeriod and timeEndPeriod to make sure my code works the same on different systems?
Regardless of what you do with those functions, you may (read: will) still get different timing values on different systems. The correct approach is to ensure that your code is robust enough to absorb incidental timing differences.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

You can set/get the precision, but I'd rather just use QueryPerformanceCounter() instead anyway.
AFAIK, timeBeginPeriod isn't primarily related to timeGetTime -- it changes the granularity of Windows' thread scheduling, which causes Windows to (globally, across all programs) switch threads more often.

It's a good idea to use the performance timer instead of timeGetTime if you need accurate timings.
Regardless of what you do with those functions, you may (read: will) still get different timing values on different systems. The correct approach is to ensure that your code is robust enough to absorb incidental timing differences.

I agree with this. Also, you can use timeGetDevCaps to obtain the min/max period you can pass to timeBeginPeriod/timeEndPeriod.
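For example, here's a minimal sketch of how those calls fit together (my own illustration -- the helper name RunWithRaisedTimerResolution is made up, and you need to link against winmm.lib):

#include <windows.h>
#include <mmsystem.h>

void RunWithRaisedTimerResolution()
{
	TIMECAPS caps;
	if( timeGetDevCaps( &caps, sizeof(caps) ) != TIMERR_NOERROR )
		return; // couldn't query the supported period range

	// Ask for 1ms, but never go below what the hardware/driver supports.
	UINT period = caps.wPeriodMin > 1 ? caps.wPeriodMin : 1;

	timeBeginPeriod( period ); // raises timer resolution system-wide
	// ... timing-sensitive work using timeGetTime() here ...
	timeEndPeriod( period );   // always pair with the same period value
}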

You can set/get the precision, but I'd rather just use QueryPerformanceCounter() instead anyway.

Are there any downsides to using this command over timeGetTime?

[quote name='clashie']
You can set/get the precision, but I'd rather just use QueryPerformanceCounter() instead anyway.

Are there any downsides to using this command over timeGetTime?
[/quote]
Using performance counters requires quite a lot more code, and my personal experience is that timeGetTime is actually very accurate. So unless you need nanosecond accuracy I wouldn't recommend them. Besides, my own experience with performance counters is not the best (they are hopelessly inaccurate if you are planning on using them to drive a timer, for instance).

my own experience with performance counters is not the best (they are hopelessly inaccurate if you are planning on using them to drive a timer, for instance).
I can't say that your experience is false, but I can say that the statement that the performance-counter is inaccurate for driving a timer is simply false.

[font="arial, verdana, tahoma, sans-serif"]
Are there any down sides to using this command over timeGetTime?
It's slightly more complex to use than the multimedia timer (timeGetTime) -- there's a frequency value as well as a counter value, and you've got to divide them to get actual time values. If you don't do this in double-precision, you get get jittery results.[/font]
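For reference, a minimal sketch of the counter/frequency dance (my own illustration, not anyone's production code):

#include <windows.h>

double GetSeconds()
{
	static LARGE_INTEGER frequency = { 0 };
	if( frequency.QuadPart == 0 )
		QueryPerformanceFrequency( &frequency ); // counts per second, fixed at boot

	LARGE_INTEGER counter;
	QueryPerformanceCounter( &counter );

	// Divide in double precision -- converting the raw counts to float first
	// loses precision once the counter grows large, which shows up as jitter.
	return (double)counter.QuadPart / (double)frequency.QuadPart;
}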

With either timer (timeGetTime or the performance counter) you also have to be careful how you compute time deltas. If you only ever compare against last frame's absolute time, you'll get drift in your timer (running too slow or too fast compared to real time). If you do write this kind of slightly-bugged code, then the results from timeGetTime will be worse, as the drift values will be in milliseconds instead of nanoseconds.

[font="arial, verdana, tahoma, sans-serif"] [/font][font="arial, verdana, tahoma, sans-serif"]However, someone has already done all of this for you! I recommend just using timer_lib[/font][font="arial, verdana, tahoma, sans-serif"] -- free to use and implements an accurate timer across platforms.[/font]
--- an on-topic note, timer_lib does call [font="Courier New"]timeBeginPeriod(1)[/font] if it's compiled with [font="Courier New"]timeGetTime[/font] support.

[font="arial, verdana, tahoma, sans-serif"] [/font][font="arial, verdana, tahoma, sans-serif"]The one piece of FUD against the performance counter is that some original multi-core motherboard chipsets had a bug in their hardware timer that caused the performance counter to jump backwards in time occasionally. This has since been fixed, but if you were running an unpatched version of that motherboard (~10 years ago?) then games would be buggy for you.
[/font][font="arial, verdana, tahoma, sans-serif"]It's safe to ignore this bug (as there's a patch for people affected), but the above library has a #define that you can enable (USE_FALLBACK), which detects and corrects the problem using timeGetTime as a backup timer for people using those motherboards who haven't patched them.[/font]

With either timer (timeGetTime or the performance counter) you also have to be careful how you compute time deltas. If you only ever compare against last frame's absolute time, you'll get drift in your timer (running too slow or too fast compared to real time). If you do write this kind of slightly-bugged code, then the results from timeGetTime will be worse, as the drift values will be in milliseconds instead of nanoseconds.


Would you explain this a bit further?
Would you explain this a bit further?
For example, let's assume our timer has 1ms resolution (it counts in whole milliseconds) and is accurate to within +/- 0.5ms, and assume that the timer starts at 0.

In this table, the first column is the real-world elapsed (absolute) time at each frame.
The second column is the real-world delta time per frame -- let's pretend we're alternating between 30Hz and 60Hz.
The third column is the absolute time value that our +/- 0.5ms timer will report.
The fourth column is the range of differences between those reported values.
The fifth column is the reconstructed absolute time we'd get if we were to sum our calculated deltas.

actual abs | actual delta | reported abs | diff  | sum of diffs
   0.0     |     0.0      |    0-  0     |   0   |    0
  33.3...  |    33.3...   |   33- 34     |  33   |   33
  50.0...  |    16.6...   |   50- 50     | 16-17 |  49- 50
  83.3...  |    33.3...   |   83- 84     | 33-34 |  82- 84
  99.9...  |    16.6...   |   99-100     | 15-17 |  97-101
 133.3...  |    33.3...   |  133-134     | 33-35 | 130-136
 149.9...  |    16.6...   |  149-150     | 15-17 | 145-153

If the frame-rate holds steady, then the average error will pretty much even out, and even though you've got +/-0.5ms error, it won't really drift anywhere. But if you've got changing frame-rates, or any kind of bias at all in the timer's error (which is out of your control), then it will slowly drift out of sync with the real world. As you can see, the potential drift is already pretty bad after just half a dozen frames.

Even if you do have this kind of dirty timer, you can easily avoid this kind of error accumulation by never adding delta-time values to anything -- which is easy for timers, but a bit harder for your physics, etc.
e.g. Here's one kind of game object with a timer -- this version accumulates delta-time errors:

struct Bomb
{
	void LightFuse( float delay ) { m_timer = delay; }  // fuse stored as a countdown
	void Update( float delta )    { m_timer -= delta; if( m_timer <= 0 ) Explode(); } // the error in every delta accumulates in m_timer
	void Explode() {}
private:
	float m_timer;
};

Here's a better version that doesn't accumulate delta-time errors:

struct Bomb
{
	void LightFuse( float delay, float abs ) { m_timer = abs + delay; }         // fuse stored as an absolute explosion time
	void Update( float delta, float abs )    { if( abs > m_timer ) Explode(); } // only compares against absolute time
	void Explode() {}
private:
	float m_timer;
};
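For completeness, a hypothetical usage sketch of the second version -- GetAbsoluteTimeSeconds is a made-up helper standing in for whatever clock you wrap (e.g. the performance counter):

Bomb bomb;
bomb.LightFuse( 3.0f, GetAbsoluteTimeSeconds() ); // explode 3 seconds from now

float last = GetAbsoluteTimeSeconds();
while( running )
{
	float abs   = GetAbsoluteTimeSeconds(); // read the clock's absolute time each frame
	float delta = abs - last; last = abs;   // delta is still available for physics, etc.
	bomb.Update( delta, abs );              // the fuse only ever compares against abs
}

Because the bomb only ever compares against absolute time, no per-frame rounding error accumulates in its fuse.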

This topic is closed to new replies.
