Prune

QueryPerformanceCounter and SpeedStep, EIST, etc.


If SpeedStep, EIST, or other clock-speed-varying features are enabled on a system, will that make QueryPerformanceCounter() inaccurate as a timer? That is, would I have to also check QueryPerformanceFrequency() several times a second?

If I stick with timeGetTime(), is it guaranteed that, say, timeBeginPeriod(2) will give me at least 2 ms resolution (about what's acceptable)?

I also want to ask whether gettimeofday() on Linux might be susceptible to clock speed variations, or whether it's closer to timeGetTime() than to QueryPerformanceCounter().

[Edit] Then there's also GetSystemTimeAsFileTime() on Windows and the times() and sysconf(_SC_CLK_TCK) pair on Linux.

Basically, I want to know, for both Windows and Linux (assuming the CPU is at least a Core i7), which method has the least overhead for getting elapsed time accurate to at least 2 ms (relative time only; I don't care about rollovers after X days).

[Edited by - Prune on June 3, 2009 9:04:43 PM]

I think QueryPerformanceFrequency() returns the frequency of the performance counter, not the actual CPU clock frequency, so varying the CPU speed shouldn't affect it. (Imagine overclocking your CPU and having the system time run faster [lol])

Quote:
Original post by Prune
If SpeedStep, EIST or other clock-speed varying features are enabled on a system, will that make QueryPerformanceCounter() inaccurate as a timer? That is, I would have to also check QueryPerformanceFrequency() several times a second...
On some CPUs, QueryPerformanceCounter() reads the CPU's time-stamp counter, which causes it to be broken on SpeedStep CPUs. There's a patch you can download for AMD processors that fixes that. The value returned by QueryPerformanceFrequency() should not change while the system is running (per the documentation for the function).

Quote:
Original post by Prune
If I stick with timeGetTime(), is it guaranteed that say timeBeginPeriod(2) will give me at least 2 ms resolution (about what's acceptable)?
I've never had timeGetTime() fail to give me 1 ms accuracy, but the documentation does say the minimum achievable resolution varies from system to system.

Quote:
Original post by Prune
I also want to ask if gettimeofday() on Linux might also be susceptible to clock speed variations, or is it more similar to timeGetTime() than QueryPerformanceCounter()
[Edit] Then there's also GetSystemTimeAsFileTime on Windows and the times() and sysconf(_SC_CLK_TCK) pair on Linux.
Basically I want to know for both Windows and Linux, assuming CPU is at least Core i-7, which is the method with least overhead to get at least 2 ms-accurate elapsed time (only relative, and I don't care about rollovers after X days)
This is worth reading.

Under Windows, QPC supposedly works reliably under Vista and Windows 7 (hearsay, not tested); under XP and earlier it sucks big time, being unreliable and full of quirks.

Quote:
I've not had timeGetTime() fail to give me less than 1ms accuracy before
Haven't seen that happen either; timeGetTime() is actually quite OK if 1 ms resolution is good enough.


Regarding gettimeofday() on Linux I can't tell you much, but if it works anything like the other timers on Linux, it's just awesome.
I was once curious whether epoll_wait()ing on a timerfd was sufficiently accurate and reliable for my needs, so I tested how well it performed with ever-decreasing intervals.

It turned out that timing a program that waited for 10 seconds in steps of 10 us returned pretty much exactly 10 seconds of wall time and pretty much zero everything else, once you subtract the load/teardown overhead that an empty int main() { return 0; } program has too.
Whatever they did to implement timers is pretty darn amazing (unlike, say, AIO, which just sucks); a big, big plus for Linux here.

Much the same can be said about sleep accuracy, which is well below "a dozen milliseconds" under Linux. I haven't tested this one extensively, but it's definitely accurate in the sub-millisecond range.

