Alternative to windows high-performance timer (QueryPerformanceCounter())?


I need to be independent of Windows, which I'm not at the moment. My performance-timer class still depends on the Windows QueryPerformanceCounter() function, which is a problem because it sits at such a low level that it links in too much other stuff. Even when I factory my way out of the problem, it is too much luggage to carry the entire Windows library around just to do high-performance timing. Do you guys know of an alternative? Can std::clock() manage a resolution comparable to QueryPerformanceCounter()?

There is probably no way around it.
Make it generic/pluggable.

There is probably something like this on the intarwebs, using some #ifdefs.
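
Something like this, as a rough sketch (the function name high_res_seconds() is made up for illustration):

#ifdef _WIN32
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>

    // Windows: QueryPerformanceCounter ticks converted to seconds.
    double high_res_seconds() {
        LARGE_INTEGER freq, count;
        QueryPerformanceFrequency(&freq);   // ticks per second
        QueryPerformanceCounter(&count);    // current tick count
        return (double)count.QuadPart / (double)freq.QuadPart;
    }
#else
    #include <sys/time.h>

    // Everything else (POSIX): gettimeofday, microsecond wall-clock time.
    double high_res_seconds() {
        struct timeval tp;
        gettimeofday(&tp, 0);
        return tp.tv_sec + 0.000001 * tp.tv_usec;
    }
#endif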

I usually keep my projects very close to platform independence, except for one module whose header is called system.h. I then write different .cpp files that implement those functions on specific platforms, and use a slightly different list of files in the project for each platform.

system.h:
#ifndef SYSTEM_H_INCLUDED
#define SYSTEM_H_INCLUDED

double time_in_seconds();   // current time in seconds (origin is implementation-specific)

#endif





system_posix.cpp:
#include "system.h"
#include <sys/time.h>

double time_in_seconds() {
    struct timeval tp;
    gettimeofday(&tp, 0);                     // wall-clock time, microsecond resolution
    return tp.tv_sec + .000001 * tp.tv_usec;
}





system_standard.cpp:
#include "system.h"
#include <ctime>

double time_in_seconds() {
    // std::clock() nominally measures processor time, not wall-clock time,
    // and its resolution is limited to 1/CLOCKS_PER_SEC.
    return std::clock() / (double)CLOCKS_PER_SEC;
}




... etc.

Alvaro:
Yes, I know that way of doing it. That was what my comment about "factory my way out of the problems" was about. I use a BaseTimer class and then ask the factory to give me a timer. Only the factory knows the platform specifics (not all computers can run the PerformanceTimer - old ones have problems - so I need that flexibility in any case, so that I can switch over to a StandardTimer in case the computer doesn't support it).
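
Roughly, the setup looks something like this (a simplified sketch, not my actual code):

// Simplified sketch of the BaseTimer/factory setup; details are illustrative.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <ctime>

class BaseTimer {
public:
    virtual ~BaseTimer() {}
    virtual double seconds() const = 0;      // current time in seconds
};

class PerformanceTimer : public BaseTimer {  // QueryPerformanceCounter-based
public:
    PerformanceTimer() { QueryPerformanceFrequency(&freq_); }
    double seconds() const {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return (double)now.QuadPart / (double)freq_.QuadPart;
    }
private:
    LARGE_INTEGER freq_;
};

class StandardTimer : public BaseTimer {     // std::clock fallback
public:
    double seconds() const { return std::clock() / (double)CLOCKS_PER_SEC; }
};

// Only the factory knows which implementation the machine supports.
BaseTimer* createTimer() {
    LARGE_INTEGER freq;
    if (QueryPerformanceFrequency(&freq) && freq.QuadPart != 0)
        return new PerformanceTimer();
    return new StandardTimer();
}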

Still, it would suit me a lot better to just use some standard stuff in all cases :(
And it should be possible. If MS can do it, why not Std?

Anyway, I just wondered whether anyone knew if someone had done it; it doesn't look that way :(

Quote:
Original post by Mercenarey
Still, it would suit me a lot better to just use some standard stuff in all cases :(
And it should be possible. If MS can do it, why not Std?

Well, you don't get to redefine the language, and as it is, standard C++ does not provide a better timer than clock().

Quote:
Anyway, I just wondered whether anyone knew if someone had done it; it doesn't look that way :(

Well, I told you how to get a pretty accurate timer on POSIX systems, which these days cover just about everything that is not Windows.

I'm only running on Windows for now. I will make a note of your POSIX implementation.


Btw, it's not really about rewriting the language. It is just a matter of implementing a function that does the same as the Microsoft one, and that one accesses some special hardware (a crystal on the motherboard or something).

How come Microsoft can access this but no one else can?

Actually, reading the Linux man pages I just found something that might be closer to what you are doing on Windows:
#include <time.h>

double time_in_seconds() {
    struct timespec tp;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp);   // CPU time used by this process
    return tp.tv_sec + .000000001 * tp.tv_nsec;
}




Compile with `-lrt'.
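
One caveat: CLOCK_PROCESS_CPUTIME_ID counts CPU time consumed by the process, not elapsed wall-clock time. If what you want is frame-style wall-clock timing, CLOCK_MONOTONIC is probably the clock to ask for (a sketch along the same lines; the same -lrt note applies):

#include <time.h>

double wall_time_in_seconds() {
    struct timespec tp;
    clock_gettime(CLOCK_MONOTONIC, &tp);   // monotonic wall-clock time, nanosecond units
    return tp.tv_sec + .000000001 * tp.tv_nsec;
}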

Quote:
Original post by Mercenarey

Still, it would suit me a lot better to just use some standard stuff in all cases :(
And it should be possible. If MS can do it, why not Std?


Because the hardware doesn't provide such guarantees. All timing methods have issues of some sort, some worse than others. The problem here isn't with software but with hardware. Reliable, accurate timing just isn't that important on PCs, and for the applications that do require it, there's specialized hardware.

Quote:
Anyway, I just wondered whether anyone knew if someone had done it; it doesn't look that way :(


Jan Wassenberg has.

As his articles show, QueryPerformanceCounter isn't reliable, and one needs to write a custom driver to get proper timing under Windows as well.

Some salt for gettimeofday(): although it's claimed to have very high precision, I've seen a few Linux boxes where the granularity was considerably coarser.
I don't use Linux much anymore, so the fact that I already found a machine with this issue is quite indicative.
As for clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp), that doesn't really look like a real-time (wall-clock) timer.

Quote:
Original post by Mercenarey
(not all computers can run the PerformanceTimer - old ones have problems, so I need that flexibility in any case, so I can switch over to a StandardTimer in case the computer doesn't support Performance).
I have been using the perf timer for years now and I have had only a few glitches with it. I just make sure the affinity mask is set correctly and it seems to work OK even on early multicores (although I admit I haven't tested much - only a few X2s and two Intel Quads).
Maybe it jerks under high system load, as the article says. I'm not on a AAA budget, so I am simply taking the risk.
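
The affinity part is roughly this (a sketch; the actual code may differ): pin the thread that reads the counter to a single core before touching the Query* functions.

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

// Call once, early, from the thread that will read the performance counter,
// so QueryPerformanceCounter is always sampled on the same core (core 0 here).
void pin_timing_thread() {
    SetThreadAffinityMask(GetCurrentThread(), 1);
}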
Quote:

And it should be possible. If MS can do it, why not Std?
I suppose it's because MS is basically x86-only. A lot of processors have RDTSC-like instructions at the CPU or system level, but I suppose somebody may be using the standard library on cheap cell phones. You know, portability can be bad.

In my opinion, the performance counter is really unbeatable in most cases.

Bah, I had no idea this would be so complicated. Incredible that no standard exists for a thing like timing, a part so fundamental and integral to computing. A standard that should exist in both hardware and software IMO.

Since my only real problem is having to #include windows.h, it is not really worth the effort at the moment.

Thanks for the input, guys. I never cease to be amazed at the level of expertise on this board.
(Too bad I can't seem to affect your ratings, because my own rating is so low)

Quote:
Original post by Mercenarey
Bah, I had no idea this would be so complicated. Incredible that no standard exists for a thing like timing, a part so fundamental and integral to computing. A standard that should exist in both hardware and software IMO.

The C++ standard can't possibly incorporate a function to do this. The underlying assembly instruction that gives the programmer access to the CPU internal timer is RDTSC, and according to Wikipedia, "RDTSC was formally included in the X86 assembly language in Pentium II."

The Pentium II was introduced in 1997, and the C++ standard was first published in 1998. If they had included this as a requirement for a compliant compiler, it would have been impossible to write C++ compilers for many platforms, including most of the personal computers in existence at the time the standard was published.
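
For the curious, the instruction can be invoked from C or C++ through a compiler intrinsic; a small sketch (MSVC exposes __rdtsc() in <intrin.h>, and GCC has the same intrinsic in <x86intrin.h>; note the raw value is in CPU cycles, not seconds, and can differ between cores):

#include <intrin.h>   // MSVC; on GCC use <x86intrin.h> instead

// Read the CPU's 64-bit time-stamp counter (cycles since reset).
unsigned long long read_tsc() {
    return __rdtsc();
}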

By the way, what happened to your rating?

Intel's Threading Building Blocks has a multiplatform, thread-safe and (apparently) high-resolution timer, which might be of interest to you. While it isn't a one-platform end-all/be-all (as indicated above, that doesn't exist), it might still be of help.
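
A rough usage sketch of TBB's tick_count (check the TBB documentation for the exact header and interface in your version):

#include "tbb/tick_count.h"

double time_some_work() {
    tbb::tick_count t0 = tbb::tick_count::now();
    // ... work to be timed ...
    tbb::tick_count t1 = tbb::tick_count::now();
    return (t1 - t0).seconds();   // elapsed wall-clock time in seconds
}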

"By the way, what happened to your rating?"

I had an opinion. That can be quite dangerous around here :).
It was a discussion I started on DirectX vs. OpenGL (whether OpenGL had wasted a golden opportunity when MS limited DX10 to Vista).


As for the standard:
Standards can be added to later on. The history you mention is 10 years old; that is like medieval times in a business that moves as fast as this one.
Again, an opinion - which can be dangerous, hehe.


@_Sigma:
Interesting, I will have a look at that Intel implementation (even if it doesn't solve this particular problem, it looks like a nice implementation for when I choose to go multiplatform).

Quote:
Original post by Mercenarey
"By the way, what happened to your rating?"

I had an opinion. That can be quite dangerous around here :).
It was a discussion I started on DirectX vs. OpenGL (whether OpenGL had wasted a golden opportunity when MS limited DX10 to Vista).

Well, judging only by this thread, you do seem to have strong opinions about things you don't fully understand. It doesn't particularly bother me, but I can see that it might bother others.

Quote:
As for the standard:
Standards can be added to later on. The history you mention is 10 years old; that is like medieval times in a business that moves as fast as this one.
Again, an opinion - which can be dangerous, hehe.

The more features they require of the hardware in the standard, the fewer platforms it will apply to. C++ can be used to program a router, a robot roaming around Mars or a washing machine. Some of those may not have a high-precision timer.

Another reason it's not part of the standard is that it isn't useful to enough people. Sure, there are plenty of people who don't need timing at all, but that's true of any other library feature. When an application does need high-precision timing, however, there's a high chance it is severely time-dependent in its operation, and therefore it will also involve a lot of platform-specific code for thread priority, real-time execution, and similar concerns that come coupled with high-precision timing (in short: if you don't need to execute with millisecond precision, why would you need to tell time with millisecond precision?). For instance, code running on an embedded chip to do robotic manipulation has to collaborate with a real-time OS anyway, so one more platform-specific function call isn't an issue. Since there's a lot of platform-specific development going on in any case, there's little point in requiring a high-precision timer from the standard.

Of course, one cannot help but wonder why the sub-par clock() implementation was kept: not only does it return a value that is NOT a time expressed in a valid unit, it also FIXES the precision of the measurement at compile time. A smarter time-measurement scheme would have been most welcome, especially one that could be implemented in terms of std::clock on dumb platforms but could use the highest available precision on others.

In the end, though, I cannot help but wonder why you would need a high-performance timer in a video game setting. The two uses I can identify are profiling (where I would suggest a non-intrusive profiler instead) and game-logic timing (where an interpolated 15-millisecond precision is enough).

Accurate timing is far from a trivial problem.

Merely having a variable that holds an "exact, accurate, high-resolution" time-stamp is not enough. Consider that you want to read this value, but the function call may occasionally trigger a cache miss and an extra access delay of 200 cycles while reading and storing the data.

Another problem is resolution. How high? Is 1 ms enough? 1 us? Nuclear physicists would say that's useless, since they prefer ps (less than 1 cycle on modern CPUs).

How large should a time-stamp be? The higher the resolution, the faster the wrap-around, or the bigger the time-stamp.

The speed of light is "slow" when you get into high-resolution timing. This requires a delay line that is calibrated to the environment (temperature, perhaps pressure and humidity - circuits do deform).

Then there's drift. Today's wrist watches are 'accurate'; I believe they tend to be within +/- 1 second a day. That may seem good enough, but again, it depends on what you need - it's certainly problematic when synchronizing across different machines.

The most accurate high-resolution timing solutions require deterministic real-time processing. The timing device streams time-stamps, the measuring device samples data, and both are delivered as a block of data to some non-real-time system for processing. Or they require a hard real-time OS, and the problem with such an OS is that if it skips a beat, it fails - it might as well crash.

And it gets even funnier. Relativity is something people don't consider much, but with high-resolution timing, taking your laptop on a train, let alone an airplane, would cause havoc with the values. GPS systems provide perhaps the most accurate timing available, since their algorithms rely to an extent on the distortion of time on the satellites. And then there are different altitudes, where absolute velocity differs. This may seem far-fetched, but a simple calculation shows that these effects can be measured, and for applications (in the sense of applying something, not computer programs) they do affect the values.

Last, there's price. Lower-end timing circuits start at thousands of dollars. High-precision timers require calibrating all the participants before use, and they need to work in a controlled environment. They are also intended for short bursts.

For high-uptime systems, periodic synchronization and re-calibration is a given. And even there, one faces the problem of only being able to achieve synchronization locally, between one's own machines.

Universal time, as used to measure yearly drift and determine true time (leap seconds, accurate down to fs), is measured by over one hundred institutions world-wide. Once a month they submit their data, which is then statistically evaluated to determine universal time. The process takes about two months.

So even in this, by far the most standardized and professional setting, obtaining accurate real time is an incredibly complex and time-consuming process.

The engineering solution: take the 1 ms timer with 1 second of drift over 24 hours, and design your algorithm around it. For higher-precision timing, just take the better timers into consideration and work around their oddities. It's as good as it gets.

"The more features they require of the hardware in the standard, the fewer platforms it will apply to. C++ can be used to program a router, a robot roaming around Mars or a washing machine. Some of those may not have a high-precision timer."

Would it have to be part of the C++ language standard? Maybe it could be a standard at another level - maybe in the standard library.

If it is possible to make hardware standards for graphics and drivers, it should certainly be possible for something as fundamental as a timer.

Antheus:
"Another problem is resolution. How high? Is 1 ms enough? 1 us? Nuclear physicists would say that's useless, since they prefer ps (less than 1 cycle on modern CPUs)."

Good points. Technology keeps moving, and so would the required timing resolution. It is hard to make a standard for a moving target.

Nah, just make the functions agnostic to future considerations. Have one function return "the highest-resolution tick count available" and another return "the amount of time in seconds per tick"; that gives you enough information to build a timer of any resolution on top of them. Basically a standard version of QueryPerformanceCounter and QueryPerformanceFrequency - except I'd add the requirement that they return stable values rather than jumping around like the Query* functions do.
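
A sketch of that interface, with a Windows backend in terms of the Query* functions purely to show the shape (so this particular backend still inherits their quirks); the names ticks(), seconds_per_tick() and elapsed_seconds() are made up for illustration:

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

typedef unsigned long long tick_t;

// Highest-resolution tick count available.
tick_t ticks() {
    LARGE_INTEGER c;
    QueryPerformanceCounter(&c);
    return (tick_t)c.QuadPart;
}

// Amount of time, in seconds, represented by one tick.
double seconds_per_tick() {
    LARGE_INTEGER f;
    QueryPerformanceFrequency(&f);
    return 1.0 / (double)f.QuadPart;
}

// Any-resolution timer built on top of the two primitives.
double elapsed_seconds(tick_t begin, tick_t end) {
    return (double)(end - begin) * seconds_per_tick();
}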
