Alternative to windows high-performance timer (QueryPerformanceCounter())?

Quote:Original post by Mercenarey
Bah, I had no idea this would be so complicated. Incredible that no standard exists for a thing like timing, a part so fundamental and integral to computing. A standard that should exist in both hardware and software IMO.

The C++ standard can't possibly incorporate a function to do this. The underlying assembly instruction that gives the programmer access to the CPU internal timer is RDTSC, and according to Wikipedia, "RDTSC was formally included in the X86 assembly language in Pentium II."

The Pentium II was introduced in 1997, and the C++ standard was first published in 1998. If they had included this as a requirement for a compiler to be compliant, it would be impossible to write C++ compilers for many platforms, including most of the personal computers in existence at the time the standard was published.
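For reference, here is a minimal sketch of reading that counter directly, assuming an x86/x64 compiler that exposes the __rdtsc intrinsic (MSVC via <intrin.h>, GCC/Clang via <x86intrin.h>). Note that the result is a raw cycle count, so converting it to seconds still requires knowing the CPU frequency, and on older CPUs the rate can vary with power management:

```cpp
#include <cstdio>
#ifdef _MSC_VER
#include <intrin.h>       // MSVC: __rdtsc
#else
#include <x86intrin.h>    // GCC/Clang: __rdtsc
#endif

int main()
{
    unsigned long long start = __rdtsc();   // raw CPU cycle count
    // ... code being measured ...
    unsigned long long end = __rdtsc();
    std::printf("elapsed: %llu cycles\n", end - start);
    return 0;
}
```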

By the way, what happened to your rating?

Intel's Threading Building Blocks has a multiplatform, thread-safe and (apparently) high-resolution timer. This might be of interest to you. Whilst it isn't the one end-all/be-all solution for every platform (as indicated above, that doesn't exist), it might still be of help.
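For what it's worth, a small sketch of how that TBB timer is used, assuming TBB is installed and linked (e.g. -ltbb); the relevant class is tbb::tick_count:

```cpp
#include <iostream>
#include "tbb/tick_count.h"

int main()
{
    tbb::tick_count t0 = tbb::tick_count::now();
    // ... code being measured ...
    tbb::tick_count t1 = tbb::tick_count::now();
    std::cout << "elapsed: " << (t1 - t0).seconds() << " s\n";
    return 0;
}
```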
"By the way, what happened to your rating?"

I had an opinion. That can be quite dangerous around here :).
It was a discussion I started on DirectX vs. OpenGL (whether OpenGL had wasted a golden opportunity when MS limited DX10 to Vista).


As for the standard:
Standards can be added later on. The history you mention is 10 years old. That is like medieval times in a business moving as fast as this one.
Again, opinion - which can be dangerous, hehe.


@_Sigma:
Interesting, I will have a look at that Intel implementation (even if it doesn't solve the problem, it looks like a nice implementation once I choose to go multiplatform).
Quote:Original post by Mercenarey
"By the way, what happened to your rating?"

I had an opinion. That can be quite dangerous around here :).
It was a discussion I started on DirectX vs. OpenGL (whether OpenGL had wasted a golden opportunity when MS limited DX10 to Vista).

Well, judging only by this thread, you do seem to have strong opinions about things you don't fully understand. It doesn't particularly bother me, but I can see that it might bother others.

Quote:As for the standard:
Standards can be added later on. The history you mention is 10 years old. That is like medieval times in a business moving so fast as this one.
Again, opinion - which can be dangerous, hehe.

The more features they require of the hardware in the standard, the fewer platforms it will apply to. C++ can be used to program a router, a robot roaming around Mars or a washing machine. Some of those may not have a high-precision timer.

Another reason why it's not part of the standard is that it's not useful to enough people. Sure, there are a lot of people who don't need timing at all, but that's the case for any other library feature. However, when considering high-precision timing, there's a high chance that the application is severely time-dependent in its operations, and therefore it will also involve a lot of platform-specific code for handling thread priority, real-time execution, and similar concerns that are coupled with high-precision timing (or, in short, if you don't need to be executed with millisecond precision, why would you need to tell time with millisecond precision?). For instance, running on an embedded chip to do robotic manipulation involves collaborating with a real-time OS, so one more function call isn't an issue. So, since there's a lot of platform-specific development going on anyway, there's no point in requiring a high-precision timer.

Of course, one cannot help but wonder why the sub-par "clock" implementation was kept (not only does it return a value that is NOT a time expressed in a valid unit, but it also FIXES the precision of the measure at compile-time). A smarter time measurement scheme would have been most welcome, especially one which could be implemented in terms of std::clock on dumb platforms, but could use the highest available precision on others.
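For comparison, a quick sketch of the std::clock facility being criticised here: the raw tick count has to be divided by CLOCKS_PER_SEC, the resolution is whatever the implementation fixed at compile time, and on many platforms it measures processor time rather than wall-clock time:

```cpp
#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();
    // ... code being measured ...
    std::clock_t end = std::clock();

    // The tick unit is implementation-defined; only this ratio is meaningful.
    double seconds = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    std::cout << "elapsed: " << seconds << " s\n";
    return 0;
}
```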

In the end, though, I cannot help but wonder why you would need a high-performance timer in a video game setting. The two areas I would identify are profiling (at which point I would suggest using a non-intrusive profiler instead) and game logic timing (where an interpolated 15-millisecond precision is enough).
Accurate timing is far from a trivial problem.

Merely having a variable that holds an "exact, accurate, high-resolution" time is not enough. Consider that you want to obtain this value, but the function call may sometimes trigger a cache miss and an extra access delay of 200 cycles while reading and storing the data.

Another problem is resolution. How high? Is 1 ms enough? 1 us? Nuclear physicists would say that's useless, since they prefer ps (less than 1 cycle on modern CPUs).

How large would the time-stamp be? The higher the resolution, the faster the wrap-around, or the bigger the time-stamp.
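As a rough illustration of that trade-off, a back-of-the-envelope calculation (assuming a hypothetical 1 ns tick): a 32-bit counter wraps in a few seconds, while a 64-bit one lasts for centuries:

```cpp
#include <cstdio>

int main()
{
    const double ticks_per_second = 1e9;                                  // hypothetical 1 ns tick
    const double seconds_32 = 4294967296.0 / ticks_per_second;           // 2^32 ticks
    const double seconds_64 = 18446744073709551616.0 / ticks_per_second; // 2^64 ticks

    std::printf("32-bit counter wraps after about %.1f seconds\n", seconds_32);  // ~4.3 s
    std::printf("64-bit counter wraps after about %.0f years\n",
                seconds_64 / (365.25 * 24.0 * 3600.0));                          // ~585 years
    return 0;
}
```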

The speed of light is "slow" when you get into high-resolution timing. This requires a delay line that is calibrated to the environment (temperature, perhaps pressure, humidity - circuits do deform).

Then there's drift. Today's wrist watches are 'accurate'. I believe they tend to be within +/- 1 second a day. That may seem good enough, but again, it depends on what you need - it's certainly problematic when synchronizing across different machines.

The most accurate high-resolution timing solutions require deterministic real-time processing: the timing device streams time-stamps, the measuring device samples data, and both are delivered as a block of data to some non-real-time system for processing. Or they require a hard real-time OS. The problem with such an OS is that if you skip a beat, it fails - you might as well crash.

And it gets even funnier. Relativity is something people don't consider much, but with high-resolution timing, taking your laptop on a train, let alone an airplane, would cause havoc with the values. GPS systems provide perhaps the most accurate timing available, since the algorithms they use rely to an extent on the distortion of time on the satellites. And then there are different altitudes, where absolute velocity differs. This may seem far-fetched, but a simple calculation shows that these effects can be measured, and for applications (in the sense of "to apply", not computer programs) they do affect the values.

Last, there's price. Lower-end timing circuits start at thousands of dollars. High-precision timers require calibration of all participants before use, and need to work in a controlled environment. They are also intended for short bursts.

For high-uptime systems, periodic synchronization and re-calibration is a given. And even there, one faces the problem of only being able to achieve synchronization locally, between one's own machines.

Universal time, as used to measure yearly drift and determine true time (leap seconds, accurate down to fs), is measured by over one hundred institutions world-wide. Once a month, they submit their data, which is then statistically evaluated to determine universal time. The process takes about two months.

So even in by far the most standardized and professional setting, obtaining accurate real time is an incredibly complex and time-consuming process.

Engineering solution: Take the 1ms timer with 1 second drift over 24 hours, and design your algorithm around it. For higher-precision timing, just take into consideration the better timers, and work around their oddities. It's as good as it gets.
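As a rough illustration of that engineering approach, here is a sketch of a fixed-timestep loop built on a deliberately coarse timer. read_timer_ms(), update() and render() are placeholders of my own, with std::clock standing in for whatever ~1 ms platform timer is available:

```cpp
#include <ctime>

// Stand-in for a coarse platform timer (e.g. a 1 ms timer with some drift).
static double read_timer_ms()
{
    return 1000.0 * static_cast<double>(std::clock()) / CLOCKS_PER_SEC;
}

static void update(double dt) { (void)dt; /* advance game logic by dt seconds */ }
static void render()          { /* draw the current state */ }

int main()
{
    const double step_ms = 10.0;          // fixed logic step, independent of timer precision
    double previous = read_timer_ms();
    double accumulator = 0.0;

    for (int frame = 0; frame < 1000; ++frame)   // bounded loop for the example
    {
        double now = read_timer_ms();
        accumulator += now - previous;    // whatever time actually elapsed
        previous = now;

        while (accumulator >= step_ms)    // consume it in fixed 10 ms steps
        {
            update(step_ms / 1000.0);     // logic always sees exactly 10 ms
            accumulator -= step_ms;
        }
        render();
    }
    return 0;
}
```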
"The more features they require of the hardware in the standard, the fewer platforms it will apply to. C++ can be used to program a router, a robot roaming around Mars or a washing machine. Some of those may not have a high-precision timer."

Would it have to be a C++ standard? Maybe it could be a standard on another level? Maybe in the standard library.

If it is possible to make hardware standards for graphics and drivers, it should certainly be possible for something as fundamental as a timer.
Antheus:
"Another problem is resolution. How high? Is 1 ms enough? 1 us? Nuclear physicists would say that's useless, since they prefer ps (less than 1 cycle on modern CPUs)."

Good points. Development keeps moving, and so would the timing resolution. It is hard to make a standard for a moving future.
[nm]
Nah, just make the functions agnostic to future considerations. Have one function return "the highest-resolution 'tick' count available" and another return "the amount of time in seconds per tick", and that gives you enough information to build a timer of any resolution upon. Basically a standardized QueryPerformanceCounter and QueryPerformanceFrequency, except I'd add the requirement that they return stable values rather than jump around like the Query* functions do.
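Something like the following sketch, perhaps; the names are mine, and on Windows the two primitives would simply wrap QueryPerformanceCounter and QueryPerformanceFrequency (other platforms would supply their own definitions behind the same interface):

```cpp
#include <windows.h>

// Highest-resolution tick count available.
unsigned long long ticks_now()
{
    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);
    return static_cast<unsigned long long>(counter.QuadPart);
}

// Ticks per second (the inverse of "seconds per tick").
unsigned long long ticks_per_second()
{
    LARGE_INTEGER frequency;
    QueryPerformanceFrequency(&frequency);
    return static_cast<unsigned long long>(frequency.QuadPart);
}

// Any-resolution timing built on top of the two primitives.
double elapsed_seconds(unsigned long long start, unsigned long long end)
{
    return static_cast<double>(end - start)
         / static_cast<double>(ticks_per_second());
}
```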

