What is the most accurate timer to test speed of code?

Hi, I want to test the speed of my code, so I was wondering: what are the most accurate timer functions I could use to do so? Thanks
Well, your best bet for an accurate timer would be QueryPerformanceCounter(). Check MSDN for more detailed info.


If you've got a chunk of code you want to time, I'd suggest calling it something like 100 times and then dividing the total elapsed time by 100 to get yourself an average.
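Something like this, roughly (a sketch only; run_code_under_test is just a placeholder for whatever you're timing):

#include <windows.h>
#include <iostream>

void run_code_under_test()
{
    // ... the chunk of code you want to time ...
}

int main()
{
    const int iterations = 100;
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq); // counter ticks per second

    QueryPerformanceCounter(&start);
    for (int i = 0; i < iterations; ++i)
        run_code_under_test();
    QueryPerformanceCounter(&end);

    // total elapsed seconds divided by the iteration count gives the average
    double total = double(end.QuadPart - start.QuadPart) / freq.QuadPart;
    std::cout << "Average: " << total / iterations << " seconds per call\n";
}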
Actually, RDTSC is even more accurate. You might want to read this article, however.
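For example, a minimal sketch (assuming MSVC, where the __rdtsc intrinsic from <intrin.h> reads the CPU's time stamp counter; raw cycle counts are only comparable on the same machine):

#include <intrin.h>
#include <iostream>

int main()
{
    unsigned __int64 start = __rdtsc(); // read time stamp counter before
    //
    // code to measure
    //
    unsigned __int64 end = __rdtsc();   // and after
    std::cout << (end - start) << " cycles\n";
}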

Regards,
Andre
Andre Loker | Personal blog on .NET
Instead of using a timer, use a profiler. Profilers are designed exactly for the purpose of helping you figure out how fast your code runs and which parts take how long to do whatever it is they do. They'll not only be more accurate than something homebrew, but they will also give you more information than you probably would have logged (such as how many times each function runs, how long it takes on average, the maximum and minimum times, what percentage of the whole program's execution time it takes, etc.).
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Although a profiler will give you lots of information, they can also hide a whole bunch of stuff too. For example, the profiler in MSVC6 excludes time spent in system calls (although you can reduce this), so functions such as new/malloc/delete/free take zero time when in fact they can be a potential bottleneck.

As with everything, understand the tools you're using and their limits.

Skizz
Would it be possible to get a precise measurement of time by counting the exact instructions used (during system calls too) and then for each one take the processor specifics (no. of cycles etc) into account?
Quote:Original post by Anonymous Poster
Would it be possible to get a precise measurement of time by counting the exact instructions used (during system calls too) and then for each one take the processor specifics (no. of cycles etc) into account?

Not on a modern PC. Cache misses, jumps, and the like all potentially add extra time that a simple cycle count wouldn't account for. Plus, I'm pretty sure some instructions can be done in parallel.
a = a + b
c = c * d

wouldn't take as long as
a = a + b
c = a * d

because with the first, both instructions can be done simultaneously, while the second has to wait for the addition to complete. I might be wrong on this point, but I'm pretty sure that's part of the reason jumps are so costly: if the processor guesses the wrong control path, it's going to have to give up the results of any calculations it's already started when it gets to the actual jump.

CM
Skizz: Yes, you definitely must know your tools, and some tools have major 'bugs', but if you use any newer profiler you should get representative information about what takes how long in your program.

Conner McCloud: All the rules for parallel execution, branching, and other pipeline behaviour are well defined, so a pretty good estimate could be had, but the cache misses really kill the idea. If your program ran by itself, it'd be theoretically possible to get an exact cycle time for everything, but once you throw in a multitasking OS and any other programs that might be sharing CPU time, the cache misses become nondeterministic. So it's better to just profile the code, unless you really want to work for years to get the min, max, and probable execution times for your code (which of course will then be invalid, since the CPU you counted it for is now in a museum).

In other words: Profile > Timer > Cycle Counting
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Well, what I'm doing this for is testing the best way of coding things, e.g. a certain piece of code written one way might go faster if I changed a few things.

I'd then put that in a large loop to find the difference in speed.

I think I'll just use QueryPerformanceCounter, as there's some sample source for it there.

Just curious, is this how I would do it in C++:

LARGE_INTEGER freq, count1, count2;
QueryPerformanceFrequency(&freq);

// measure the overhead of the QueryPerformanceCounter call itself
QueryPerformanceCounter(&count1);
QueryPerformanceCounter(&count2);
LONGLONG overhead = count2.QuadPart - count1.QuadPart;

QueryPerformanceCounter(&count1);
//
// my code here to test
//
QueryPerformanceCounter(&count2);

std::cout << "Code took "
          << double(count2.QuadPart - count1.QuadPart - overhead) / freq.QuadPart
          << " seconds";


As I've never used the QueryPerformance stuff before, does this look right?

Thanks
