runtime of functions

Started by BiGF00T · 6 comments, last by BiGF00T 19 years, 10 months ago
How can I measure the time that is needed to run a certain function? Should I just read the starting time, loop the function 100000000 times, and then look at the time difference, or is there a better method? Sometimes I wonder if a function would be faster if I changed it. Is there another way to determine if functionA() is faster than functionB()? thx BiGF00T
Now get down on your hands and knees and start repeating "Open Source Good, M$ Evil", smacking your head against the pavement after each repetition. Once you have completed your training you may change your first name to GNU/, to show that you are free from the slavery of the closed source world. -Michalson
Are these functions in a program you are writing? Or are they just algorithms you want to time?

If it's the former, then you'll want to use a tool called a profiler. This can give you data such as how many times a function is called, the average time it takes to run, etc. If it's the latter, then you need a function that can return the current time (this may be the actual time, the time since the system started, or the time since the program started; it doesn't matter). You then call this time function at the beginning and end of the code you are timing, save the results somewhere, and the difference between the two is how long the function took.
thx for the reply
I'm not thinking about anything specific atm. I was just curious. Sometimes I wondered if I should rewrite a function or an algorithm because it seemed too slow. Next time I'll try to check which one is faster and go with the fast one.
Any suggestions on a profiler?
It depends on what compiler you're using. I believe Visual Studio 6 comes with one (as long as you don't have the Learning Edition), while with VS.NET you have to use an external one. For GCC (which is MinGW on Windows; this is the compiler used by Dev-C++ and MinGW Studio, among others) you want to use gprof, which should be in your compiler's bin directory. AMD CodeAnalyst (which does work with non-AMD CPUs) is one you could use with VS.NET, but IMO it isn't really that great. I can't think of any other free profilers for VS.NET at the moment, but a couple of Google searches will probably reveal more.
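
In case it helps, the usual gprof workflow looks roughly like this (myprog is just a placeholder name):

gcc -pg -o myprog myprog.c    # compile with profiling instrumentation
./myprog                      # run the program normally; this writes gmon.out
gprof myprog gmon.out         # print the profile report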
You can also do this for free if you use Windows :-)

GetTickCount() is a built-in Win32 function that returns the number of milliseconds since Windows was started. Using this information, you can do this...

// include the windows header
#include <windows.h>

// this is basically what we're going to do:
// end_time - start_time = runtime

// start the timer
DWORD start_time = GetTickCount();

// insert the code you want to time here

// stop the timer and save the result
DWORD runtime = GetTickCount() - start_time;

// runtime now holds the elapsed time in milliseconds, ready to print


quote:Original post by mrtuti_17
You can also do this for free if you use Windows :-)

GetTickCount() is a built-in Win32 function that returns the number of milliseconds since Windows was started. Using this information, you can do this...

Yeah, I was thinking about something like that. That will help me find the faster function or algorithm. Since I own VS.NET, which I got from my university for free, I won't switch to VC++ just because of the profiler. The GetTickCount thing will have to do.

thx Monder. I think I'll go with the tick count thing because I don't want to waste money on a good profiler. I don't really need one; it's just for fun. Expensive things aren't fun.
Probably one of the most accurate ways to time something:

Loop through the following x times:

1. Serialize instruction execution with CPUID.
2. Read the time stamp counter with RDTSC and store it somewhere.
3. Execute the function or code segment once.
4. Serialize instruction execution with CPUID again.
5. Read the time stamp counter again and compute the difference from the start value.

Then find the LOWEST delta time; that is your approximate running time. This way you won't be counting random interrupts. Note that the result is in CPU cycles, not milliseconds, so only use it for comparing times on the same machine. It is also affected by multitasking, so leave as few other processes running as possible. Instead of RDTSC, you can also use QueryPerformanceCounter/QueryPerformanceFrequency, which has a much higher resolution than GetTickCount.
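
To make those steps concrete, here's a minimal sketch of that loop using the MSVC intrinsics __cpuid and __rdtsc from <intrin.h> (the helper name timeIt and the iteration count are made up for the example; on compilers without these intrinsics you'd use inline assembly instead):

#include <intrin.h> // __cpuid, __rdtsc (MSVC)

// Returns the lowest cycle count observed over several runs of func.
unsigned __int64 timeIt(void (*func)(), int iterations)
{
    unsigned __int64 best = ~0ULL; // lowest delta seen so far
    int info[4];                   // CPUID output, discarded

    for (int i = 0; i < iterations; ++i)
    {
        __cpuid(info, 0);                   // serialize before reading
        unsigned __int64 start = __rdtsc(); // read time stamp counter
        func();                             // the code being timed
        __cpuid(info, 0);                   // serialize again
        unsigned __int64 delta = __rdtsc() - start;
        if (delta < best)
            best = delta;                   // keep the lowest delta
    }
    return best; // in CPU cycles, not milliseconds
}

QueryPerformanceCounter works the same way: call it before and after, and divide the difference by QueryPerformanceFrequency to get seconds.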
There is one thing I forgot to mention about using GetTickCount().

Since Windows is not a real-time operating system, you want to run the code more than once. For example, if I wanted to test two sorting algorithms, bubble sort and quick sort, I would tell each of them to sort 100,000 random integers in an array.

When performing the test, I wouldn't do anything that would make the program run slower (opening programs, Photoshopping, etc.). I would run the test multiple times and average my results in a spreadsheet before drawing any conclusions.

Maybe using a profiler would be quicker, but I don't know anything about those, so I won't say anything more.
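
For example, here's a rough sketch of that kind of test, timing std::sort (standing in for whichever sort you're testing) over 100,000 random ints with GetTickCount() and averaging several runs; the run count of 10 is arbitrary:

#include <windows.h>   // GetTickCount
#include <algorithm>   // std::sort
#include <vector>
#include <cstdlib>     // std::rand
#include <iostream>

int main()
{
    const int runs = 10;
    DWORD total = 0;

    for (int run = 0; run < runs; ++run)
    {
        // build a fresh random array for every run
        std::vector<int> data(100000);
        for (std::size_t i = 0; i < data.size(); ++i)
            data[i] = std::rand();

        DWORD start = GetTickCount();
        std::sort(data.begin(), data.end()); // the code being timed
        total += GetTickCount() - start;
    }

    std::cout << "average: " << total / runs << " ms" << std::endl;
    return 0;
}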
