anupgupta

Timer Functions

Recommended Posts

anupgupta    128
Hello everybody. I am currently working on a 2D side-scrolling game using C++ and OpenGL. I would like to know what I can use to get time in milliseconds, and when I make a call to the timer function it should always start sequentially, i.e. from zero, and then progress. Please advise on this; I always want my timer to start from 0 and then increment.

dave    2187
On Windows you can call GetTickCount() to get the time in milliseconds. Typically, at the start of the application you cache the current value of GetTickCount(), and then for each successive frame you have a last point in time to refer to. Then you do your logic and update the last point in time to the current one, rinse and repeat.
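
Something along these lines, as a rough sketch (the names are just placeholders, not part of any particular framework):

#include <windows.h>

void GameLoop()
{
    // Cache the starting tick count so elapsed time starts at zero
    DWORD startTime = GetTickCount();
    DWORD lastTime  = startTime;

    for(;;)
    {
        DWORD now       = GetTickCount();
        DWORD elapsedMs = now - startTime; // milliseconds since the loop started
        DWORD deltaMs   = now - lastTime;  // milliseconds since the previous frame

        // ... game logic using deltaMs / elapsedMs ...

        lastTime = now; // update the last point in time, rinse and repeat
    }
}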

If you want a higher-resolution timer you can use the QueryPerformanceCounter and QueryPerformanceFrequency functions.
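
The usual pattern there (again, just a sketch) is to query the frequency once and divide counter deltas by it:

#include <windows.h>

// Seconds elapsed since the first call; multiply by 1000.0 for milliseconds
double SecondsSinceStart()
{
    static LARGE_INTEGER frequency; // counts per second, fixed at runtime
    static LARGE_INTEGER start;
    static bool initialised = false;

    if(!initialised)
    {
        QueryPerformanceFrequency(&frequency);
        QueryPerformanceCounter(&start);
        initialised = true;
    }

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    return double(now.QuadPart - start.QuadPart) / double(frequency.QuadPart);
}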

Please do note that GetTickCount() will have a resolution of around 15 ms (or a different value on older Windows versions). If that is enough for you, then by all means go with it, as it is the fastest and easiest way to get hold of "time".

The resolution issue applies in principle to timeGetTime() as well, but you can use timeBeginPeriod() to give it a real millisecond resolution (and as far as I can tell, more or less perfectly accurate). timeGetTime() is slightly slower than GetTickCount(), but not painfully slow. If you need 1 ms resolution, use this.
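
Roughly what that looks like (a sketch; you need to link against winmm.lib, and timeBeginPeriod/timeEndPeriod calls should be paired):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1);            // request 1 ms timer resolution

    DWORD start = timeGetTime();   // zero point, so elapsed time starts at 0
    // ... game loop ...
    DWORD elapsedMs = timeGetTime() - start;

    timeEndPeriod(1);              // restore the previous resolution
    return 0;
}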

QueryPerformanceCounter() is total shit unless you only target Windows Vista or Windows 7 on the newest line of processors, since it will fall back to RDTSC in all other cases. Under Vista/7, with last-generation processors, admittedly, QueryPerformanceCounter() is OK, as it uses reliable hardware timers. But you usually don't know what OS/hardware your program runs on.
RDTSC does not count time, but cycles, which is unreliable, and possibly non-monotonic. On multi-core machines, you may get significantly different results depending on what core your code runs on, so you may appear to jump back in time. On speed-stepped machines (all laptops and the vast majority of desktop machines during the last couple of years), there is no real relation between the counter and time, either. A CPU might run at 2.2 GHz this second, and 0.55 GHz the next second.


EDIT: interesting article on the subject

Wavarian    850
Quote:
Original post by anupgupta
a call to the timer function it should always start sequentially i.e from zero and then progress...


I thought it'd be fun to try the 'clock' function:


#include <ctime>
#include <iostream>

//
// Returns the number of milliseconds elapsed since the last call with reset == true
//

unsigned int GetTimeElapsed(bool reset)
{
    static clock_t START_CLOCK = clock();

    clock_t stop_clock = clock();

    unsigned int time_elapsed = ((stop_clock - START_CLOCK) * 1000) / CLOCKS_PER_SEC;

    if(reset)
    {
        START_CLOCK = stop_clock;
    }

    return time_elapsed;
}

//
// Example usage
//

int main()
{
    // Call the function once to get it started - should return 0
    unsigned int ms = GetTimeElapsed(true);

    // ...

    // Get the amount of time elapsed
    ms = GetTimeElapsed(true);

    std::cout << "Time elapsed: " << ms << " ms\n";
    return 0;
}



EDIT: Fixed

Ariste    296
DXUT (the DirectX application framework) includes a timer class which, AFAIK, you can just rip out wholesale and use for yourself if you're interested.

LessBread    1415
Also consider timeGetTime or QueryPerformanceCounter

timeGetTime returns the system time, in milliseconds: DWORD timeGetTime(void). On Windows NT/2000, the default precision of the timeGetTime function can be five milliseconds or more, depending on the machine.

// "i always want my timer to start from 0 and then increment"

DWORD tick = 0;
DWORD timeZeroOffset = timeGetTime();
// ...
DWORD timeN = timeGetTime();

timeN -= timeZeroOffset;

tick++;





Steve132    433
I did an experiment a long time ago trying to find out which was the best timer. The winner by FAR was the timing function implemented in OpenMP. Highest resolution (around 50 ns!) and consistently accurate. Furthermore, it worked across multiple cores inherently because it's part of OpenMP.
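
Presumably that means omp_get_wtime(); a quick sketch (compile with OpenMP enabled, e.g. -fopenmp or /openmp):

#include <omp.h>
#include <cstdio>

int main()
{
    double start = omp_get_wtime();   // wall-clock time in seconds

    // ... work to be timed ...

    double elapsed = omp_get_wtime() - start;
    std::printf("elapsed: %f s (timer resolution: %g s)\n", elapsed, omp_get_wtick());
    return 0;
}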
