synth_cat

what are all ways QueryPerformanceTimer can go wrong?


synth_cat    306
Hello all -

As another effort to streamline my game before I release my first demo for alpha testing, I've decided to refine my fps-determining code. Until now I have been using GetTickCount() because of its simplicity and flexibility (back when I did research on timers, I think the consensus was that it was the safest of the timers). But GetTickCount() has an extremely low resolution - so low that my fps code can't record the time at frame(n) and compare it to frame(n+1), because the difference between the two values usually comes out as zero. Instead, I keep track of every time 1000 ticks go by and determine the fps from how many frames have completed since the last time 1000 ticks elapsed.

Because all motion in my game is scaled by the fps, frame-rate transitions cause momentary shudders where every object suddenly speeds up or slows down. This is very annoying and looks really bad.

In my research on timers I found that QueryPerformanceCounter() is by far the best timer to use. However, I also learned that it is the most error-prone and difficult to use. So I've decided to incorporate both it and GetTickCount() into my game. Basically what I'm wondering is:

- How do I know when QueryPerformanceCounter() is doing something screwy and I need to fall back on GetTickCount()'s results instead?

Also, does QueryPerformanceCounter() require a lot of initialization/housekeeping code to keep it from bugging out on different systems? For example, is the following code (quoted from a post by Draigan) sufficient for using this timer across all systems?
__int64 start_count;
__int64 end_count;
__int64 freq;

// Get the frequency and save it; it shouldn't change while the system is running.
// (QueryPerformanceFrequency returns FALSE if no high-resolution counter exists.)
QueryPerformanceFrequency( (LARGE_INTEGER*)&freq );

QueryPerformanceCounter( (LARGE_INTEGER*)&start_count );
// ... do some stuff that takes up time ...
QueryPerformanceCounter( (LARGE_INTEGER*)&end_count );

// Elapsed time in seconds = elapsed ticks / ticks per second.
float time = (float)(end_count - start_count) / (float)freq;


Finally, are there any system-specific errors that QueryPerformanceCounter() can generate that I should know about? (For example, I remember a rumor that this function performs terribly on laptops - could anyone confirm this?) Thanks very much for any help! (I apologize for having asked yet another 'which timer' question!) -synth_cat

zedzeek    528
I believe timeGetTime is the standard method used in games, e.g. Quake 3 and Unreal Tournament, not GetTickCount. You will also need to call timeBeginPeriod(1) (or something similar) first, though.

You should also average your fps over a few frames to smooth out spikes and troughs and make the game seem smoother.
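A minimal sketch of that averaging (it smooths the frame time rather than the fps, but it's the same idea, and all the names below are made up):

const int SMOOTH_FRAMES = 8;               // how many frames to average over
float g_frameTimes[SMOOTH_FRAMES] = { 0 };
int   g_frameIndex = 0;

// Feed in the raw seconds-per-frame from whatever timer you use and
// get back the mean over the last SMOOTH_FRAMES frames.
float SmoothFrameTime( float rawDelta )
{
    g_frameTimes[g_frameIndex] = rawDelta;
    g_frameIndex = ( g_frameIndex + 1 ) % SMOOTH_FRAMES;

    float sum = 0.0f;
    for( int i = 0; i < SMOOTH_FRAMES; ++i )
        sum += g_frameTimes[i];

    return sum / SMOOTH_FRAMES;
}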

Endurion    5408
There's one rather recent "problem" with QueryPerformanceCounter: if you have more than one core and your game runs for a while on one processor and then on the other, the values will vary wildly.

To remedy this:


DWORD_PTR dwProcessAffinityMask,
          dwSysAffinityMask,
          dwFirstProcessorMask = 1;

GetProcessAffinityMask( GetCurrentProcess(), &dwProcessAffinityMask, &dwSysAffinityMask );

// Walk up from bit 0 to find the first processor this process is allowed to run on.
while( !( dwFirstProcessorMask & dwProcessAffinityMask ) )
{
    dwFirstProcessorMask <<= 1;
}
SetThreadAffinityMask( GetCurrentThread(), dwFirstProcessorMask );


This forces the current thread to stay on one core, which keeps QueryPerformanceCounter consistent.

Thevenin    270
On some motherboards, QueryPerformanceCounter() is known to return ridiculous numbers periodically. IIRC, Unreal Tournament alleviated this by comparing the value returned by QueryPerformanceCounter() with the value returned by GetTickCount(); if the values were not roughly equal, it used the GetTickCount() value and assumed that QueryPerformanceCounter() had screwed up for that particular frame.
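In code that check might look something like this (just a sketch; the tolerance is my own guess, not the figure UT used):

#include <math.h>   // fabsf

// Pick which per-frame delta to trust; both are given in seconds.
float PickFrameDelta( float qpcSeconds, float tickSeconds )
{
    // If QPC and GetTickCount() disagree badly, trust GetTickCount() this frame.
    if( fabsf( qpcSeconds - tickSeconds ) > 0.1f )
        return tickSeconds;
    return qpcSeconds;
}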

Evil Steve    2017
I believe (not 100% sure) that QPC can go a bit screwy and return results that are several seconds off. As Thevenin said, comparing against GetTickCount() is reasonable, and if the result is way out (> 100ms or so), assume GetTickCount() is correct.

I seem to recall that someone here had a nice robust timer class that did all this internally, but I can't seem to find the post just now. Anyone know who/where it is?
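A bare-bones skeleton of that kind of "do it all internally" timer might look something like this (purely illustrative - it just combines the affinity trick and the GetTickCount() cross-check mentioned above, and the class name is made up):

#include <windows.h>
#include <math.h>

class RobustTimer
{
public:
    RobustTimer()
    {
        // Pin the calling thread to one core so QPC stays consistent.
        DWORD_PTR procMask, sysMask, firstCore = 1;
        GetProcessAffinityMask( GetCurrentProcess(), &procMask, &sysMask );
        while( !( firstCore & procMask ) )
            firstCore <<= 1;
        SetThreadAffinityMask( GetCurrentThread(), firstCore );

        QueryPerformanceFrequency( &m_freq );
        QueryPerformanceCounter( &m_lastCount );
        m_lastTicks = GetTickCount();
    }

    // Seconds since the previous call, falling back to GetTickCount()
    // whenever QPC looks more than ~100 ms out of line.
    float Tick()
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter( &now );
        DWORD ticks = GetTickCount();

        float qpcDelta  = (float)( now.QuadPart - m_lastCount.QuadPart ) / (float)m_freq.QuadPart;
        float tickDelta = ( ticks - m_lastTicks ) / 1000.0f;

        m_lastCount = now;
        m_lastTicks = ticks;

        return ( fabsf( qpcDelta - tickDelta ) > 0.1f ) ? tickDelta : qpcDelta;
    }

private:
    LARGE_INTEGER m_freq;
    LARGE_INTEGER m_lastCount;
    DWORD         m_lastTicks;
};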

kvp    196
QueryPerformanceCounter() returns a high-resolution tick count (typically derived from the CPU clock) since startup or the last overflow.

The problems:
- the cpu clock frequency can be dynamic (some laptops change cpu speed on the fly)
- with multiple cores, the call can return the counter of the current core, not a global one
- some cpus have dynamic clock-skip features (used for thermal throttling)
- most emulators never return truly correct results (incl. virtualization hardware)

The basic assumption can only be that the counter increases monotonically, at a varying speed. The current speed can be queried, but that doesn't mean the cpu didn't change speed multiple times between two consecutive calls.

Imho, the best bet is to use a real-time-clock based counter (GetTickCount()) or to synchronize to a device with a constant clocked output. (MS uses the sound card's fixed-frequency output.)
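One defensive measure that follows from the monotonic-but-varying assumption above (a sketch, with arbitrary thresholds): clamp each frame's delta so a counter that stalls or jumps can't feed a negative or absurd time step into the game.

// Clamp a raw frame delta, given in seconds; the thresholds are arbitrary.
float ClampFrameDelta( float dt )
{
    if( dt < 0.0f )      // counter went backwards (core switch, overflow, ...)
        return 0.0f;
    if( dt > 0.25f )     // huge spike (throttling, glitch, debugger break, ...)
        return 0.25f;
    return dt;
}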

Aardvajk    13207
Quote:
Original post by zedzeek
I believe timeGetTime is the standard method used in games, e.g. Quake 3 and Unreal Tournament, not GetTickCount. You will also need to call timeBeginPeriod(1) (or something similar) first, though.


I'd agree. My current game on the Showcase (plug [smile]) uses timeBeginPeriod(1) at the start, then timeGetTime() throughout to calculate time elapsed since last frame. That is more than accurate enough and avoids all the QueryPerformanceCounter woes.

I believe it is important to call timeEndPeriod(1) before the application exits though, as per MSDN.

Whether this timer is accurate enough for commercial games, I don't know, but calling timeBeginPeriod(1) increases the accuracy of timeGetTime() by an enormous amount.

Works well enough for me, anyway and I don't think it suffers from any of the QPC problems so frequently reported recently.
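An illustrative outline of that setup (not the actual code from the Showcase game; the function names are made up):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod/timeGetTime/timeEndPeriod; link with winmm.lib

DWORD g_prevTime = 0;

void TimerStartup()
{
    timeBeginPeriod( 1 );           // request 1 ms resolution
    g_prevTime = timeGetTime();
}

float TimerFrameSeconds()           // call once per frame
{
    DWORD now = timeGetTime();
    DWORD dt  = now - g_prevTime;   // unsigned subtraction copes with wrap-around
    g_prevTime = now;
    return dt / 1000.0f;
}

void TimerShutdown()
{
    timeEndPeriod( 1 );             // must match the timeBeginPeriod call, as per MSDN
}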

synth_cat    306
Thanks for all the help!

I'm beginning to feel like I should just stick with GetTickCount(). I'm not quite sure what to make of timeGetTime() - from what I've heard it turns out to be pretty much the same as GetTickCount().

-synth_cat

crowley9    226
According to AMD and MS, these timing issues are only supposed to affect RDTSC calls that are made directly. AMD and MS recommend using QPC; however, some people are still seeing issues. These can apparently be resolved with a processor driver update.

See more info here:
http://developer.amd.com/assets/TSC_Dual-Core_Utility.pdf

Guest Anonymous Poster
Honestly, the issues with QPC are not that big of a deal. Like others have said, check it against GetTickCount() (like I do) to verify it. Otherwise, set the thread affinity and be done with it. QPC is far more accurate, and your code (physics, AI, animation, etc.) will greatly benefit from it. On other systems, like Linux, use the platform's equivalent timer, as it is very accurate there. The time to code this should be measured in minutes, not hours. Even with timeBeginPeriod(1), timeGetTime() still only gives 1ms accuracy.

Evil Steve    2017
Quote:
Original post by crowley9
According to AMD and MS, these timing issues are only supposed to affect RDTSC calls that are made directly. AMD and MS recommend using QPC; however, some people are still seeing issues. These can apparently be resolved with a processor driver update.

See more info here:
http://developer.amd.com/assets/TSC_Dual-Core_Utility.pdf
Unfortunately, end users are unlikely to have the driver installed, and when they see your game twitching like crazy, they'll blame it on you... That's the exact reason I refuse to install it - I want to see what happens to my code when it's run on a PC without the driver.

Zahlman    1682
... Damn, this sounds like just the sort of thing where Boost would give you a hand up, but it looks like boost::timer always wraps std::clock(), which is not terribly good :(

Kest    547
You can build fully customized multimedia timers as well. Here's one with a callback on a 1 millisecond delay at the highest resolution available (normally 1 millisecond):

// Some variables to put somewhere:

MMRESULT timer_id = 0;
TIMECAPS timer_caps;
unsigned int volatile GlobalInternalTimerValue = 0;

// Timer callback (declare it before the init code that passes it to timeSetEvent):

void CALLBACK timer_func( UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR )
{
    GlobalInternalTimerValue++;   // fires roughly once per millisecond
}

// Timer init:

if( timeGetDevCaps( &timer_caps, sizeof(TIMECAPS) ) != TIMERR_NOERROR )
    return Error( "Unable to retrieve timing capabilities" );
if( timeBeginPeriod( timer_caps.wPeriodMin ) != TIMERR_NOERROR )
    return Error( "Unable to set timer resolution" );

timer_id = timeSetEvent( 1, timer_caps.wPeriodMin, timer_func, 0, TIME_PERIODIC );
if( timer_id == 0 )
    return Error( "Unable to create timer" );

// Timer release (kill the event before dropping the period):

MMRESULT hr = timeKillEvent( timer_id );
// make sure hr == TIMERR_NOERROR
hr = timeEndPeriod( timer_caps.wPeriodMin );
// make sure hr == TIMERR_NOERROR


Like timeGetTime(), this requires mmsystem.h (and linking against winmm.lib).

I've been using this same timer since way back (nine years, I think) when I did my first game programming on a 32MB, 233MHz laptop. It worked flawlessly then, and it still does.
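A frame loop can then read the counter along these lines (my own illustration; the names are hypothetical):

unsigned int g_prevTimerValue = 0;

// Seconds since the previous call, based on the 1 ms callback counter above.
float MMTimerFrameSeconds()
{
    unsigned int now = GlobalInternalTimerValue;  // incremented roughly once per ms
    unsigned int dt  = now - g_prevTimerValue;
    g_prevTimerValue = now;
    return dt / 1000.0f;
}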
