__int64 start_count;
__int64 end_count;
__int64 freq;

// Get the frequency once and cache it; it doesn't change while the system is running
QueryPerformanceFrequency((LARGE_INTEGER*)&freq);

QueryPerformanceCounter((LARGE_INTEGER*)&start_count);
// do some stuff that takes up time
QueryPerformanceCounter((LARGE_INTEGER*)&end_count);

// Find the elapsed time in seconds. Use double rather than float:
// float's 24-bit mantissa loses precision with large 64-bit counter values.
double time = (double)(end_count - start_count) / (double)freq;
What are all the ways QueryPerformanceCounter can go wrong?
Hello all -
As another effort to streamline my game before I release my first demo for alpha-testing, I've decided to refine my FPS-determining code. Until now I have been using GetTickCount() because of its simplicity and flexibility (back when I did research on timers, I think the consensus was that this was the safest of the timers).
But GetTickCount() has an extremely low resolution, so low that my FPS-determining code cannot record the time at frame n and compare it to frame n+1, because the difference between the two values usually turns out to be zero. Instead, I keep track of every time 1000 ticks go by and then determine the FPS from how many frames have been completed since the last time 1000 ticks elapsed.
Because all motion in my game is factored by the fps, the result of this is that frame-rate transitions result in momentary shudders where all objects will suddenly speed up or slow down.
This is very annoying and looks really bad.
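A minimal sketch of the usual fix for that shudder: scale motion by the measured per-frame elapsed time (delta time) instead of by a bucketed FPS value, so speed changes smoothly every frame rather than jumping when the 1000-tick bucket rolls over. The `Object` struct and `Update` function here are hypothetical names for illustration only.

```cpp
#include <cassert>

// Hypothetical example: frame-rate-independent movement.
// dt is the elapsed time for the last frame, in seconds,
// measured by whatever timer the game settles on.
struct Object
{
    double x  = 0.0;   // position in world units
    double vx = 100.0; // velocity in world units per second
};

void Update(Object& obj, double dt)
{
    obj.x += obj.vx * dt; // distance = velocity * elapsed time
}
```

With this approach a long frame simply moves objects further in one step; nothing visibly speeds up or slows down when the measured frame rate changes.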
In my research on timers I found out that QueryPerformanceCounter() is by far the best timer to use. However, I also learned that it is the most error-prone and difficult to use. So I've decided to incorporate both it and GetTickCount() into my game. So basically what I'm wondering is:
- How do I know when QueryPerformanceCounter() is doing something screwy and I need to fall back on GetTickCount()'s results instead?
Also, does QueryPerformanceCounter() require a lot of initialization/housekeeping code to keep it from bugging out on different systems? For example, is the code quoted above (from a post by Draigan) sufficient for using this timer across all systems?
Finally, are there any system-specific errors that QueryPerformanceCounter() can generate that I should know about? (i.e., I remember a rumor that this function performs terribly on laptops; could anyone confirm this?)
Thanks very much for any help! (I apologize for having asked yet another 'which timer' question!)
-synth_cat
i believe timeGetTime is the standard method used in games, e.g. Quake 3 and Unreal Tournament,
not GetTickCount. u will also need to call timeBeginPeriod(1) (or something like that) first though
also u should average your fps over a few frames to smooth out spikes and troughs to make the game seem smoother
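The averaging idea above can be sketched as a small ring buffer of recent frame times; FPS is then derived from the average, so a single long or short frame doesn't jerk every object's speed. The class and its size `N = 8` are illustrative assumptions, not from any particular engine.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: smooth FPS by averaging the last N frame times.
class FrameTimeSmoother
{
public:
    static const std::size_t N = 8; // window size; tune to taste

    void AddFrameTime(double seconds)
    {
        m_samples[m_next % N] = seconds; // overwrite oldest sample
        ++m_next;
    }

    double AverageFrameTime() const
    {
        std::size_t count = m_next;
        if (count > N)
            count = N; // buffer not yet full vs. full
        if (count == 0)
            return 0.0;
        double sum = 0.0;
        for (std::size_t i = 0; i < count; ++i)
            sum += m_samples[i];
        return sum / count;
    }

    double Fps() const
    {
        double avg = AverageFrameTime();
        return (avg > 0.0) ? 1.0 / avg : 0.0;
    }

private:
    double      m_samples[N]; // only the first min(m_next, N) entries are valid
    std::size_t m_next = 0;   // total samples ever added
};
```

Feeding motion code from `AverageFrameTime()` instead of the raw per-frame value is what smooths the spikes and troughs the poster mentions.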
There's one rather recent "problem" with QueryPerformanceCounter. If you have more than one core and your game runs a bit on one processor, then on the other, the values will vary wildly.
To remedy this:
DWORD dwProcessAffinityMask, dwSysAffinityMask, dwFirstProcessorMask = 1;
GetProcessAffinityMask( GetCurrentProcess(), &dwProcessAffinityMask, &dwSysAffinityMask );
while( !( dwFirstProcessorMask & dwSysAffinityMask ) )
{
    dwFirstProcessorMask <<= 1;
}
SetThreadAffinityMask( GetCurrentThread(), dwFirstProcessorMask );
This forces the current thread to stay on one core, thus keeping QueryPerformanceCounter stable.
On some motherboards, QueryPerformanceCounter is known to return ridiculous numbers periodically. IIRC Unreal Tournament alleviated this by comparing the value returned by QueryPerformanceCounter() with the value returned by GetTickCount(); if the values were not roughly equal, then it used the GetTickCount() value and assumed that QueryPerformanceCounter() screwed up for that particular frame.
I believe (not 100% sure) that QPC can go a bit screwy and return results that are several seconds wrong. As Thevenin said, using GetTickCount() to compare is reasonable, and if the result is way out (> 100 ms or so), assume GetTickCount() is correct.
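The cross-check described in the last two posts can be sketched as a pure function: compare the elapsed time QPC reports against the elapsed time GetTickCount() reports (both already converted to milliseconds), and fall back to the coarse but reliable tick count when they disagree. `PickElapsedMs` is a hypothetical name, and the 100 ms tolerance is just the rough figure suggested in this thread.

```cpp
#include <cassert>

// Hypothetical sketch of the QPC sanity check: if the two timers'
// elapsed-time readings diverge by more than toleranceMs, assume QPC
// glitched this frame and use the tick-count reading instead.
double PickElapsedMs(double qpcMs, double tickMs, double toleranceMs = 100.0)
{
    double diff = qpcMs - tickMs;
    if (diff < 0.0)
        diff = -diff; // absolute difference

    return (diff > toleranceMs) ? tickMs : qpcMs;
}
```

In a real frame loop the caller would compute both elapsed values each frame and feed the chosen one to the game's update code.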
I seem to recall that someone here had a nice robust timer class that did all this internally, but I can't seem to find the post just now. Anyone know who/where it is?
QueryPerformanceCounter() returns the number of CPU clock cycles since startup or the last overflow.
The problems:
- the CPU clock frequency can change dynamically (some laptops scale the CPU speed)
- with multiple cores, the call will return the counter of the current core, not a global one
- some CPUs have dynamic clock-skip features (used for thermal throttling)
- most emulators never return truly correct results (incl. virtualization hardware)
The only safe assumption is that the counter will increase monotonically, at a varying speed. The current speed can be queried, but that doesn't mean the CPU didn't change speeds multiple times between two consecutive calls.
Imho, the best bet is to use a real-time-clock-based counter (GetTickCount()) or synchronize to a device with a constant clocked output. (MS uses the sound card's fixed-frequency output.)
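If the only safe assumption is monotonic increase, a simple defensive wrapper follows from it: clamp raw counter readings so that a backwards jump (e.g. after the thread migrates to another core) can never produce a negative frame time. This `MonotonicClock` class is a hypothetical sketch, not code from the thread.

```cpp
#include <cassert>

// Hypothetical sketch: enforce the monotonic-increase assumption on
// raw counter readings. A backwards jump is reported as zero elapsed
// time rather than a negative (or huge unsigned) delta.
class MonotonicClock
{
public:
    // Feed in raw counter readings; get back a value that never decreases.
    long long Clamp(long long raw)
    {
        if (raw < m_last)
            raw = m_last; // counter went backwards: hold the last value
        m_last = raw;
        return raw;
    }

private:
    long long m_last = 0;
};
```

This doesn't fix a glitching counter, but it turns the worst failure mode (time running backwards) into a single dropped frame of motion.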
Quote:Original post by zedzeek
i believe timeGetTime is the standard method used in games, e.g. Quake 3 and Unreal Tournament,
not GetTickCount. u will also need to call timeBeginPeriod(1) (or something like that) first though
I'd agree. My current game on the Showcase (plug [smile]) uses timeBeginPeriod(1) at the start, then timeGetTime() throughout to calculate time elapsed since last frame. That is more than accurate enough and avoids all the QueryPerformanceCounter woes.
I believe it is important to call timeEndPeriod(1) before the application exits though, as per MSDN.
Whether this timer is accurate enough for commercial games, I don't know, but calling timeBeginPeriod(1) increases the accuracy of timeGetTime() by an enormous amount.
Works well enough for me, anyway, and I don't think it suffers from any of the QPC problems so frequently reported recently.
Thanks for all the help!
I'm beginning to feel like I should just stick with GetTickCount(). I'm not quite sure what to make of timeGetTime() - from what I've heard it turns out to be pretty much the same as GetTickCount().
-synth_cat
According to AMD and MS, these timing issues are only supposed to affect RDTSC calls that are made directly. AMD and MS recommend using QPC; however, some people are still seeing issues. These can apparently be resolved with a processor driver update.
See more info here:
http://developer.amd.com/assets/TSC_Dual-Core_Utility.pdf
Honestly, the issues with QPC are not that big of a deal. Like others have said, check it against GetTickCount() (like I do) to verify it. Otherwise, set the thread affinity and be done with it. QPC is orders of magnitude more accurate, and your code (physics, AI, animation, etc.) will greatly benefit from it. On other systems, like Linux, use gettimeofday() as it is very accurate there. The time to code this should be measured in minutes, not hours. Even with timeBeginPeriod(1), timeGetTime() still only gives 1 ms accuracy.