Timer troubles (Specifically QueryPerformanceCounter)


Recommended Posts

I'm working on a clock/timer system in C++. The system looks something like this:

TimeSources (multiple possible, inheriting from an abstract base class)
	|
	V
Clock (the true interface for the program; a call to FrameStep updates the internal timer and then updates each of the timers associated with the clock)
	|
	V
Timers (individual timers that are updated each frame by the clock once they are registered with it)

Overall the system is working pretty well. Right now I have two different TimeSources defined: one that uses timeGetTime() and one that uses QueryPerformanceCounter(). In my tests, I start a timer and run it in a finite for loop, with each iteration calling FrameStep on the clock, which queries the TimeSource for the difference in milliseconds (or microseconds) since the last call. The value returned is a 32-bit integer (I've been purposely trying to stay away from floating-point values).

This next part is a bit odd, so I'll try to explain. In reality, I am using the 32-bit integer that is returned as a 24-bit number. I'm using some bit packing to track time, and I'm only storing up to 24 bits of sub-second time in a 32-bit number. The other 8 bits are used to track seconds (this is a completely separate entity that exists in the Timers; it is used to keep track of how many days, hours, minutes, seconds, and microseconds the timer has been running). I know this seems overly convoluted.

When I use the timeGetTime() source, the comparisons with the system clock are almost dead on. The timer sometimes drifts a few milliseconds away from system time (partly because of the limited resolution of the system time), but it always seems to correct itself and stay within an acceptable difference. When I use QueryPerformanceCounter / QueryPerformanceFrequency, I have been finding that it doesn't quite keep up. It slowly lags behind, to the point where the gap between its values and the system timer grows to seconds apart, and it manages to do this over a fairly short period of time (usually in less than 2 minutes).

The first code listing is the one that uses timeGetTime():
//	Needs Winmm.lib

#ifndef	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSTIMESOURCEH
#define	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSTIMESOURCEH

#include "ITimeSource.h"
#include <windows.h>
#include "../../Exceptions/Exceptions.h"
#include "../../CrossPlatform/XDataTypes.h"

namespace KS
{
	namespace Common
	{
		namespace Misc
		{
			namespace Timer
			{
				class CWindowsTimeSource : public ITimeSource
				{
				public:
					CWindowsTimeSource() THROWS(KS::Common::Exceptions::CException);
					virtual ~CWindowsTimeSource() NOTHROW;
					u32 update() THROWS(KS::Common::Exceptions::CException);

				private:
					DWORD			m_intCurrent;
					u32				m_intDelta;
				};

				inline CWindowsTimeSource::CWindowsTimeSource() THROWS(KS::Common::Exceptions::CException)
				{
					if(timeBeginPeriod(1) != TIMERR_NOERROR)
					{
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsTimeSource::CWindowsTimeSource()", "timeBeginPeriod() failed");
					}

					m_intCurrent = timeGetTime();
				}

				inline CWindowsTimeSource::~CWindowsTimeSource() NOTHROW
				{
					timeEndPeriod(1);
				}

				inline u32 CWindowsTimeSource::update() THROWS(KS::Common::Exceptions::CException)
				{
					DWORD	intLast = m_intCurrent;

					m_intCurrent = timeGetTime();
					m_intDelta = m_intCurrent - intLast;
					m_intDelta *= 1000;	//	timeGetTime() is in milliseconds; convert the delta to microseconds

					return m_intDelta;
				}
			}
		}
	}
}

#endif	//	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSTIMESOURCEH



And here is the code that uses QueryPerformanceCounter:
#ifndef	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSPERFORMANCETIMERTIMESOURCEH
#define	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSPERFORMANCETIMERTIMESOURCEH

#include "ITimeSource.h"
#include <windows.h>
#include "../../Exceptions/Exceptions.h"
#include "../../CrossPlatform/XDataTypes.h"

namespace KS
{
	namespace Common
	{
		namespace Misc
		{
			namespace Timer
			{
				class CWindowsPerformanceTimerTimeSource : public ITimeSource
				{
				public:
					CWindowsPerformanceTimerTimeSource() THROWS(KS::Common::Exceptions::CException);
					virtual ~CWindowsPerformanceTimerTimeSource() NOTHROW;
					u32 update() THROWS(KS::Common::Exceptions::CException);


				private:
					LARGE_INTEGER	m_intFrequency;
					LARGE_INTEGER	m_intCurrent;
					u32				m_intDelta;
				};

				inline CWindowsPerformanceTimerTimeSource::CWindowsPerformanceTimerTimeSource() THROWS(KS::Common::Exceptions::CException)
				{
					//	Do Some API Compensating / Calibrating
					if(QueryPerformanceFrequency(&m_intFrequency) == FALSE)
					{
						//	Might default to another system
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsPerformanceTimerTimeSource::CWindowsPerformanceTimerTimeSource()", "QueryPerformanceFrequency() failed");
					}

					if(QueryPerformanceCounter(&m_intCurrent) == FALSE)
					{
						//	Might default to another system
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsPerformanceTimerTimeSource::CWindowsPerformanceTimerTimeSource()", "QueryPerformanceCounter() failed");
					}
				}

				inline CWindowsPerformanceTimerTimeSource::~CWindowsPerformanceTimerTimeSource() NOTHROW
				{
				}

				inline u32 CWindowsPerformanceTimerTimeSource::update() THROWS(KS::Common::Exceptions::CException)
				{
					LARGE_INTEGER	intLast = m_intCurrent;

					if(QueryPerformanceFrequency(&m_intFrequency) == FALSE)
					{
						//	Might default to another system
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsPerformanceTimerTimeSource::update()", "QueryPerformanceFrequency() failed");
					}

					if(QueryPerformanceCounter(&m_intCurrent) == FALSE)
					{
						//	Might default to another system
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsPerformanceTimerTimeSource::update()", "QueryPerformanceCounter() failed");
					}

					u64 intDelta = (m_intCurrent.QuadPart - intLast.QuadPart);
					intDelta *= 1000000;
					m_intDelta = (static_cast<u32>(intDelta / m_intFrequency.QuadPart));

					return m_intDelta;
				}
			}
		}
	}
}

#endif	//	DEFINE_KS_COMMON_MISC_TIMER_TIMESOURCEPOLICIES_CWINDOWSPERFORMANCETIMERTIMESOURCEH



Now, ultimately what update() should be returning is a number less than 1,000,000 (which is the reason I am using up to 24 bits), so there is support for resolution down to single microseconds. I'm looking for ideas or flaws in the code. If I need to post a little more of the code, I'll try.

I do have a dual-core processor (AMD64 4400+ X2), but I don't have Cool'n'Quiet running, and I have used SetProcessAffinityMask to restrict the process to one of the cores (as suggested by Microsoft). I also have the XP timer hotfix installed, as well as the AMD X2 processor driver.

[edit] Also, I am aware of the known QueryPerformanceCounter bug where the counter can unexpectedly jump forward, but this would seem to be the opposite, since mine is falling behind. KB274323 - Performance counter value may unexpectedly leap forward
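
For reference, here is a minimal sketch of the TimeSource / Clock / Timer arrangement described at the top of the post. ITimeSource::update() matches the posted headers; the Clock, Timer, FrameStep, and advance names and shapes are assumptions based on the description, not the actual project code.

//	Minimal sketch of the described architecture (assumed shapes, not project code).
#include <cstddef>
#include <vector>

typedef unsigned int u32;

class ITimeSource
{
public:
	virtual ~ITimeSource() {}
	virtual u32 update() = 0;	//	microseconds elapsed since the previous call
};

class Timer
{
public:
	Timer() : m_elapsedMicroseconds(0) {}
	void advance(u32 deltaMicroseconds) { m_elapsedMicroseconds += deltaMicroseconds; }
private:
	u32 m_elapsedMicroseconds;	//	the real project packs this into seconds + 24-bit microseconds
};

class Clock
{
public:
	explicit Clock(ITimeSource* source) : m_source(source) {}
	void registerTimer(Timer* timer) { m_timers.push_back(timer); }

	//	Called once per frame: query the time source, then push the delta to every registered timer.
	void FrameStep()
	{
		const u32 delta = m_source->update();
		for(std::size_t i = 0; i < m_timers.size(); ++i)
		{
			m_timers[i]->advance(delta);
		}
	}

private:
	ITimeSource*		m_source;
	std::vector<Timer*>	m_timers;
};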

Is this a desktop computer or a laptop? Some laptops have frequency "stepping", in which the processor's frequency changes at runtime (power conservation, I believe, is the main reason). So if you're grabbing the frequency only once and you have a stepping CPU, I suppose that may account for part of the problem.

[edit]
As another thought...have you tried doing this timing test without your bit packing operations? Do you still have QPC falling behind even without the bit packing?

As to the first, see the bottom of my post and the code. It is a dual core with Cool'n'Quiet disabled (so it doesn't slow itself down). Also, in the code it is getting the frequency along with every call to the counter, which should compensate for speed stepping.

I will try to see if it helps without truncating it, though it shouldn't ever go above the million mark.

This is a smaller version of the code I'm using to test this. I have eliminated the minutes and above. I don't expect the numbers to match exactly, but they should be relatively close.


//	This makes it easy to switch which time source is used at compile time.
//typedef KS::Common::Misc::Timer::CWindowsTimeSource PlatformTimeSource;
typedef KS::Common::Misc::Timer::CWindowsPerformanceTimerTimeSource PlatformTimeSource;

PlatformTimeSource timeSource;

//	Seed the counters from the system clock (microseconds and seconds).
SYSTEMTIME sysTime;
GetLocalTime(&sysTime);
u32 intMicroseconds = sysTime.wMilliseconds * 1000;
u32 intSeconds = sysTime.wSecond;

for(int i = 0; i < 10000; ++i)
{
	GetLocalTime(&sysTime);
	intMicroseconds += timeSource.update();

	//	Carry whole seconds out of the microsecond counter.
	while(intMicroseconds > 999999)
	{
		++intSeconds;
		intMicroseconds -= 1000000;
	}
	while(intSeconds > 59)
	{
		intSeconds -= 60;
	}

	std::cout << "System Time: " << static_cast<u32>(sysTime.wSecond) << "." << (static_cast<u32>(sysTime.wMilliseconds) * 1000) << std::endl;
	std::cout << "Timers:      " << intSeconds << "." << intMicroseconds << std::endl;
}



It still drifts. Right now my only guess is that I'm losing extremely small fractions of a second in the integer division (which is something the timeGetTime() version doesn't do; that one is just a subtraction).
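
As a rough back-of-the-envelope check of that guess: the truncated remainder is always less than one microsecond per update() call, so the accumulated loss is bounded by the number of calls. The sketch below assumes a frequency of 3,579,545 ticks per second (the ACPI PM timer rate) and roughly 10,000 updates per second; both numbers are assumptions, and the real values depend on the machine and the loop.

//	Rough illustration of how per-call truncation accumulates (assumed numbers).
#include <iostream>

int main()
{
	const unsigned long long frequency = 3579545;					//	ticks per second (assumed ACPI PM rate)
	const unsigned long long ticksPerUpdate = frequency / 10000;	//	~10,000 updates per second (assumed)

	unsigned long long trueTicks = 0;
	unsigned long long reportedMicroseconds = 0;

	for(int i = 0; i < 10000 * 120; ++i)	//	two minutes of updates
	{
		trueTicks += ticksPerUpdate;
		reportedMicroseconds += (ticksPerUpdate * 1000000ULL) / frequency;	//	truncates the remainder every call
	}

	const unsigned long long trueMicroseconds = (trueTicks * 1000000ULL) / frequency;

	//	Prints a loss on the order of 0.9 seconds over two minutes with these numbers.
	std::cout << "lost " << (trueMicroseconds - reportedMicroseconds) << " microseconds" << std::endl;
	return 0;
}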

TFM says QueryPerformanceFrequency cannot change while the system is running. Even if the system is slewing the actual frequency to sync to some other clock, the nominal ("rated") frequency will never change.
It is unnecessary to call QPF more than once.

I suspect your drift problem is due to the use of integer arithmetic. Division creates small remainders that you simply discard, which would cause the timer to eventually fall behind (especially if you are calling update() often).
Why not use (FPU) double timestamps like everyone else? ;)
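
For reference, the double-timestamp approach usually looks something like the sketch below: query the frequency once, return absolute seconds as a double, and subtract two timestamps to get a delta. The class name and layout are illustrative, not code from this thread.

//	Sketch of a double-based QPC wrapper (illustrative only).
#include <windows.h>

class PerformanceTimer
{
public:
	PerformanceTimer()
	{
		QueryPerformanceFrequency(&m_frequency);	//	constant while the system is running
		QueryPerformanceCounter(&m_start);
	}

	//	Seconds elapsed since construction, as a double.
	double seconds() const
	{
		LARGE_INTEGER now;
		QueryPerformanceCounter(&now);
		return static_cast<double>(now.QuadPart - m_start.QuadPart)
			 / static_cast<double>(m_frequency.QuadPart);
	}

private:
	LARGE_INTEGER	m_frequency;
	LARGE_INTEGER	m_start;
};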

QPC is bugged in Windows on AMD dual cores. I don't know about all of them, but check out this thread. I'm not sure how applicable this is to Intel dual cores.

http://www.gamedev.net/community/forums/topic.asp?topic_id=357026

The hotfix posted there fixed all my QPC problems. Dunno if it's your problem but check it out.

Because I need to be different. :) Partly it is a performance issue: just from tests, I can definitely see a speed improvement when using integers only versus floats or doubles. If I need to go down that road, I will. I'm just looking to see if there is an integer solution.
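
One way to stay with integers is to carry the division remainder from one update to the next instead of discarding it, so the long-run total never drifts. Below is a minimal sketch of update() along those lines, keeping the member names from the posted class; m_intRemainder is a new u64 member (initialized to 0 in the constructor), and the frequency is assumed to be the value cached at construction.

				//	Sketch: keep the fractional microseconds across calls so nothing is lost.
				inline u32 CWindowsPerformanceTimerTimeSource::update() THROWS(KS::Common::Exceptions::CException)
				{
					LARGE_INTEGER	intLast = m_intCurrent;

					if(QueryPerformanceCounter(&m_intCurrent) == FALSE)
					{
						throw KS::Common::Exceptions::CException(__FILE__, __LINE__, "CWindowsPerformanceTimerTimeSource::update()", "QueryPerformanceCounter() failed");
					}

					//	Scale ticks to microseconds, adding back the remainder left over from the previous call.
					u64 intTicks = static_cast<u64>(m_intCurrent.QuadPart - intLast.QuadPart);
					u64 intScaled = intTicks * 1000000 + m_intRemainder;

					m_intDelta = static_cast<u32>(intScaled / static_cast<u64>(m_intFrequency.QuadPart));
					m_intRemainder = intScaled % static_cast<u64>(m_intFrequency.QuadPart);

					return m_intDelta;
				}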

Accumulating relative time can give you a different result from reading an absolute timer. This is why you get small differences with your timeGetTime() version. The bigger differences in the QPC version might come from the same problem.
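
A minimal sketch of the absolute-timer idea, assuming a start value captured once at construction; because the division is always done over the full tick count since the start, the rounding error stays bounded instead of accumulating. The class and method names are made up for illustration.

//	Sketch: derive elapsed microseconds from the absolute tick count since a fixed start.
#include <windows.h>

class AbsoluteMicrosecondTimer	//	hypothetical name
{
public:
	AbsoluteMicrosecondTimer()
	{
		QueryPerformanceFrequency(&m_frequency);
		QueryPerformanceCounter(&m_start);
	}

	unsigned long long elapsedMicroseconds() const
	{
		LARGE_INTEGER now;
		QueryPerformanceCounter(&now);
		const unsigned long long ticks = now.QuadPart - m_start.QuadPart;

		//	One division over the whole interval, so truncation never builds up.
		//	Note: ticks * 1000000 can overflow a 64-bit value on very long runs at
		//	high frequencies; split into whole seconds plus remainder if that matters.
		return ticks * 1000000ULL / m_frequency.QuadPart;
	}

private:
	LARGE_INTEGER	m_frequency;
	LARGE_INTEGER	m_start;
};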

Side notes:
1) why do you use while() loops instead of the modulo operator?
2) why do you use multiple namespaces if you are (in the end) nearly forced (because of the resulting name length) to typedef your class name to another one?
3) why doesn't your timer object handle a time object instead of raw integers with a particular, non-standard semantic?

Quote:
Original by Jan Wassenberg
Why not use (FPU) double timestamps like everyone else? ;)

There is less precision in a double than in an int64.

Regards,

1) That's just crazy enough to work. :)
2) This is part of a larger project and I just cut and pasted most of the code.
3) That was just a quick test system; I do use a custom Time object to actually store the current time. It tracks days, hours, minutes, seconds, and microseconds elapsed, and I'm trying to keep the object as small as possible. I store days, hours, and minutes in one 32-bit variable: 16 bits for days (which still provides for roughly 180 years of tracking), 8 bits for hours (since you only really need 24), and 8 bits for minutes (which only needs to go up to 60). Then I was hoping to use another 32-bit integer for seconds and microseconds (and this is where some of the problems are stemming from): 8 bits for seconds (same as minutes, only need to track up to 60), and the remaining 24 bits for microseconds, which gives an integer range of over a million. The idea was that it should provide enough resolution for single microseconds (1,000,000 units).
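
A sketch of that packing with explicit shifts and masks; the field layout (16/8/8 bits in one word, 8/24 bits in the other) follows the description above, but the helper names are made up for illustration.

//	Sketch of the described packing (helper names are illustrative only).
typedef unsigned int u32;

//	Word 1: [ days:16 | hours:8 | minutes:8 ]  --  16 bits of days covers roughly 180 years
inline u32 packDayHourMinute(u32 days, u32 hours, u32 minutes)
{
	return ((days & 0xFFFF) << 16) | ((hours & 0xFF) << 8) | (minutes & 0xFF);
}

//	Word 2: [ seconds:8 | microseconds:24 ]  --  24 bits holds up to 16,777,215,
//	comfortably above the 1,000,000 microseconds in one second
inline u32 packSecondMicrosecond(u32 seconds, u32 microseconds)
{
	return ((seconds & 0xFF) << 24) | (microseconds & 0x00FFFFFF);
}

inline u32 unpackSeconds(u32 packed)		{ return packed >> 24; }
inline u32 unpackMicroseconds(u32 packed)	{ return packed & 0x00FFFFFF; }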

Another note: I did test this on an older 1.2 GHz Athlon. The performance timer still has the drift problem, so it is definitely not stemming from an issue with the dual core.
