C++: Easiest way to implement an in-game Time/Date system?...


Don't use GetTickCount, use QueryPerformanceCounter along with QueryPerformanceFrequency.


I'd never heard of those before, so I did some investigating and unsuccessfully tried to replace GetTickCount() with QPC()/QPF(). Then I eventually found this which actually deals with how to use them:

http://cplus.about.com/od/howtodothingsin1/a/timing.htm

That explains why division wasn't defined when I tried...

Anyways, am I right then in thinking that all I should have to do on a basic level is directly replace every call to GetTickCount() with the result of QPC().QuadPart/QPF().QuadPart?

Will try that and see what I get.

Ok, I've tried a few different things using QPC()/QPF() and gotten to a point where the seconds seem to progress quite smoothly in general, but there's still the occasional, definite moment (probably significantly rarer than before) when a second visibly hangs before the next second shows up and goes by more quickly to compensate.

At this point I'm just wondering about the practical reality of what I'm trying to do. Should I consider the result I've described good enough, or should this be straightforward enough that there's no reason not to go for proper consistency? It's just that, looking at how the code works and how the main loop is set up, I can't see why it wouldn't lag sometimes, and I can't figure out a solution that would have the seconds reliably change every second to within 100ms or whatever margin would be undetectable.

The interpolation value seems to be what I'd expect. I'm displaying the time 1 frame behind, and using the interpolation value to smooth it out by having the new value show up closer to the time it should. I'm using OutputDebugString() to print an ID for each update frame along with the total milliseconds to be added that frame, followed by, for each render frame, the number of milliseconds to add to the displayed time and the interpolation value used to derive that amount.

Here's a sample:


Update ID: 119
Time Difference:41
9.225 | INTERPOLATION: 0.225
18.45 | INTERPOLATION: 0.45
26.65 | INTERPOLATION: 0.65
36.9 | INTERPOLATION: 0.9

Update ID: 120
Time Difference:39
7.8 | INTERPOLATION: 0.2
22.425 | INTERPOLATION: 0.575
31.2 | INTERPOLATION: 0.8

Update ID: 121
Time Difference:37
4.625 | INTERPOLATION: 0.125
12.025 | INTERPOLATION: 0.325
19.425 | INTERPOLATION: 0.525
26.825 | INTERPOLATION: 0.725
36.075 | INTERPOLATION: 0.975

Update ID: 122
Time Difference:46
14.95 | INTERPOLATION: 0.325
24.15 | INTERPOLATION: 0.525
35.65 | INTERPOLATION: 0.775
46 | INTERPOLATION: 1

As you can see, we have 1 case where there are 5 renderings between 2 update frames, and another where there are only 3. Am I right to assume that a situation like that (which, if I'm not wrong, seems unavoidable when using the DeWitters game loop) makes it impossible by its very nature to display a consistent and accurate second-by-second count?

If so, with this getting very complicated... do most developers not aim for such accuracy at all? Could something like multithreading help?

I suppose frame rate dips etc. are unavoidable and expected... in which case, am I making too much of a big deal about something that doesn't matter? Maybe it's just that I'm surprised seeing it at a stage when my "game" currently consists of nothing more than an image and some text overlaid with the time.
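For reference, a minimal sketch of the interpolated display being described, using illustrative variable names rather than the actual code:

// The displayed clock lags one update behind; each render frame adds a
// fraction of the last update's delta, based on the interpolation value.
double shownOffsetMs = interpolation * lastUpdateDeltaMs;
displayedTimeMs = lastUpdateTimeMs + shownOffsetMs;
// e.g. for Update ID 119 in the log above: 0.225 * 41 = 9.225 ms.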


I've tried a few different things using QPC()/QPF() and gotten to a point where the seconds seem to progress quite smoothly in general, but there's still the occasional, definite moment (probably significantly rarer than before) when a second visibly hangs before the next second shows up and goes by more quickly to compensate.

Scrolling back up to my earlier wall of text, use the results of QPC as a stopwatch. Tiny errors in individual frames will work themselves out over a larger time frame. Don't try to convert it back to milliseconds or do fancy division or averaging, just leave it at the high performance clock values.

The only time you really need to stop that clock is when the elapsed time is obviously out of bounds. Usually that happens when you stop the game in the debugger or the system otherwise has stalled. That is easy enough to handle as a special case, along the lines of


elapsedTime = GetTimeSinceLastUpdateWithQPC();
if(elapsedTime > WAY_TOO_LONG) {
    // Clamp absurd gaps (debugger breaks, OS stalls) to a sane step.
    elapsedTime = DEFAULT_TIMESTEP;
    Logger::Log( Simulator, Logger::Info, "Excessive time between updates, possibly from an OS stall or a debugger. Resuming with a sane time step.");
}
accumulatedTime += elapsedTime;
while(accumulatedTime >= SIMULATOR_FREQUENCY)
{
  simulator.runOneStep();
  accumulatedTime -= SIMULATOR_FREQUENCY;
}

Without that kind of protection you quickly fall into an update death spiral. One update takes more time than you allow, and when you come through the next time two updates need to run to catch up. Then because you spent more time it needs to run three, then four, then soon the game is doing nothing but trying to catch up.

Since I'm sure you have followed the route mentioned above to decouple your rendering loop from your simulation loop, it doesn't matter if the renderer has problems, since you can simulate multiple times in a row; and if your simulator is running slow, your renderer can interpolate between the two values. Fixing your time step is an important part of having a reliable simulation.
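As a rough illustration of that interpolation step (Lerp, GameState, previousState, currentState and renderer are hypothetical names, not anything from the code above):

// After the fixed-step update loop, work out how far we are into the next
// step and blend the last two simulation snapshots for rendering.
double alpha = double( accumulatedTime ) / double( SIMULATOR_FREQUENCY );
GameState blended = Lerp( previousState, currentState, alpha );
renderer.draw( blended );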

Just to follow on from what frob said: if you have a 'pause' facility (maybe when opening a menu) and you get an excessive amount of time between updates, such as from breaking into the debugger, a really neat trick is just to pretend the game was paused - you'll carry on exactly where you left off.
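A minimal sketch of that trick, reusing the illustrative names from frob's snippet (gameIsPaused is a hypothetical flag):

LONGLONG elapsedTime = GetTimeSinceLastUpdateWithQPC();
if( gameIsPaused || elapsedTime > WAY_TOO_LONG )
{
    // Pretend no time passed at all; the game resumes exactly where it left off.
    elapsedTime = 0;
}
accumulatedTime += elapsedTime;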

Just a hint: do NOT use <chrono> on Windows as a high-precision event timer (or only use it once VS2015 is out): http://blogs.msdn.com/b/chuckw/archive/2014/12/03/understanding-game-time-revisited.aspx

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

Scrolling back up to my earlier wall of text, use the results of QPC as a stopwatch. Tiny errors in individual frames will work themselves out over a larger time frame. Don't try to convert it back to milliseconds or do fancy division or averaging, just leave it at the high performance clock values.

K, so in other words... the occasional stutter is probably ok and in line with a reasonable implementation as long as the timing averages out?

I've tested it a few times myself against my wristwatch, and it always does seem to be in line with what I'd expect, i.e. I start my watch when the game timer is at 5 seconds, and when my watch says 1 minute, the game says 1:05. Quicker or slower seconds still seem to happen now and then, but I think someone would generally have to be staring at the clock waiting for it to happen in order to be likely to notice it.

As for converting to milliseconds etc...

I currently add the time difference to a boost::posix_time::ptime object in milliseconds, so my function for getting ticks looks like this:


	LARGE_INTEGER count;
	QueryPerformanceCounter( &count );

	LARGE_INTEGER freq;
	QueryPerformanceFrequency( &freq );

	return ( LONGLONG( 1000 ) * count.QuadPart )/freq.QuadPart;

Then I add the result of the difference between 2 readings, multiplied by interpolation, to the displayed time. There's nothing wrong with that, is there?

The only time you really need to stop that clock is when the elapsed time is obviously out of bounds. Usually that happens when you stop the game in the debugger or the system otherwise has stalled. That is easy enough to handle as a special case, along the lines of

elapsedTime = GetTimeSinceLastUpdateWithQPC();
if(elapsedTime > WAY_TOO_LONG) {
    // Clamp absurd gaps (debugger breaks, OS stalls) to a sane step.
    elapsedTime = DEFAULT_TIMESTEP;
    Logger::Log( Simulator, Logger::Info, "Excessive time between updates, possibly from an OS stall or a debugger. Resuming with a sane time step.");
}
accumulatedTime += elapsedTime;
while(accumulatedTime >= SIMULATOR_FREQUENCY)
{
  simulator.runOneStep();
  accumulatedTime -= SIMULATOR_FREQUENCY;
}

Without that kind of protection you quickly fall into an update death spiral. One update takes more time than you allow, and when you come through the next time two updates need to run to catch up. Then because you spent more time it needs to run three, then four, then soon the game is doing nothing but trying to catch up.

Good point... I'll be sure to put that in as soon as I clean up the code and implement time progression "properly".

As for converting to milliseconds etc...
I currently add the time difference to a boost::posix_time::ptime object in milliseconds, so my function for getting ticks looks like this:

	LARGE_INTEGER count;
	QueryPerformanceCounter( &count );

	LARGE_INTEGER freq;
	QueryPerformanceFrequency( &freq );

	return ( LONGLONG( 1000 ) * count.QuadPart )/freq.QuadPart;
Then I add the result of the difference between 2 readings, multiplied by interpolation, to the displayed time. There's nothing wrong with that, is there?

That is so wrong. The multiplication could overflow and you are wasting precision by converting too early.
Always keep the ticks as long as possible, subtract two tick values to get the time difference, and only at the last moment before using the value for drawing convert it to milliseconds.
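A minimal sketch of that ordering, with illustrative variable names:

LARGE_INTEGER freq, start, now;
QueryPerformanceFrequency( &freq );

QueryPerformanceCounter( &start );
// ... run the frame ...
QueryPerformanceCounter( &now );

// Subtract raw ticks first; convert to milliseconds only when the value is needed.
LONGLONG diffTicks = now.QuadPart - start.QuadPart;
LONGLONG diffMs    = ( diffTicks * LONGLONG( 1000 ) ) / freq.QuadPart;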

K... I hadn't thought about it overflowing either... I'll try to clean things up.

Seems I ought to wait until the last moment to divide by the frequency too, if googling is anything to go by... and judging by how it didn't work like I expected when I made a quick and lazy attempt to change it.

Well, the best I could come up with that actually worked is the following:


timeEnd = timeStart;     // previous QPC reading
timeStart = getQPC();    // current QPC reading

// Subtract raw ticks first, then convert the (small) difference to milliseconds.
LONGLONG timeDiff = ( ( (  timeStart - timeEnd ) * LONGLONG( 1000 ) )/getQPCFreq() );

Where timeEnd and timeStart are LONGLONG variables, getQPC() returns the straight result of QueryPerformanceCounter(), and getQPCFreq() returns the frequency.

This way I'm getting the difference between the QPC ticks BEFORE multiplying by 1000 to convert to milliseconds and dividing by the frequency... so I guess that would fix the danger of overflowing since the result of 1000 times the difference should easily be small enough to fit in a LONGLONG?

As for the main loop... where QPC() is being used to determine the update framerate and interpolation... the problem is that the elapsed time has to be in the same units as the loop variables, e.g. for adding SKIP_TICKS to next_game_tick. Currently that means I'm multiplying the result of QPC() by 1000 and dividing by QPF(), which brings back that risk of overflow.

So I assume that if I could convert the value of SKIP_TICKS, in this case 40, into QPC ticks, so that adding it effectively adds 40 milliseconds to the QPC value, it would all work out ok?

Is that as simple as finding X where:

( X * 1000 ) / QPFreq() == 40

?

i.e...

X == ( SKIP_TICKS * QPFreq() ) / 1000

Should give me the right value every time?

I'll go check if that works...
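For what it's worth, a small sketch of that conversion, with SKIP_TICKS assumed to be 40 milliseconds as above:

LARGE_INTEGER freq;
QueryPerformanceFrequency( &freq );

// Convert the 40 ms step into raw QPC ticks up front...
const LONGLONG SKIP_TICKS_QPC = ( LONGLONG( 40 ) * freq.QuadPart ) / LONGLONG( 1000 );

// ...so the loop can stay entirely in QPC ticks:
// next_game_tick += SKIP_TICKS_QPC;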

