C++: Easiest way to implement an in-game Time/Date system?...


Right now I want to put a time system in a game that will allow me to implement a day/night cycle as well as keep track of in-game days, years etc. passing by. Probably something like the game starts in a near future year, then maybe a real-life second is an in-game minute or some such, and time acceleration allows it to progress faster, with accurate months/years/weekdays etc. being displayed.

 

I'm pretty sure I've used something quite simple like this in Java... but this is definitely one of those areas where C++ is less immediately accessible...

 

I've looked around and I see Boost has a date/time library, though there's also ctime.h which allows dates in some way, but I suspect it's probably not adequate for my purposes.

 

TL;DR: Is using Boost pretty much the most practical way to implement an in-game time/date system?

 

I have used Boost before for a few things, but it might take some time to wade through the unnecessary stuff to find what I really need, so I thought I'd ask before delving in. Some of the examples also refer to a header file that doesn't exist in the version of Boost that came with the Unofficial OpenGL SDK for some reason... but I'll see if I can get around that...


C++ chrono library (part of the standard)
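(For readers on a C++11 compiler, here is a minimal sketch of what that suggestion looks like in practice; the 60x scale factor and the class name are illustrative, not from this thread.)

	#include <chrono>

	// Minimal sketch: drive an in-game clock at 60x real time
	// (one real second = one in-game minute) using std::chrono (C++11).
	class GameClock
	{
	public:
	    GameClock() : last( std::chrono::steady_clock::now() ), gameMs( 0 ) {}

	    void update()
	    {
	        std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
	        gameMs += std::chrono::duration_cast<std::chrono::milliseconds>( now - last ).count() * 60;
	        last = now;
	    }

	    long long totalGameSeconds() const { return gameMs / 1000; }

	private:
	    std::chrono::steady_clock::time_point last;
	    long long gameMs; // elapsed in-game milliseconds
	};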

 

Interesting... I didn't know about that. Though when I do "#include <chrono>" it says it can't find it... is it something new with C++11, by any chance? I'm still using MSVC++ 2008 Express on this old computer...

 

Seems you'd still have to use ctime to get date functionality, if I've read that page correctly. That seems to mostly be useful for telling the current date. Maybe it could be offset somehow to tell a future date, but I feel it might become fiddly and probably a more complex system would be in order ideally...
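For what it's worth, the offset idea can work on a basic level: build a std::tm for a fictional start date, normalize it with mktime, then add elapsed in-game seconds before formatting. A rough sketch (the 2040 start date and the function name are just examples):

	#include <ctime>
	#include <cstdio>

	// Rough sketch: anchor the calendar at a fictional future date, then
	// offset by elapsed in-game seconds. tm_year counts from 1900, tm_mon is 0-based.
	void printGameDate( long long gameSecondsElapsed )
	{
	    std::tm start = {};
	    start.tm_year = 2040 - 1900; // 1 Jan 2040, midnight (illustrative)
	    start.tm_mon  = 0;
	    start.tm_mday = 1;

	    std::time_t base = std::mktime( &start ); // normalize to a time_t
	    std::time_t now  = base + (std::time_t)gameSecondsElapsed;

	    char buf[64];
	    std::strftime( buf, sizeof( buf ), "%A, %d %B %Y %H:%M:%S", std::localtime( &now ) );
	    std::puts( buf );
	}

One caveat in line with the "fiddly" suspicion above: a 32-bit time_t tops out in 2038, so future dates only work where time_t is 64-bit.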

 

As for boost, I tried to use the boost::gregorian::date class, and while it included ok and intellisense picked it up... I get an error when I try to run/compile it:

1>LINK : fatal error LNK1104: cannot open file 'libboost_date_time-vc90-mt-gd-1_53.lib'

 

Hmmmmm. There's no such file in the boost folder that came with glsdk, where the include files are.

Also hate how the documentation for the class doesn't appear to list all the required includes/libs etc.:

http://www.boost.org/doc/libs/1_57_0/doc/html/date_time.html

(If indeed it's normal that I need that lib file at all... perhaps something else is wrong somewhere else...)

 

I'll go see if I can find more information on how boost's date_time works.

 

EDIT: And actually, now that I think about it, I almost certainly will need something rather sophisticated because my idea is a strategy game involving a large amount of traveling between various places in the world. So I may need some degree of time zone support, depending on how I eventually decide to actually implement time. A fictional near-future setting could go far towards simplifying things for the sake of both development and gameplay convenience, so it wouldn't have to follow the real-world necessarily, but it would definitely be good to have the flexibility for later on.



As for boost, I tried to use the boost::gregorian::date class, and while it included ok and intellisense picked it up... I get an error when I try to run/compile it:
1>LINK : fatal error LNK1104: cannot open file 'libboost_date_time-vc90-mt-gd-1_53.lib'


Not all of Boost is header only. You'll have to build some of Boost's libraries. The date_time library is one such. The Boost documentation has information on how to build it with their custom-ish build system.
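For reference, building just date_time from a Boost source tree on Windows generally looks something like this from a Visual Studio command prompt (a sketch; the exact toolset flag depends on your compiler version, and I haven't verified it against the copy bundled with the GLSDK):

	bootstrap.bat
	b2 --with-date_time toolset=msvc-9.0 variant=debug link=static threading=multi

The resulting libboost_date_time-vc90-mt-gd-*.lib should land in stage\lib, which then needs to be on the linker's library search path.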

 

Found some information about that here...

http://www.boost.org/doc/libs/1_57_0/more/getting_started/windows.html

 

Though it says date_time's binary component is only needed "if you're using its to_string/from_string or serialization features, or if you're targeting Visual C++ 6.x or Borland."... I guess the Gregorian stuff must use to/from string functionality somewhere and not work at all if you don't use the binary.

 

That build system thingy... is that the bcp program I've heard about? Apparently it tells you everything a library needs by copying it into a folder or some such... which should be useful.

 

Time zones just piss people off in real life. Ask yourself very hard whether they'll add to the fun or hinder it. If you're aiming hard for simulation over gameplay, ask yourself whether any piece of complexity will actually make a difference in the simulation to the point that the user will notice it and appreciate its addition. Personally, I'd think time zones or the like would be an anti-feature and unnecessary complexity.

 

I think it's something worth experimenting with in the context of what I have in mind. And some very full-featured library like presumably boost date_time would make that kind of experimentation a lot easier. I may settle on some kind of universal time (could even be shoehorned in as a plot point), or just a few time zones implemented in an unintrusive way that merely serves to make things more intuitive, e.g. so it isn't dark where a character is when the only clock reads 1pm.

Could be madness too, but I think only iteration can tell me for sure. Definitely not going to invest too much time into it or make it some kind of core mechanic where the player has to do complicated time maths in their head or anything, but some things may not make sense otherwise and it may add to the strategy.

Your game time shouldn't really be coupled in any direct way to real-life time unless you're writing a game like Animal Crossing or something, where the basis of the game is using real-life time. But that would require different code to account for it.

In general you'll end up thinking of games as having different timelines, and all your internal code will do is pass around delta time. At a simple level, think of time passing in your game: say you want every second of real time that passes to advance your in-game clock by an hour. If you instead tie your clock to the system time and then pause the game, the two times will inevitably become desynced.

Usually you just want to take a delta time at each iteration of your game loop and then use that as the basis to advance your game time. I.e. if you take the time at 15 ms since your program started and it has been 5 ms since the last loop iteration, you essentially "pretend" all your game logic is now happening at the 15 ms mark, even though the code actually takes real time to complete.

The important part comes down to how and when you pass that time to your internal code and whether you manipulate it at all; that's how you can have effects like making the game run in slow motion, by distorting the "real life" delta time.
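A minimal sketch of that idea (all the names here are illustrative; getTicksMs() stands in for whatever platform timer you use, and gameRunning(), world and render() for your own engine):

	// Sketch: pass delta time through the game so game time only advances
	// when you feed it deltas; pausing simply stops feeding them.
	unsigned long prevMs = getTicksMs();
	bool paused = false;

	while ( gameRunning() )
	{
	    unsigned long nowMs   = getTicksMs();
	    unsigned long deltaMs = nowMs - prevMs;
	    prevMs = nowMs;

	    if ( !paused )
	        world.update( deltaMs ); // the game clock advances only via deltas

	    render();
	}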



...  maybe a real-life second is an in-game minute or some such, and time acceleration allows it to progress faster...
Is it just me, or are the posts here assuming the in-game time will increase at a more or less constant rate?

 

If you plan to run the simulation for a long time with "time scale" changes, I suggest against using a single reference point in time. In the past I've had some nasty problems with accumulation errors, and I can't be completely sure the newer frameworks have solved them. Keep around a list of "time speed" changes and reset your time reference at each time-multiplier change.
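In code, that suggestion might look something like this sketch: rather than computing gameTime = (now - start) * scale from one fixed start point, bank the elapsed game time and re-anchor whenever the multiplier changes (names are illustrative):

	// Sketch: re-anchor the reference point at every time-scale change so
	// rounding error can't accumulate across "time speed" switches.
	struct ScaledClock
	{
	    long long bankedGameMs;  // game time banked at past scale changes
	    long long anchorRealMs;  // real time of the last scale change
	    double    scale;         // game milliseconds per real millisecond

	    ScaledClock( long long nowRealMs )
	        : bankedGameMs( 0 ), anchorRealMs( nowRealMs ), scale( 1.0 ) {}

	    long long gameMs( long long nowRealMs ) const
	    {
	        return bankedGameMs + (long long)( ( nowRealMs - anchorRealMs ) * scale );
	    }

	    void setScale( long long nowRealMs, double newScale )
	    {
	        bankedGameMs = gameMs( nowRealMs ); // bank time at the old scale
	        anchorRealMs = nowRealMs;           // reset the reference point
	        scale        = newScale;
	    }
	};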


Ok, well I've been trying to get my head around actually using boost::posix_time and while I'm not 100% clear on various best practices, I think I know enough to at least botch together something reasonable once I have the right idea about the game logic.

 

Here's my current critical issue:

 

- I've been able to get a time and date displayed, and I can seemingly set it to whatever time/date I want and progress it by so many hours/days etc. without too much trouble (hopefully date problems don't arise later on, but I'll deal with that then...). I even got the seconds ticking forward like an actual clock.

HOWEVER... while I've timed it and e.g. 10 seconds did seem to take about 10 seconds to pass, the rate at which a second was added to the timer wasn't always constant. Sometimes a number clearly displayed for longer than others, then soon after a number would go by more quickly and compensate. It always seemed to average out alright, but it looked ugly and distracting.

 

I've messed around but I haven't been able to fix it. I've looked around at my game loop and thought about how many times the logic update() and render() functions are being called, and the ratios between them and blah blah but I seem to be stumped at this rather basic problem. I guess it doesn't have to be EXACT and it does seem to average out to the right time, but I assume the human eye is going to notice differences of what might be 250 - 500 ms or more between ticks.

My game loop is based around the one in this well-known article here:

http://www.koonsolo.com/news/dewitters-gameloop/

(With the same 25 ticks per second, 40 skip ticks and max frameskip of 5 used in the example)

 

I guess the problem is that the gap between every logical frame will likely be different, and the timer is presumably only showing the increase of whole seconds, so one time it might do something like... 1946ms + 54ms = 2 seconds, then perhaps go 22 ticks without quite going over the 3000ms threshold, before doing 2994ms + 43ms = 3037ms, thereby making each second come up at a different rate.

Rendering of game objects is solved by using the interpolation value to draw between 2 logical frames, but if that's the obvious proper solution here then I haven't been able to get it to work. I may have missed something, or it could be that I'm misusing the library, though I think I'm using it okay, if a little sloppily.

 

What's the obvious method I'm missing here? It must be simple in theory to display a clock with apparently perfect consistency. Or is 25 updates per second just not enough? I suspect it would be plenty to at least be imperceptible to the human eye.

 

Also: I just did some testing with OutputDebugString(). I notice I often get 3 or 4 rendering passes in a row with the same interpolation value. Is that normal...? Then again, maybe when GetTickCount() is being added to another value and having the result divided, that's perfectly expected since the effect of a few small ticks on such a result might be minimal.

Just poking around trying to understand better what's going on...


I'd never heard of those before, so I did some investigating and unsuccessfully tried to replace GetTickCount() with QPC()/QPF(). Then I eventually found this which actually deals with how to use them:

http://cplus.about.com/od/howtodothingsin1/a/timing.htm

 

That explains why division wasn't defined when I tried...

 

Anyways, am I right then in thinking that all I should have to do on a basic level is directly replace every call to GetTickCount() with the result of QPC().QuadPart/QPF().QuadPart?

 

Will try that and see what I get.


Ok, I've tried a few different things using QPC()/QPF() and come to a point where the seconds seem to be progressing quite smoothly in general, but there's still a definite, though probably significantly rarer than before, occasional time when a second visibly hangs for a moment before the next second shows up and goes by more quickly to compensate.

 

At this point I'm just wondering about the practical reality of what I'm trying to do. Should I consider it good enough to stick with the result I've described, or should this be straightforward enough that there's no reason not to go for proper consistency? It's just that looking at how the code works and the main loop is set up, I can't logically see why it wouldn't lag sometimes and I can't figure out how to come up with a solution that would have seconds reliably change every second to within 100ms or whatever might be undetectable.

 

The interpolation value seems to be what I'd expect. I'm displaying the time 1 frame behind, and using the interpolation value to try and make it go more smoothly by having it display the new value closer to the time it should. I'm using OutputDebugString() to output a number for each update frame, along with the total milliseconds to be added that frame, followed by each render frame with the amount of milliseconds to add to the displayed time for that render frame and the interpolation value that was used to derive that amount.

Here's a sample:

Update ID: 119
Time Difference:41
9.225 | INTERPOLATION: 0.225
18.45 | INTERPOLATION: 0.45
26.65 | INTERPOLATION: 0.65
36.9 | INTERPOLATION: 0.9

Update ID: 120
Time Difference:39
7.8 | INTERPOLATION: 0.2
22.425 | INTERPOLATION: 0.575
31.2 | INTERPOLATION: 0.8

Update ID: 121
Time Difference:37
4.625 | INTERPOLATION: 0.125
12.025 | INTERPOLATION: 0.325
19.425 | INTERPOLATION: 0.525
26.825 | INTERPOLATION: 0.725
36.075 | INTERPOLATION: 0.975

Update ID: 122
Time Difference:46
14.95 | INTERPOLATION: 0.325
24.15 | INTERPOLATION: 0.525
35.65 | INTERPOLATION: 0.775
46 | INTERPOLATION: 1

As you can see, we have 1 case where there are 5 renderings between 2 update frames, and another where there's only 3. Am I right to assume that a situation like that (which, if I'm not wrong, seems to be unavoidable if using the DeWitters game loop) basically makes it impossible to display a consistent and accurate second-by-second count by its very nature?

 

If so, with this getting very complicated... do most developers not aim for such accuracy at all? Could something like multithreading help?

 

I suppose frame rate dips etc. are unavoidable and expected... in which case, am I making too much of a big deal about something that doesn't matter? Maybe it's just that I'm surprised seeing it at a stage when my "game" currently consists of nothing more than an image and some text overlaid with the time.


Just to follow on from what frob said: If you have a 'pause' facility (maybe when opening a menu) and you get excessive amounts of time between updates, such as going to the debugger, a really neat trick is just to pretend the game was paused - you'll just carry on exactly where you left off.


 

Scrolling back up to my earlier wall of text: use the results of QPC as a stopwatch. Tiny errors in individual frames will work themselves out over a larger time frame. Don't try to convert back to milliseconds or do fancy division or averaging; just leave it at the high-performance clock values.

 

K, so in other words... the occasional stutter is probably ok and in line with a reasonable implementation as long as the timing averages out?

 

I've tested it a few times myself against my wristwatch, and it always does seem to be in line with what I'd expect. i.e. I try to start my watch when the game timer is at 5 seconds, and when my watch says 1 minute, the game says 1:05. Quicker or slower seconds sometimes seem to happen more or less often, but I think someone would generally have to be staring at the clock waiting for it to happen in order to be likely to notice it.

 

As for converting to milliseconds etc...

I currently add the time difference to a boost::posix_time::ptime object in milliseconds, so my function for getting ticks looks like this:

	LARGE_INTEGER count;
	QueryPerformanceCounter( &count );

	LARGE_INTEGER freq;
	QueryPerformanceFrequency( &freq );

	return ( LONGLONG( 1000 ) * count.QuadPart )/freq.QuadPart;

Then I add the result of the difference between 2 readings, multiplied by interpolation, to the displayed time. There's nothing wrong with that, is there?

 

 

 

The only time you really need to stop that clock is when the elapsed time is obviously out of bounds.  Usually that happens when you stop the game in the debugger or the system otherwise has stalled.  That is easy enough to handle as a special case, along the lines of
elapsedTime = GetTimeSinceLastUpdateWithQPC();
if(elapsedTime > WAY_TOO_LONG) {
    elapsedTime = DEFAULT_TIMESTEP;
    Logger::Log( Simulator, Logger::Info, "Excessive time between updates, possibly from an OS stall or a debugger. Resuming with a sane time step.");
}
accumulatedTime += elapsedTime;
while(accumulatedTime >= SIMULATOR_FREQUENCY)
{
  simulator.runOneStep();
  accumulatedTime -= SIMULATOR_FREQUENCY;
}

Without that kind of protection you quickly fall into an update death spiral. One update takes more time than you allow, and when you come through the next time two updates need to run to catch up. Then because you spent more time it needs to run three, then four, then soon the game is doing nothing but trying to catch up.

 

Good point... I'll be sure to put that in as soon as I clean up the code and implement time progression "properly".


As for converting to milliseconds etc...
I currently add the time difference to a boost::posix_time::ptime object in milliseconds, so my function for getting ticks looks like this:

	LARGE_INTEGER count;
	QueryPerformanceCounter( &count );

	LARGE_INTEGER freq;
	QueryPerformanceFrequency( &freq );

	return ( LONGLONG( 1000 ) * count.QuadPart )/freq.QuadPart;
Then I add the result of the difference between 2 readings, multiplied by interpolation, to the displayed time. There's nothing wrong with that, is there?

That is so wrong. The multiplication could overflow, and you are wasting precision by converting too early.
Always keep the values as raw ticks for as long as possible: subtract 2 tick values to get the time difference, then only at the last moment, before using the value for drawing, convert it to milliseconds.
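To put illustrative numbers on that overflow warning (these are not from the thread): on some older machines QueryPerformanceFrequency reports the raw CPU frequency, around 3×10^9 ticks per second. After roughly 35 days of uptime the counter sits near 9×10^15, and multiplying that by 1000 gives about 9×10^18, right at the limit of a signed 64-bit value (roughly 9.22×10^18). Subtracting two raw readings first keeps the numbers tiny before the multiply ever happens.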


K... I hadn't thought about it overflowing either... I'll try to clean things up.

 

Seems I ought to wait until the last moment to divide by the frequency too, if googling is anything to go by... and judging by how it didn't work like I expected when I made a quick and lazy attempt to change it.


Well, the best I could come up with that actually worked is the following:

timeEnd = timeStart;
timeStart = getQPC();

LONGLONG timeDiff = ( ( (  timeStart - timeEnd ) * LONGLONG( 1000 ) )/getQPCFreq() );

Where timeEnd and timeStart are LONGLONG variables, getQPC() returns the straight result of QueryPerformanceCounter(), and getQPCFreq() returns the frequency.

 

This way I'm getting the difference between the QPC ticks BEFORE multiplying by 1000 to convert to milliseconds and dividing by the frequency... so I guess that would fix the danger of overflowing since the result of 1000 times the difference should easily be small enough to fit in a LONGLONG?

 

As for in the main loop... where QPC() is being used to determine update framerate and interpolation... the problem is that time elapsed has to be in the same units as the loop variables e.g. for adding SKIP_TICKS to next_game_tick. Currently that means I'm multiplying the result of QPC() by 1000 and dividing by QPF(), leading to that risk of overflow.

 

So I assume if I could convert the value of SKIP_TICKS, in this case 40, to some format that would result in effectively adding 40 milliseconds to the QPC value, that it would all work out ok?

Is that as simple as finding X where:

( X * 1000 ) / QPFreq() == 40

?

 

i.e...

X == ( SKIP_TICKS * QPFreq() ) / 1000

Should give me the right value every time?

 

I'll go check if that works...


I tried that and it seems to more or less work at first glance. The values obviously aren't off by too much at least, but I'm getting some strange behaviour when I read the values of variables that I'm outputting...

 

The interpolation values look a lot more precise and gradual now, and the display seems to look no worse than ever... but every so often it appears that a frame has its rendering stopped prematurely (i.e. there have been a few rendering passes and the interpolation value is not yet anywhere near 1) and then the next update frame has a time difference value of something absurd around 600 when it should be around 40. Then the next 15 or so update frames have a time difference of about 2 or so and show no sign of the rendering function being called at all (and the interpolation goes above 1.0... which should never happen, maybe something to do with so long since rendering?...) before everything resumes as normal.

Another strange thing: these seem to happen around the same times every time I run the program. I see one happens around the 60th update, another around the 140th.

 

Does this sound like some form of overflow at work? At least it looks like something is accumulating until it reaches something bad, and then "fixes" itself at least enough to work relatively normally.

 

I assume it's something to do with how I converted that SKIP_TICKS value... though I can't yet see why or how the time difference jumps so much all of a sudden. Between all of the possible casts going on and the kind of accuracy that's involved, it all could be something very minor and hard to notice.

 

Here's some sample output of what I've described. The Update ID and Time Difference lines are output every logical update frame, with the other lines each showing a rendering pass with the time difference multiplied by the interpolation, followed by the interpolation value itself for that frame.

Update ID: 63
Time Difference:40
4.66638 | INTERPOLATION: 0.116659
7.23423 | INTERPOLATION: 0.180856
9.89777 | INTERPOLATION: 0.247444
12.4778 | INTERPOLATION: 0.311946
14.8559 | INTERPOLATION: 0.371397
17.3153 | INTERPOLATION: 0.432882
19.8922 | INTERPOLATION: 0.497305
23.0212 | INTERPOLATION: 0.575531
26.2392 | INTERPOLATION: 0.655981
29.2069 | INTERPOLATION: 0.730173
31.8234 | INTERPOLATION: 0.795585
34.3607 | INTERPOLATION: 0.859017
36.6056 | INTERPOLATION: 0.915139
38.9679 | INTERPOLATION: 0.974197

Update ID: 64
Time Difference:40
3.63768 | INTERPOLATION: 0.0909421
6.05035 | INTERPOLATION: 0.151259
8.29023 | INTERPOLATION: 0.207256
10.294 | INTERPOLATION: 0.257351
12.723 | INTERPOLATION: 0.318075
15.0759 | INTERPOLATION: 0.376897
17.2527 | INTERPOLATION: 0.431317
19.6355 | INTERPOLATION: 0.490887
22.5972 | INTERPOLATION: 0.564929
25.4675 | INTERPOLATION: 0.636687
27.9269 | INTERPOLATION: 0.698172
30.5348 | INTERPOLATION: 0.763369
32.9358 | INTERPOLATION: 0.823394
35.8223 | INTERPOLATION: 0.895558
38.7406 | INTERPOLATION: 0.968514

Update ID: 65
Time Difference:39
3.49799 | INTERPOLATION: 0.0896919
5.5751 | INTERPOLATION: 0.142951

Update ID: 66
Time Difference:598

Update ID: 67
Time Difference:2

Update ID: 68
Time Difference:2

Update ID: 69
Time Difference:2

Update ID: 70
Time Difference:2
20.584 | INTERPOLATION: 10.292

Update ID: 71
Time Difference:4

Update ID: 72
Time Difference:2

Update ID: 73
Time Difference:2

Update ID: 74
Time Difference:2

Update ID: 75
Time Difference:2
11.2783 | INTERPOLATION: 5.63913

Update ID: 76
Time Difference:5

Update ID: 77
Time Difference:2

Update ID: 78
Time Difference:2

Update ID: 79
Time Difference:2

Update ID: 80
Time Difference:2
2.00298 | INTERPOLATION: 1.00149

Update ID: 81
Time Difference:4
0.47629 | INTERPOLATION: 0.119072
0.751395 | INTERPOLATION: 0.187849
1.00378 | INTERPOLATION: 0.250945
1.26182 | INTERPOLATION: 0.315456
1.52063 | INTERPOLATION: 0.380158
1.77522 | INTERPOLATION: 0.443805
1.98939 | INTERPOLATION: 0.497349
2.19667 | INTERPOLATION: 0.549167
2.49451 | INTERPOLATION: 0.623627
2.77416 | INTERPOLATION: 0.69354
3.00062 | INTERPOLATION: 0.750155
3.23221 | INTERPOLATION: 0.808053
3.43292 | INTERPOLATION: 0.858229
3.68048 | INTERPOLATION: 0.920119
3.9097 | INTERPOLATION: 0.977426

Update ID: 82
Time Difference:38
3.56342 | INTERPOLATION: 0.0937742
5.73105 | INTERPOLATION: 0.150817
8.22578 | INTERPOLATION: 0.216468
10.9996 | INTERPOLATION: 0.289463
13.7706 | INTERPOLATION: 0.362384
16.0441 | INTERPOLATION: 0.422213
18.2864 | INTERPOLATION: 0.481222
20.7137 | INTERPOLATION: 0.545097
23.057 | INTERPOLATION: 0.606764
25.5174 | INTERPOLATION: 0.67151
27.8717 | INTERPOLATION: 0.733465
30.2256 | INTERPOLATION: 0.79541
33.3237 | INTERPOLATION: 0.87694
36.6278 | INTERPOLATION: 0.963889

I'll have to go through this slowly...

 

EDIT: The code I'm using to derive the amount to add to next_game_tick is the following BTW:

LONGLONG SKIP_TICKS_QPC = ( LONGLONG( Engine::SKIP_TICKS ) * getQPFreq() ) / LONGLONG( 1000 );

Where Engine::SKIP_TICKS is an int.

 

Though I can now see that it would be possible for the result of getQPFreq() * 40 to overflow... the fact remains that IIRC it should all result in the same value every time, so that doesn't explain why it screws up only some of the time and with such regularity...

 

EDIT2: Yep, well it definitely does equal the same value every time.

 

EDIT3: Hmmm... studying it now and what happens seems to be that something unexpectedly causes the game to enter the update loop even though there should be tons of rendering time left before then. The only reasons this might happen are...

while( getQPC() > next_game_tick && loops < Engine::MAX_FRAMESKIP )

...if loops < Engine::MAX_FRAMESKIP (that's definitely not it), OR... if next_game_tick finds itself smaller than getQPC().

 

Sooo... considering this is happening after maybe... 2 or 3 renderings into an update frame, where the interpolation value may be 0.23432 or something before the rendering is abandoned for a logical update... and the only way next_game_tick can be changed is by having SKIP_TICKS_QPC added to it when inside the update loop, AND I know for a fact that SKIP_TICKS_QPC evaluates to the same value every time...

 

...does that make it seem as though next_game_tick overflowing in some way is the most likely explanation? If people are still following me.

 

As for why the time difference variable goes to around 600... maybe that's working fine and it's just thrown off by whatever next_game_tick's overflow is doing? Indeed, when I run it and look at the debug output window spewing lines of text, I often notice a very discernible pause before it starts up again. Could that be... 600ms of a pause? Looks likely that it probably is...

 

And yet I still can't quite figure out why it would actually be pausing for 600ms between 2 updates instead of just looping around calling update() on the active game state...



Ok, well I just printed out the value of next_game_tick for every update before and after SKIP_TICKS_QPC is added, but the difference between the 2 always seems to be equal to the same correct value...

 

Or could this just be some relatively normal result of the program simply lagging for a moment and having to catch up? Have I been trying to chase down a non-existent bug and the computer is just deprioritizing it for a moment while it does other things?

 

Either way, I must be learning things. :)


EDIT3: Hmmm... studying it now and what happens seems to be that something unexpectedly causes the game to enter the update loop even though there should be tons of rendering time left before then. The only reasons this might happen are...

while( getQPC() > next_game_tick && loops < Engine::MAX_FRAMESKIP )
...if loops < Engine::MAX_FRAMESKIP (that's definitely not it), OR... if next_game_tick finds itself smaller than getQPC().
 

Go back up to my first big wall of text where I specifically said to not do that.

Then go back up to my second block of text with a code snippet, repeating and showing how to accumulate time.

Never use a single block, a single count. Always accumulate time as you would in a stopwatch.

Count the time since the last time you came through the loop. Accumulate it in a buffer. When that time exceeds the amount of time in a simulator step, advance the simulator a single step. If the time exceeds two or three simulator steps, advance the simulator that many times. Each time you advance the simulator step, subtract that much time from the accumulator.
Then as a safety measure, if the time step of accumulated time is too long, cap it to a value like 3 or 4 or 5 simulator ticks.

Do not rely on individual time ticks. Individually they cause no end of problems. Accumulate time, and when enough time has passed on your stopwatch then advance your simulator.
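Putting those two pieces of advice together, the loop might look like this sketch, with everything kept in raw QPC ticks (names are illustrative; getQPC()/getQPCFreq() are the poster's wrappers, and the cap mirrors frob's earlier snippet):

	// Sketch: accumulate elapsed time in raw QPC ticks, advance the
	// simulation in fixed steps, and render with the leftover fraction.
	LONGLONG stepTicks   = ( 40 * getQPCFreq() ) / 1000; // 40 ms in ticks
	LONGLONG capTicks    = 5 * stepTicks;                // safety cap
	LONGLONG accumulated = 0;
	LONGLONG prev        = getQPC();

	while ( gameRunning() )
	{
	    LONGLONG now     = getQPC();
	    LONGLONG elapsed = now - prev;
	    prev = now;

	    if ( elapsed > capTicks )  // debugger pause, OS stall, etc.
	        elapsed = stepTicks;
	    accumulated += elapsed;

	    while ( accumulated >= stepTicks )
	    {
	        simulator.runOneStep();
	        accumulated -= stepTicks;
	    }

	    // Always in [0, 1): no full step is ever left in the accumulator here.
	    double interpolation = double( accumulated ) / double( stepTicks );
	    render( interpolation );
	}

Note that with this structure the interpolation value can't exceed 1.0, since any full step left in the accumulator is consumed before rendering.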


Go back up to my first big wall of text where I specifically said to not do that.

 

 

Hmm... reading that back again a few times, I'm not completely sure about a few things.

 

Perhaps I haven't explained my current system well enough. Sorry if this gets too long and rambly in parts, but I'll try to quickly summarize the situation of where I'm at now so we're definitely on the same page:

 

1. As I see it, I have 2 mostly separate but interconnected areas of code where I need timing. These are the main game loop itself, where I use timing to control the rate of rendering and game logic updates, and a GameTime class for time management where I'm using boost::posix_time to maintain an in-game date and time system for the gameworld.

2. The only way in which one directly influences the other is that the game loop calculates an interpolation value representing how far between update frames we are, which GameTime then uses to display a smoother clock (by showing the previous time, plus the difference between that and the current time multiplied by interpolation).

3. The game loop is based on the DeWitters loop described here:

http://www.koonsolo.com/news/dewitters-gameloop/

4. The code you quoted in your last post is from the actual game loop itself in main.cpp, where it determines whether or not it's time to update the game logic (instead of rendering) based on the current amount of ticks compared to next_game_tick, which is a previous tick reading with time added to allow rendering between updates.

I'm wondering now: did you assume that line to be part of the GameTime class, controlling the logic for the actual in-game time progression? But speaking of which, there isn't anything about the DeWitters game loop that would be inherently bad for displaying a smooth clock progression, is there? Timing is something that really gives me a headache in game development...

 

5. As for the GameTime logic, did you mean not to use the actual system time, e.g. 12:45? That's what I assumed, though I hadn't planned on linking in-game time to real-world time beyond in-game time progression being a ratio of real time. Or did you mean not to use the currently elapsed number of ticks...? I'm not 100% sure now.

6. For the Stopwatch comparison... while I haven't gotten into time acceleration yet or even simulating any logic beyond trying to make the clock display correctly (so I'll probably want to implement something like your accumulatedTime example when I do), if I understand it correctly then I THINK that's more or less what I'm doing now, in theory at least.

Here's the logic:

- I have 2 LONGLONG variables called timeStart and timeEnd.

- I then have 3 boost::posix_time::ptime objects called gameTime, prevGameTime and displayTime.

- At the start of each update frame in my time management class, I set timeEnd to the value of timeStart (set during the previous update), and timeStart to the current QPC() ticks.

- I then update the gameTime and prevGameTime objects like so:

LONGLONG timeDiff = ( ( ( timeStart - timeEnd ) * LONGLONG( 1000 ) )/XgetQPFreq() );

prevGameTime = gameTime;
gameTime = prevGameTime + milliseconds( timeDiff );

Seems to me that if I want to use boost::posix_time::ptime, this is the latest point at which I can convert to milliseconds.

So now I have prevGameTime storing the previous update frame's value for gameTime, and gameTime has the time difference between frames added in the form of milliseconds, using boost::posix_time's milliseconds() function.

- The rendering function looks as follows (excluding the actual code that draws the text to screen):

LONGLONG timeDiff = ( ( ( timeStart - timeEnd ) * LONGLONG( 1000 ) )/XgetQPFreq() );
displayTime = prevGameTime + milliseconds( timeDiff * interpolation );

I calculate timeDiff the same way, then I set displayTime (which obviously will be used as the source of the time to be displayed on screen) to the value of prevGameTime with the time difference multiplied by the interpolation value again added in the form of milliseconds.

 

Is that about right? Haven't got any further than trying to display the time yet, so there's some cleaning up and reorganizing to do for when I start on the actual gameplay, but the time does appear to be functioning logically correctly now, and the interpolation value looks good too as far as I can tell.

It's just really that now and then there's a situation where the difference between calls of the main game loop goes from being around 40 to being around 600 or 700 (so rendering is disrupted for a moment while the game catches up) and I can't explain why. Looking at my debug output, everything seems to function properly as I would expect for a situation where 600ms have passed between loop runs.

 

Am I right in thinking that over half a second is a huge amount of time to pass between runs of a loop that normally runs at 40ms, and that you really wouldn't expect this to happen unless it was something wrong in my code that was causing it?

Or is it possible/normal/expected that a computer, even with 4 cores, may let a program like this hang now and then for as long as 700ms when it isn't even under heavy load?

It's the exacting nature of implementing a time system that worries me... I just don't want to ignore something now because it seems "ok", and then find many weeks down the road that I had it wrong and now my game doesn't work right and needs to be fixed.


I use timing to control the rate of rendering and game logic updates, and a GameTime class for time management where I'm using boost::posix_time to maintain an in-game date and time system for the gameworld.

 
This is probably way overkill. Do you need time zones, leap years and leap seconds and leap microseconds? Do you need daylight saving time? Do you need rather convoluted rules for posix date string formatting and parsing? Do you need microsecond time resolution? 
 
It is also likely to cause serious problems in practice. Fix your timestep. Failing to do so is a common source of physics bugs, of animation bugs, of navigation and motion bugs, and of other both exploitable and problematic issues.
 
Without a fixed timestep players on different classes of machines will experience very different behaviors, and it is often difficult to test or reproduce the behaviors during development.
 
This is why I wrote above not to simulate real-life time inside game simulator time; doing so adds an enormous amount of complexity. Keep a simple number around to represent the time since your simulator's epoch.
 
A simple counter in your code is far easier than trying to emulate the amazingly complex time system used in real life: a system based on an approximation of the time the Earth takes to revolve on its axis, which in reality changes slightly with every earthquake and even every meteor strike, plus an approximation of the time the Earth takes to orbit the Sun, which is off by roughly a quarter day and also drifts over time.
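As a sketch of that "simple number since your simulator's epoch" idea (the calendar constants here are illustrative; a fictional 30-day month sidesteps real-world leap rules entirely):

	// Sketch: one counter of in-game minutes since the simulator's epoch,
	// decoded into a calendar date only when something needs to display it.
	struct GameDate { int year, month, day, hour, minute; };

	GameDate decodeGameDate( long long gameMinutes )
	{
	    const long long MINS_PER_HOUR   = 60;
	    const long long HOURS_PER_DAY   = 24;
	    const long long DAYS_PER_MONTH  = 30; // fictional, uniform months
	    const long long MONTHS_PER_YEAR = 12;

	    GameDate d;
	    d.minute = (int)( gameMinutes % MINS_PER_HOUR );
	    long long hours  = gameMinutes / MINS_PER_HOUR;
	    d.hour   = (int)( hours % HOURS_PER_DAY );
	    long long days   = hours / HOURS_PER_DAY;
	    d.day    = (int)( days % DAYS_PER_MONTH ) + 1;
	    long long months = days / DAYS_PER_MONTH;
	    d.month  = (int)( months % MONTHS_PER_YEAR ) + 1;
	    d.year   = 2040 + (int)( months / MONTHS_PER_YEAR ); // arbitrary start year
	    return d;
	}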

 

- At the start of each update frame in my time management class, I set timeEnd to the value of timeStart (set during the previous update), and timeStart to the current QPC() ticks. ... then update the gameTime and prevGameTime objects like so...

 
This is a big nasty bug waiting to happen.

 

Every time you take a step it is a variable amount of time.  You may advance 6ms, 8ms, 5ms, 15ms, 17ms, 6ms, 7ms, ... and that kind of variability causes problems.

 

Accumulate time, and only advance by the time step. If you picked a time of 10ms for your timestep, if at least 10ms has accumulated, advance a single step and subtract 10ms. So in the list above, first you accumulate 6ms and that isn't enough to advance a step. Then +8ms gives 14ms so you advance one step moving to 4ms accumulated. Adding 5ms gives 9ms accumulated, no simulate. Then you get +15ms bringing the total to 24ms so you advance the simulation twice, bringing you back to 4ms accumulated. +17ms gives 21ms accumulated so you advance the simulation twice and have 1ms accumulated. +6ms brings you to 7ms accumulated so no update. Then +7ms gives 14ms accumulated so advance your step by one.

 

While it may seem bad on paper, to the player it provides a much smoother experience. Everything moves at regular distances, and physics works the same for everybody, reducing the risk of various errant polygon problems. Without a fixed step, collisions become problematic: minor errors in physics systems tend to explode during long steps, giving values that jump radically out of range, forces that launch game objects across the board at incredible speed, or that even launch unsuspecting items into orbit and beyond. Animations can be skipped partially or entirely, triggered events can be missed, and audio becomes a nightmare.

 

This goes back to fixing your timestep. Variable simulation is nasty business.

 

Am I right in thinking that over half a second is a huge amount of time to pass between runs of a loop that normally runs at 40ms, and that you really wouldn't expect this to happen unless it was something wrong in my code that was causing it? .. is it possible/normal/expected that a computer, even with 4 cores, may let a program like this hang now and then for as long as 700ms when it isn't even under heavy load?

 

Minor sputters and stalls like that happen all the time. Major pauses also happen quite often, but not as frequently. Your window gets resized or switched between apps, or the user switches between apps, or an alarm goes off on the system triggering some behavior on the machine, or the user intentionally deprioritizes your process, or the user shuts the laptop lid and reopens it later, or you stop the game in a debugger, or some other situation happens. 

 

A 700ms stutter isn't the kind of thing you see every frame, but you do see them often enough. It may happen because of bugs and glitches in your game. It may happen because the user happened to do something that spawned millions of particles and your buggy particle system didn't cap it. It may happen because the user took a moment to save the game and the game stalled as everything was written. Or maybe something external happened: immediately coming to mind are other processes fighting for resources, or the user switching processes, or sleep/suspend/wake cycles.

 

That is part of why it is vitally important that you fix your time step and only advance by incremental time steps and cap the time to a reasonable maximum step. For example, with a 10ms update you may want to cap it to 40ms maximum.

