C++: Easiest way to implement an in-game Time/Date system?...

32 comments, last by Sean_Seanston 9 years, 3 months ago

I tried that and it seems to more or less work at first glance. The values obviously aren't off by too much at least, but I'm getting some strange behaviour when I read the values of variables that I'm outputting...

The interpolation values look a lot more precise and gradual now, and the display seems to look no worse than ever... but every so often a frame appears to have its rendering stopped prematurely (i.e. there have been a few rendering passes and the interpolation value is nowhere near 1), and then the next update frame has an absurd time difference of around 600 when it should be around 40. The next 15 or so update frames then have a time difference of about 2 and show no sign of the rendering function being called at all (and the interpolation goes above 1.0... which should never happen, maybe something to do with it being so long since rendering?), before everything resumes as normal.

Another strange thing: these seem to happen around the same times every time I run the program. I see one happens around the 60th update, another around the 140th.

Does this sound like some form of overflow at work? At least it looks like something is accumulating until it reaches something bad, and then "fixes" itself at least enough to work relatively normally.

I assume it's something to do with how I converted that SKIP_TICKS value... though I can't yet see why or how the time difference jumps so much all of a sudden. Between all of the possible casts going on and the kind of accuracy that's involved, it all could be something very minor and hard to notice.

Here's some sample output of what I've described. The Update ID and Time Difference lines are output every logical update frame, with the other lines each showing a rendering pass with the time difference multiplied by the interpolation, followed by the interpolation value itself for that frame.


Update ID: 63
Time Difference:40
4.66638 | INTERPOLATION: 0.116659
7.23423 | INTERPOLATION: 0.180856
9.89777 | INTERPOLATION: 0.247444
12.4778 | INTERPOLATION: 0.311946
14.8559 | INTERPOLATION: 0.371397
17.3153 | INTERPOLATION: 0.432882
19.8922 | INTERPOLATION: 0.497305
23.0212 | INTERPOLATION: 0.575531
26.2392 | INTERPOLATION: 0.655981
29.2069 | INTERPOLATION: 0.730173
31.8234 | INTERPOLATION: 0.795585
34.3607 | INTERPOLATION: 0.859017
36.6056 | INTERPOLATION: 0.915139
38.9679 | INTERPOLATION: 0.974197

Update ID: 64
Time Difference:40
3.63768 | INTERPOLATION: 0.0909421
6.05035 | INTERPOLATION: 0.151259
8.29023 | INTERPOLATION: 0.207256
10.294 | INTERPOLATION: 0.257351
12.723 | INTERPOLATION: 0.318075
15.0759 | INTERPOLATION: 0.376897
17.2527 | INTERPOLATION: 0.431317
19.6355 | INTERPOLATION: 0.490887
22.5972 | INTERPOLATION: 0.564929
25.4675 | INTERPOLATION: 0.636687
27.9269 | INTERPOLATION: 0.698172
30.5348 | INTERPOLATION: 0.763369
32.9358 | INTERPOLATION: 0.823394
35.8223 | INTERPOLATION: 0.895558
38.7406 | INTERPOLATION: 0.968514

Update ID: 65
Time Difference:39
3.49799 | INTERPOLATION: 0.0896919
5.5751 | INTERPOLATION: 0.142951

Update ID: 66
Time Difference:598

Update ID: 67
Time Difference:2

Update ID: 68
Time Difference:2

Update ID: 69
Time Difference:2

Update ID: 70
Time Difference:2
20.584 | INTERPOLATION: 10.292

Update ID: 71
Time Difference:4

Update ID: 72
Time Difference:2

Update ID: 73
Time Difference:2

Update ID: 74
Time Difference:2

Update ID: 75
Time Difference:2
11.2783 | INTERPOLATION: 5.63913

Update ID: 76
Time Difference:5

Update ID: 77
Time Difference:2

Update ID: 78
Time Difference:2

Update ID: 79
Time Difference:2

Update ID: 80
Time Difference:2
2.00298 | INTERPOLATION: 1.00149

Update ID: 81
Time Difference:4
0.47629 | INTERPOLATION: 0.119072
0.751395 | INTERPOLATION: 0.187849
1.00378 | INTERPOLATION: 0.250945
1.26182 | INTERPOLATION: 0.315456
1.52063 | INTERPOLATION: 0.380158
1.77522 | INTERPOLATION: 0.443805
1.98939 | INTERPOLATION: 0.497349
2.19667 | INTERPOLATION: 0.549167
2.49451 | INTERPOLATION: 0.623627
2.77416 | INTERPOLATION: 0.69354
3.00062 | INTERPOLATION: 0.750155
3.23221 | INTERPOLATION: 0.808053
3.43292 | INTERPOLATION: 0.858229
3.68048 | INTERPOLATION: 0.920119
3.9097 | INTERPOLATION: 0.977426

Update ID: 82
Time Difference:38
3.56342 | INTERPOLATION: 0.0937742
5.73105 | INTERPOLATION: 0.150817
8.22578 | INTERPOLATION: 0.216468
10.9996 | INTERPOLATION: 0.289463
13.7706 | INTERPOLATION: 0.362384
16.0441 | INTERPOLATION: 0.422213
18.2864 | INTERPOLATION: 0.481222
20.7137 | INTERPOLATION: 0.545097
23.057 | INTERPOLATION: 0.606764
25.5174 | INTERPOLATION: 0.67151
27.8717 | INTERPOLATION: 0.733465
30.2256 | INTERPOLATION: 0.79541
33.3237 | INTERPOLATION: 0.87694
36.6278 | INTERPOLATION: 0.963889

I'll have to go through this slowly...

EDIT: The code I'm using to derive the amount to add to next_game_tick is the following BTW:


LONGLONG SKIP_TICKS_QPC = ( LONGLONG( Engine::SKIP_TICKS ) * getQPFreq() ) / LONGLONG( 1000 );

Where Engine::SKIP_TICKS is an int.

Though I can now see that it would be possible for the result of getQPFreq() * 40 to overflow... the fact remains that IIRC it should all result in the same value every time, so that doesn't explain why it screws up only some of the time and with such regularity...

EDIT2: Yep, well it definitely does equal the same value every time.

EDIT3: Hmmm... studying it now and what happens seems to be that something unexpectedly causes the game to enter the update loop even though there should be tons of rendering time left before then. The only reasons this might happen are...


while( getQPC() > next_game_tick && loops < Engine::MAX_FRAMESKIP )

...if loops < Engine::MAX_FRAMESKIP (that's definitely not it), OR... if next_game_tick finds itself smaller than getQPC().

Sooo... considering this is happening after maybe... 2 or 3 renderings into an update frame, where the interpolation value may be 0.23432 or something before the rendering is abandoned for a logical update... and the only way next_game_tick can be changed is by having SKIP_TICKS_QPC added to it when inside the update loop, AND I know for a fact that SKIP_TICKS_QPC evaluates to the same value every time...

...does that make it seem as though next_game_tick overflowing in some way is the most likely explanation? If people are still following me.

As for why the time difference variable goes to around 600... maybe that's working fine and it's just thrown off by whatever next_game_tick's overflow is doing? Indeed, when I run it and look at the debug output window spewing lines of text, I often notice a very discernible pause before it starts up again. Could that be... 600ms of a pause? Looks likely that it probably is...

And yet I still can't quite figure out why it would actually be pausing for 600ms between 2 updates instead of just looping around calling update() on the active game state...


Ok, well I just printed out the value of next_game_tick for every update before and after SKIP_TICKS_QPC is added, but the difference between the 2 always seems to be equal to the same correct value...

Or could this just be some relatively normal result of the program simply lagging for a moment and having to catch up? Have I been trying to chase down a non-existent bug and the computer is just deprioritizing it for a moment while it does other things?

Either way, I must be learning things.

EDIT3: Hmmm... studying it now and what happens seems to be that something unexpectedly causes the game to enter the update loop even though there should be tons of rendering time left before then. The only reasons this might happen are...

while( getQPC() > next_game_tick && loops < Engine::MAX_FRAMESKIP )
...if loops < Engine::MAX_FRAMESKIP (that's definitely not it), OR... if next_game_tick finds itself smaller than getQPC().

Go back up to my first big wall of text where I specifically said to not do that.

Then go back up to my second block of text, with a code snippet showing how to accumulate time.

Never use a single block, a single count. Always accumulate time as you would in a stopwatch.

Count the time since the last time you came through the loop. Accumulate it in a buffer. When that time exceeds the amount of time in a simulator step, advance the simulator a single step. If the time exceeds two or three simulator steps, advance the simulator that many times. Each time you advance the simulator step, subtract that much time from the accumulator.
Then as a safety measure, if the accumulated time is too long, cap it to a value like 3 or 4 or 5 simulator ticks.

Do not rely on individual time ticks. Individually they cause no end of problems. Accumulate time, and when enough time has passed on your stopwatch then advance your simulator.

Go back up to my first big wall of text where I specifically said to not do that.

Hmm... reading that back again a few times, I'm not completely sure about a few things.

Perhaps I haven't explained my current system well enough. Sorry if this gets too long and rambly in parts, but I'll try to quickly summarize the situation of where I'm at now so we're definitely on the same page:

1. As I see it, I have 2 mostly separate but interconnected areas of code where I need timing. These are the main game loop itself, where I use timing to control the rate of rendering and game logic updates, and a GameTime class for time management where I'm using boost::posix_time to maintain an in-game date and time system for the gameworld.

2. The only way in which one directly influences the other is that the game loop calculates an interpolation value representing how far between update frames we are, which GameTime then uses to display a smoother clock (by showing the previous time, plus the difference between that and the current time multiplied by interpolation).

3. The game loop is based on the DeWitters loop described here:

http://www.koonsolo.com/news/dewitters-gameloop/.

4. The code you quoted in your last post is from the actual game loop itself in main.cpp, where it determines whether or not it's time to update the game logic (instead of rendering) based on the current amount of ticks compared to next_game_tick, which is a previous tick reading with time added to allow rendering between updates.

I'm wondering now: did you assume that line to be part of the GameTime class, controlling the logic for the actual in-game time progression? But speaking of which, there isn't anything about the DeWitters game loop that would be inherently bad for displaying a smooth clock progression, is there? Timing is something that really gives me a headache in game development...

5. As for the GameTime logic, did you mean not to use the actual system time, e.g. 12:45? That's what I assumed, though I hadn't planned on linking in-game time to real-world time beyond in-game time progression being a ratio of real time. Or did you mean not to use the currently elapsed number of ticks...? I'm not 100% sure now.

6. For the Stopwatch comparison... while I haven't gotten into time acceleration yet or even simulating any logic beyond trying to make the clock display correctly (so I'll probably want to implement something like your accumulatedTime example when I do), if I understand it correctly then I THINK that's more or less what I'm doing now, in theory at least.

Here's the logic:

- I have 2 LONGLONG variables called timeStart and timeEnd.

- I then have 3 boost::posix_time::ptime objects called gameTime, prevGameTime and displayTime.

- At the start of each update frame in my time management class, I set timeEnd to the value of timeStart (set during the previous update), and timeStart to the current QPC() ticks.

- I then update the gameTime and prevGameTime objects like so:


LONGLONG timeDiff = ( ( ( timeStart - timeEnd ) * LONGLONG( 1000 ) ) / getQPFreq() );

prevGameTime = gameTime;
gameTime = prevGameTime + milliseconds( timeDiff );

It seems to me that if I want to use boost::posix_time::ptime, this is the latest point at which I can convert to milliseconds.

So now I have prevGameTime storing the previous update frame's value for gameTime, and gameTime has the time difference between frames added in the form of milliseconds, using boost::posix_time's milliseconds() function.

- The rendering function looks as follows (excluding the actual code that draws the text to screen):


LONGLONG timeDiff = ( ( ( timeStart - timeEnd ) * LONGLONG( 1000 ) ) / getQPFreq() );
displayTime = prevGameTime + milliseconds( timeDiff * interpolation );

I calculate timeDiff the same way, then I set displayTime (which obviously will be used as the source of the time to be displayed on screen) to the value of prevGameTime with the time difference multiplied by the interpolation value again added in the form of milliseconds.

Is that about right? Haven't got any further than trying to display the time yet, so there's some cleaning up and reorganizing to do for when I start on the actual gameplay, but the time does appear to be functioning logically correctly now, and the interpolation value looks good too as far as I can tell.

It's just really that now and then there's a situation where the difference between calls of the main game loop goes from being around 40 to being around 600 or 700 (so rendering is disrupted for a moment while the game catches up) and I can't explain why. Looking at my debug output, everything seems to function properly as I would expect for a situation where 600ms have passed between loop runs.

Am I right in thinking that over half a second is a huge amount of time to pass between runs of a loop that normally runs at 40ms, and that you really wouldn't expect this to happen unless it was something wrong in my code that was causing it?

Or is it possible/normal/expected that a computer, even with 4 cores, may let a program like this hang now and then for as long as 700ms when it isn't even under heavy load?

It's the exacting nature of implementing a time system that worries me... I just don't want to ignore something now because it seems "ok", and then find many weeks down the road that I had it wrong and now my game doesn't work right and needs to be fixed.

I use timing to control the rate of rendering and game logic updates, and a GameTime class for time management where I'm using boost::posix_time to maintain an in-game date and time system for the gameworld.


This is probably way overkill. Do you need time zones, leap years and leap seconds and leap microseconds? Do you need daylight saving time? Do you need rather convoluted rules for posix date string formatting and parsing? Do you need microsecond time resolution?

It is also likely to cause serious problems in practice. Fix your timestep. Failing to do so is a common source of physics bugs, of animation bugs, of navigation and motion bugs, and of other both exploitable and problematic issues.

Without a fixed timestep players on different classes of machines will experience very different behaviors, and it is often difficult to test or reproduce the behaviors during development.

This is why I wrote above to not simulate real life time inside game simulator time, doing so adds an enormous amount of complexity. Keep a simple number around to represent the time since your simulator's epoch.

A simple counter in your code is far easier than trying to emulate the amazingly complex time system used in real life: a system based on an approximation of the time the Earth takes to revolve on its axis (which in reality changes with every earthquake and even every meteor strike), and on an approximation of the time the Earth takes to orbit the Sun, which is off by roughly a quarter day and also drifts slightly over time.

- At the start of each update frame in my time management class, I set timeEnd to the value of timeStart (set during the previous update), and timeStart to the current QPC() ticks. ... then update the gameTime and prevGameTime objects like so...


This is a big nasty bug waiting to happen.

Every time you take a step it is a variable amount of time. You may advance 6ms, 8ms, 5ms, 15ms, 17ms, 6ms, 7ms, ... and that kind of variability causes problems.

Accumulate time, and only advance by the time step. If you picked 10ms for your timestep, then whenever at least 10ms has accumulated, advance a single step and subtract 10ms. So in the list above: first you accumulate 6ms, which isn't enough to advance a step. Then +8ms gives 14ms, so you advance one step, leaving 4ms accumulated. Adding 5ms gives 9ms, so no step. Then +15ms brings the total to 24ms, so you advance the simulation twice, bringing you back to 4ms accumulated. +17ms gives 21ms, so you advance twice and have 1ms accumulated. +6ms brings you to 7ms, so no update. Then +7ms gives 14ms, so advance one step.

While it may seem bad on paper, to the player it provides a much smoother experience. Everything moves at regular distances, and physics works the same for everybody, reducing the risk of various errant polygon problems. With variable steps, collisions become problematic. Minor errors in physics systems tend to explode during long steps, giving values that jump radically out of range, forces that launch game objects across the board at incredible speed, or even launch unsuspecting items into orbit and beyond. Animations can be skipped partially or entirely, triggered events can be missed, and audio becomes a nightmare.

This goes back to fixing your timestep. Variable simulation is nasty business.

Am I right in thinking that over half a second is a huge amount of time to pass between runs of a loop that normally runs at 40ms, and that you really wouldn't expect this to happen unless it was something wrong in my code that was causing it? .. is it possible/normal/expected that a computer, even with 4 cores, may let a program like this hang now and then for as long as 700ms when it isn't even under heavy load?

Minor sputters and stalls like that happen all the time. Major pauses also happen, just not as frequently. Your window gets resized, or the user switches between apps, or an alarm goes off on the system triggering some behavior on the machine, or the user intentionally deprioritizes your process, or the user shuts the laptop lid and reopens it later, or you stop the game in a debugger, or some other situation happens.

A 700ms stutter isn't the kind of thing you see every frame, but you do see them often enough. It may happen because of bugs and glitches in your game. It may happen because the user happened to do something that spawned millions of particles and your buggy particle system didn't cap it. It may happen because the user took a moment to save the game and the game stalled as everything was written. Or maybe something external happened: immediately coming to mind are other processes fighting for resources, or the user switching processes, or sleep/suspend/wake cycles.

That is part of why it is vitally important that you fix your time step and only advance by incremental time steps and cap the time to a reasonable maximum step. For example, with a 10ms update you may want to cap it to 40ms maximum.

This is probably way overkill. Do you need time zones, leap years and leap seconds and leap microseconds? Do you need daylight saving time? Do you need rather convoluted rules for posix date string formatting and parsing? Do you need microsecond time resolution?

I probably don't need any of those (though time zones could conceivably have a use), BUT... the one reason I did want to go with boost::posix_time is because I wanted a means of keeping track of dates with an accurate calendar. That way I could have events occur on certain dates, and generally make the time progression of the game more intuitive if certain things took weeks or months etc.

After looking around, boost::posix_time seemed the simplest choice for that, where I'd have some time/date classes and not have to worry about the inconvenience of manually adding something like 43 days to the 23rd of February (and deal with leap years etc.), as well as having it know what day of the week any given date was.

Is there a more reasonable alternative if I still want to have a calendar system that corresponds to the real world? I'm quite set on the idea of a calendar visibly ticking by and affecting the gameplay somewhat, also for a certain atmosphere.

Would it actually be a good idea to implement my own date system instead? Calculating leap years doesn't seem complicated (seems to be a total of 3 simple checks), and the progression of weekdays is just a constant cycle beginning with the weekday at a known point. I assume setting the date to an arbitrary value (e.g. 13th October 2059) wouldn't be complicated either once the date progression was done properly (use modulo a lot, I would think). I could be missing something huge here, but it doesn't seem as convoluted to me as I first assumed when I decided to use boost.

Could that be more practical than boost if I just need a clock and proper dates? Or would it be a huge undertaking?

It is also likely to cause serious problems in practice. Fix your timestep. Failing to do so is a common source of physics bugs, of animation bugs, of navigation and motion bugs, and of other both exploitable and problematic issues.

Interesting article... had some trouble understanding the exact specifics of how it works, but hopefully I have a decent idea by now...

When comparing to the DeWitters game loop, I can see they both use an interpolation value to render in-between the actual physics states, but the main difference is the introduction of the constant dt value which is independent from the actual time between updates, that being represented by frameTime.

If I understand this correctly then the end result is that:

- Updates will happen every dt milliseconds (assuming we're using milliseconds) on average, with updates only taking place for every whole multiple of dt milliseconds. Though the real time between updates will still presumably fluctuate.

- Unexpectedly long times between frames will result in skipping render frames to catch up with more update frames, with the simulation progress between update frames being capped at a certain level, resulting in the simulation actually slowing down.

- Due to the update frequency fluctuating, interpolation is used to smooth rendering between updates (which come dt milliseconds apart on average, but aren't consistent).

Is that it?

I also notice the integrate() function... that's not relevant for the general case of a game loop is it? It looks to me like it's just a continuation of his article on integration that he links to at the top of the page. Considering dt is constant and he passes dt into integrate(), I assume it's just for the specific example...

This is why I wrote above to not simulate real life time inside game simulator time, doing so adds an enormous amount of complexity. Keep a simple number around to represent the time since your simulator's epoch.

What kind of data type would be appropriate for keeping time?

If I assume my game will factor in time down to the minute (probably going by at the rate of 1 in-game minute per 1 real-life second at the slowest speed), should minutes be used as the base unit?

e.g. I might have the game epoch start on... June 4th 2086 at 12:43 (possibly an illogical choice, but let's make it random for argument's sake), so if someone started playing a character of age 20 at that date and characters could die of old age, then we'd never need more than say 100 years of game time to factor in (realistically, half that might be fine but if we wanted to allow for some extreme circumstances).

Taking a minute as the base time, we would need 24x60 = 1440 minutes for each day, 365x1440 = 525600 minutes for each year, and then a total of 52,560,000 minutes for an entire 100 year game. Which would actually easily fit in even a signed 32-bit integer... which is nice.

Do I have the right idea or am I way off?

So if the game allowed various start dates and the earliest might be hardcoded to January 1st 2050 00:00, then starting on that date would cause a simple int variable for the time to read 585 at 9:45 that same day. Then I could use my custom date class to convert a number of minutes to the correct weekday, month, year etc. through division, modulo etc. operations and knowing if it's a leap year and how many days should be in each month.

Starting a year later then would just involve initializing the time variable to 525600 and the date class could derive the correct date from that.

When comparing to the DeWitters game loop, I can see they both use an interpolation value to render in-between the actual physics states, but the main difference is the introduction of the constant dt value which is independent from the actual time between updates, that being represented by frameTime.

If I understand this correctly then the end result is that:
- Updates will happen every dt milliseconds (assuming we're using milliseconds) on average, with updates only taking place for every whole multiple of dt milliseconds. Though the real time between updates will still presumably fluctuate.
- Unexpectedly long times between frames will result in skipping render frames to catch up with more update frames, with the simulation progress between update frames being capped at a certain level, resulting in the simulation actually slowing down.
- Due to the update frequency fluctuating, interpolation is used to smooth rendering between updates (which come dt milliseconds apart on average, but aren't consistent).

The interpolation value exists because if you are drawing at, say, 60 FPS and only updating at 30 Hz, you're drawing every ~16.6 ms but only moving things every ~33.3 ms. The point is to conceptually think of real time and game time as separate.

When you're using variable time, you're literally injecting the time that passed since the last frame into the game update (at least as accurately as we can measure it). If it took 9.12 ms, then we tell game objects to update by that much. With a fixed timestep, it's more like you collect time in an accumulator and the simulation consumes it when it needs to advance: if it took us 90 ms to draw, the update loop will run once and subtract the time spent (~33.3 ms), leaving ~56.7 ms, then run again, leaving ~23.4 ms. Now we have run two simulation loops in a very small window of real time, but the game code jumps ahead as if it were simulating fixed slices of time.

The interpolation value comes into play because when you aren't updating, nothing in your game moves; you'd literally be drawing the same scene redundantly. In that case you might as well lock the FPS to whatever the update rate is. We get around that problem by using interpolation: we take the previous and current position of objects and draw states in between, to give the illusion that the game is still in motion even though it isn't actually updating. This all happens in a millisecond window, so it is hard if not impossible for the viewer to notice that we are a frame behind, but the smooth motion makes a very noticeable impact.

I also notice the integrate() function... that's not relevant for the general case of a game loop is it? It looks to me like it's just a continuation of his article on integration that he links to at the top of the page. Considering dt is constant and he passes dt into integrate(), I assume it's just for the specific example...

He passes dt into integrate because integrate is basically pseudocode for "do a bunch of physics simulation". Even if your timestep is fixed, you probably want to pass time into the simulation, because if you don't, all of your code will rely on how many times it is called rather than on a value passed in. If you then decide to adjust the fixed update rate of the game, suddenly everything is moving at a completely different speed, or your debuff timers are counting down faster or slower than they should. Quite awkward!

Passing in time is a sensible way to make it so you can change the top level code without any of the lower down code having to worry about -how- you get that time value, simply that you are getting it.

What kind of data type would be appropriate for keeping time?

In a purist sense? A 64-bit unsigned integer would probably give you the most accurate results. In practice you could use something like a double because of its extreme precision, but ideally you want to keep your time values as accurate as you can until you use them. I.e. if you want to interpolate between two positions, don't turn the time into a float value until you actually divide the two times; that way you only lose precision at one point. Integers will always be superior to floating point in terms of accuracy: unless your clock is giving you a float value, an int will retain perfect accuracy, even if it doesn't match up ideally to real time.

With a fixed timestep, it's more like you collect time in an accumulator and the simulation consumes it when it needs to advance: if it took us 90 ms to draw, the update loop will run once and subtract the time spent (~33.3 ms), leaving ~56.7 ms, then run again, leaving ~23.4 ms. Now we have run two simulation loops in a very small window of real time, but the game code jumps ahead as if it were simulating fixed slices of time.

Interesting... so I guess it sort of lets the simulation do its own thing a bit more, only indirectly influencing it through the accumulator. Sounds like increased encapsulation, which is nice.

Still trying to get my head around exactly how the fixed timesteps work in practice in conjunction with interpolation and the fixed steps coming with different real time between them. I guess it's just a matter of the interpolation managing to get rid of any noticeable stuttering, and the advantage over the DeWitters game loop with the variable time step is just that we know exactly what's going to happen every simulation step with complete consistency?

Actually, just thinking now... does all this mean that in practice with a variable timestep you'd likely need to keep track of the passage of actual time to properly implement gameplay mechanics (e.g. 10 second buff), whereas with a fixed timestep you could completely dispense with actual timing and just control rates of change with simple values multiplied by the timestep? Actually making the simulator code shorter and cleaner?

He passes dt into integrate because integrate is basically pseudocode for "do a bunch of physics simulation". Even if your timestep is fixed, you probably want to pass time into the simulation, because if you don't, all of your code will rely on how many times it is called rather than on a value passed in. If you then decide to adjust the fixed update rate of the game, suddenly everything is moving at a completely different speed, or your debuff timers are counting down faster or slower than they should. Quite awkward!

Passing in time is a sensible way to make it so you can change the top level code without any of the lower down code having to worry about -how- you get that time value, simply that you are getting it.

Oh yeah, I can see that now, makes sense.

In a purist sense? A 64-bit unsigned integer would probably give you the most accurate results. In practice you could use something like a double because of its extreme precision, but ideally you want to keep your time values as accurate as you can until you use them. I.e. if you want to interpolate between two positions, don't turn the time into a float value until you actually divide the two times; that way you only lose precision at one point. Integers will always be superior to floating point in terms of accuracy: unless your clock is giving you a float value, an int will retain perfect accuracy, even if it doesn't match up ideally to real time.

Makes sense...

How about my plan for keeping track of simulator "minutes" elapsed in a 32-bit integer?

Then if I understand it... I could set the epoch to January 1st 2000, have a minutesElapsed value, and if the timestep was 40ms and a simulator minute was equal to a real-life second, I could increment minutesElapsed every 25 frames (probably using some kind of secondary variable that accumulates timesteps) and use that to derive the date and time with custom date/time classes?
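A sketch of that plan, assuming the numbers above (the `SimClock` name and layout are mine): with a 40 ms fixed step and one simulator minute per real-life second, 25 fixed updates make up one simulator minute, and minutesElapsed counts minutes since the chosen epoch of January 1st 2000.

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: accumulate fixed 40 ms steps into simulator minutes.
struct SimClock {
    std::int32_t minutesElapsed = 0;          // simulator minutes since the epoch
    int frameCounter = 0;                     // fixed 40 ms steps accumulated so far
    static const int FRAMES_PER_MINUTE = 25;  // 25 * 40 ms = 1 real second

    void tick()  // call once per fixed logical update
    {
        if (++frameCounter == FRAMES_PER_MINUTE) {
            frameCounter = 0;
            ++minutesElapsed;
        }
    }
};
```

A signed 32-bit minute count covers roughly 4,000 simulator years from the epoch, so overflow is not a practical concern here.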

Now trying to change my game loop and implement the fixed timestep. I'm just wondering about a few practical issues with data types etc.

When it says hires_time_in_seconds(), that would clearly correspond to my function getQPC(), which returns the quadpart of the result of QueryPerformanceCounter(). Whether I convert it to seconds or not I guess doesn't matter, but initially I assume I should just keep the raw value for accuracy.

SO... that means double currentTime from the example has to become a LONGLONG to store the quadpart, newTime also has to be a LONGLONG and I'm assuming the example's time() function is just shorthand for the same time function as before.

But once we get to calculating frameTime, which is the result of newTime - currentTime (both being of type LONGLONG), am I right in assuming it won't be necessary to change frameTime from a double, as in the example, to another LONGLONG? The difference should inherently be quite small and therefore should always fit in a double.

It represents the time passed between 2 logical frames, so if we assume that might be... 40 or 50ms most of the time, and capped like in the example to 0.25 of a second, then am I right in thinking a double should work, even though this value will be the raw QPC value before it's divided by the frequency?

Then for capping the value of frameTime, I could check if the value of it divided by the frequency (i.e. its value in seconds) was greater than 0.25, and if so... set its value to 0.25 multiplied by the frequency (which hopefully would make its value exactly equal to 0.25 of a second), correct? Or in practice... divide the frequency by 4?

I could then add the frameTime converted to seconds (as the example seems to use, is that a good idea? I suppose a double must be precise enough...) to the accumulator, and presumably the rest of the code could stay like in the example...

One more thing: double t, is that actually needed for a game loop? Seems like it was just used to accumulate total time passed for his integration example. I'm assuming I can just forget about it unless I've misunderstood.

That would leave the loop looking something like this:


double dt = 0.01;

LONGLONG currentTime = getQPC();
double accumulator = 0.0;

State previousState;
State currentState;

while ( !quit )
{
    LONGLONG newTime = getQPC();
    double frameTime = double( ( newTime - currentTime )/getQPFreq() );
    if ( frameTime > 0.25 )
        frameTime = double( getQPFreq()/4 );
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previousState = currentState;
        update( currentState, dt );

        accumulator -= dt;
    }

    const double alpha = accumulator / dt;

    State state = currentState * alpha + 
        previousState * ( 1.0 - alpha );

    render( state );
}

Does that look right in theory?

In theory, it is better.

Implementation details are a mess, though.

double frameTime = double( ( newTime - currentTime )/getQPFreq() );
if ( frameTime > 0.25 )

frameTime = double( getQPFreq()/4 );

^^ This will not give you the results you think it does. Integer division results in an integer.

Also, why the floating point? You complain about accuracy, thinking a long will not work for you with 32-bit precision, so you move to a 64-bit type ... then you discard it and go with a double that has even fewer bits of precision. Just stick with integers (either 32-bit or 64-bit); it will make your life easier.

Also, where do the magic numbers come from? Do the 0.25 and the 4 have any significance?
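To make the "stick with integers" advice concrete, here's a minimal sketch (the function is mine, not from the thread): cap the frame time while it is still in raw QPC ticks, so both the comparison and the cap are exact integer operations, and the 0.25 s cap from the article becomes frequency / 4 ticks.

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: clamp a raw tick delta to 0.25 s worth of ticks.
// frequency is the ticks-per-second value from QueryPerformanceFrequency;
// the 0.25 s cap is the usual guard against the "spiral of death".
std::int64_t clampFrameTicks(std::int64_t frameTicks, std::int64_t frequency)
{
    const std::int64_t maxTicks = frequency / 4;  // 0.25 s in ticks, exactly
    return frameTicks > maxTicks ? maxTicks : frameTicks;
}
```

The accumulator and dt can live in ticks as well (e.g. dt in ticks = frequency / 100 for a 10 ms step); only the final interpolation alpha ever needs to become a double.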
