Fixed Time Step Game Loop

15 comments, last by belfegor 10 years, 8 months ago

I've been trying to understand fixed time step game loops for about 3 weeks and I'm still having trouble. I've read http://lspiroengine.com/?p=378 , http://gafferongames.com/game-physics/fix-your-timestep/ , and http://www.koonsolo.com/news/dewitters-gameloop/ multiple times and I still can't wrap my head around it exactly.

I understand that we want to render as fast as possible while updating the game logic at regular intervals. What I don't understand is why the interpolation value is used in the rendering function instead of the update logic function. Also, what time would I use to do my actual logic updates? Would it be the dt time (interpolation time) or what?

If someone can do a mock-up in Paint to help me visually see how everything connects, I would GREATLY appreciate it. I'm driving myself crazy trying to get this under my belt. Another problem I was having was that my timer integer was "overflowing".

(I had a tough time understanding Gaffer's tutorial; deWitter's and Spiro's I understood better.)

Imagine you have an object moving at 100 units/sec.

In a variable timestep simulation (calculate elapsed time since last frame, update a variable amount based on this elapsed value, render) you often have very small increments of elapsed time. So each time the object's visual position is updated, it only moves a very small amount. Say your simulation is currently running at 60 FPS. That means at a given update, 1/60th of a second has elapsed. Thus, this time through the loop, the object will have moved 100 units/sec * 1/60 sec = 100/60 units ≈ 1.66667 units. If units are pixels, then between this frame and the last, the visual representation of the object moved less than 2 pixels. The next frame, it will again move less than 2 pixels. And so on, until after 60 frames, or 1 second, the object has moved 100 pixels. The exact amount of elapsed time is allowed to vary, and probably will vary, so it is unlikely that you will always get exactly 60 FPS. Some frames might have a larger timestep than others (and if there is background processing, this timestep can intermittently be significantly larger, large enough to potentially destabilize a physics integration), but all in all it should be fairly smooth.
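For illustration, a minimal variable-timestep loop might look like this (a sketch only, assuming SFML's sf::Clock since that's what's used later in this thread; the position and velocity names are made up):

#include <SFML/System.hpp>

int main() {
    sf::Clock clock;
    float position = 0.0f;          // units (pixels, say)
    const float velocity = 100.0f;  // units per second

    bool running = true;
    while( running ) {
        // Elapsed time since the last frame; varies frame to frame.
        float dt = clock.restart().asSeconds();

        // At 60 FPS, dt is about 1/60 s, so the object moves about 1.67 units.
        position += velocity * dt;

        // render( position ) would go here.
    }
    return 0;
}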

Now imagine the same object in a fixed timestep simulation. The logical update rate of the fixed timestep is set to, say, 15 updates per second. (I know of older RTS games that used low rates like this, due to the sheer amount of logical processing.) If the object updates exactly 15 times per second, then an object moving 100 units per second will move 100/15, or about 6.67 units, per step. This is fixed; the object will always move nearly 7 pixels when it moves. Without interpolation, this movement can animate fairly roughly, since a 7-pixel step is rougher than a 2-pixel step.

Interpolation allows you to mimic the smooth visual update while keeping the cleaner, more deterministic fixed update rate. The idea is that since the visible update rate is decoupled from the logic update rate, you can calculate a smoothing value based on how long it has been since the last logic update, relative to the interval between updates. This gives you a factor, from 0 to 1, representing how far into the next step you are.

Using this smoothing value, you interpolate the visual transform of the object at the time of the last logic step and the transform at the time of the current logic step, based on the visual-framerate-derived smoothing value. This allows you to draw the object animating at small, incremental locations rather than the big 7-pixel steps that the logic simulation takes.

It requires you to track additional state (last transform in addition to current transform) but gives you the same smooth movement as a variable step.
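As a sketch of that idea (the names here are illustrative, not from any particular engine): keep the transform from the previous logic step alongside the current one, and blend them by the 0-to-1 factor when drawing.

struct ObjectState {
    float x, y;
};

// 'alpha' is how far we are into the current step, in [0, 1].
ObjectState interpolate( const ObjectState &prev, const ObjectState &curr, float alpha ) {
    ObjectState out;
    out.x = prev.x + ( curr.x - prev.x ) * alpha;
    out.y = prev.y + ( curr.y - prev.y ) * alpha;
    return out;
}

// In each fixed logic step:   prevState = currState;  currState = step( currState );
// In each rendered frame:     draw( interpolate( prevState, currState, alpha ) );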

Hmm. I think I understood all that. So the drawing portion of the fixed time step is almost like the lag compensation of networked games, except these predictions are pretty much 100% accurate, right?

while( GetTickCount() > next_game_tick && loops < MAX_FRAMESKIP ) {
    update_game();

    next_game_tick += SKIP_TICKS;
    loops++;
}

How do I prevent my next_game_tick variable from overflowing and why do we add the SKIP_TICKS to it? Also, loops < MAX_FRAMESKIP is to prevent the "spiral of death" correct?

Show your implementation of GetTickCount(). Depending on what it is doing it could be a problem.

Prevent things from overflowing by starting your timers at 0 every time your game is run. Search my article for the text, “Updating your timers has its own nuances”.

The section of my article just after that explains why you add SKIP_TICKS rather than just setting the current time etc.
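To illustrate the difference (my sketch, not code from the article): adding SKIP_TICKS keeps the tick boundaries on a fixed grid, whereas re-basing on the current time lets frame jitter accumulate into permanent drift.

// Tick boundaries stay on a fixed grid: 0, 40, 80, 120, ... ms.
// A frame that runs late is simply caught up on later frames.
next_game_tick += SKIP_TICKS;

// By contrast, re-basing on the (jittery) current time loses the grid;
// every late frame permanently pushes all future ticks back:
// next_game_tick = current_time + SKIP_TICKS;   // drifts over time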

L. Spiro


It's not really lag compensation, although it may help with that. Here's how I think of it:

You keep the update logic at a fixed time step so that it is consistent. Variable time steps have all sorts of problems. Just try to make it so that when you jump you reach the same height every time, and you'll see what I mean!

The drawing is interpolated because otherwise everything you see is capped at the update rate you set. For example, imagine that your update logic was running at 6 times a second, but you were getting 60 FPS. Assuming you hold the mouse still and just move straight forward, you would get 10 identical frames, then another 10 identical frames, and so on. It would look like 6 FPS!
But if every frame interpolates everything, then it looks like smooth 60 FPS again.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

For GetTickCount() I'm using gameTime.getElapsedTime().asMilliseconds(), where gameTime is an sf::Clock (using SFML). It's next_game_tick that overflows, since SKIP_TICKS keeps getting added to it but nothing ever gets subtracted from it.

 
const int TICKS_PER_SECOND = 25;
const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;   // 40 ms per logic tick
const int MAX_FRAMESKIP = 5;
sf::Clock gameTime;
// First tick is at 0, the next tick is at SKIP_TICKS
long int next_game_tick = 0;
int loops;
float interpolation;

bool game_is_running = true;
while( game_is_running ) {

    loops = 0;
    while( gameTime.getElapsedTime().asMilliseconds() > next_game_tick && loops < MAX_FRAMESKIP ) {
        std::cout << "Updating" << std::endl;

        next_game_tick += SKIP_TICKS;
        loops++;
    }

    interpolation = float( gameTime.getElapsedTime().asMilliseconds() + SKIP_TICKS - next_game_tick )
                    / float( SKIP_TICKS );
    std::cout << interpolation << std::endl;
}

Suppose John walks 1 meter per second and takes 2 long steps to do so. Jane also walks at a speed of 1 m/s but needs 4, shorter steps.

After 1 second, both John & Jane have walked 1 meter. After 2 seconds, both will have traveled 2 meters.

However, what happens after 0.25 seconds? John will have traveled 0 m because he's still in the middle of his first step. Jane, however, will already have traveled 0.25 m, because she has completed her first step and is preparing to take her second one.

Logic & physics updates are the same. We use interpolation in graphics so that what we render looks, to the eye, like John & Jane are taking a similar number of steps (and after 0.25 seconds, both John & Jane appear to have traveled 0.25 m). How many steps they appear to take depends on the rendering framerate.

We could do this for logic too, but it's advisable not to. This interpolation can have undesired side effects (particularly with extreme interpolation/extrapolation values) and can introduce very rare, hard-to-find bugs, triggered when the interpolation breaks one of your logic systems' assumptions or hits a corner case.

What is worse, the probability of triggering these logic bugs depends on the speed of the CPU, so while everything works fine on your PC, the game is buggy as hell on a faster machine... or a slower one. (Older games had these types of bugs; one game I can recall being affected is Grim Fandango, and there are more.)

It also breaks determinism, which is a very desirable property. Without determinism, each run with the same input (e.g. key strokes, random seed) may yield different results, while a deterministic game will always give the same result, given the same input and after isolating all sources of nondeterminism (such as user input and the random seed).

Determinism is desirable because it allows you to reproduce bugs & crashes in no time (imagine trying to reproduce a bug that happens one in a thousand player matches!!)

Interpolating in logic from fixed steps has almost the same effects as using a variable frame rate (instead of a fixed one). All of this applies to physics as well (which is the main point of Gaffer's article), but physics engines have the additional problems of simulation instability and the "bullet through paper" effect.

So, in short, you could interpolate values for logic, but this doesn't mean you should. Just do it for graphics.

For GetTickCount() I'm using gameTime.getElapsedTime().asMilliseconds(), where gameTime is an sf::Clock (using SFML). It's next_game_tick that overflows, since SKIP_TICKS keeps getting added to it but nothing ever gets subtracted from it.


 

This is exactly the problem (and it is 2-fold).

That is one key point to remember, but even this has a nuance. Request the current time once and only once per frame. Then store it and use that as a reference for everything else you want to do. This is not about performance, although that is one consideration (polling the current time is slow on some CPUs). This is about consistency.

Consider the extreme opposite, in which you request the current time every time you need it. Suddenly one of your objects is falling faster than the rest simply because the time value it got was higher, so it considered that more time had passed since its last update, and thus it fell further. Note that this is to be followed strictly as far as “what happens inside the game simulation” goes, but don’t read my bolded text blindly; you are free to request the current time as often as necessary on other threads, such as the input thread or sound thread, if you want to time-stamp certain events (and you should).

#1: You can’t be requesting the current time every time you need a time value. Think about how it affects your inner loop when your time function returns a different value every time it is called. My article gives one example of the problems this can cause, as quoted.
#2: As I mentioned before, clock() is forbidden as a game timer. Use QueryPerformanceCounter() and friend.
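A minimal sketch of point #1, contrasting the two approaches (the object names are hypothetical):

// Problematic: each call returns a different, ever-advancing value,
// so two objects updated in the same frame see different times.
// objectA.update( gameTime.getElapsedTime() );
// objectB.update( gameTime.getElapsedTime() );   // sees a later time than A did!

// Consistent: sample once per frame, then hand the cached value to everything.
sf::Time frameTime = gameTime.getElapsedTime();
// objectA.update( frameTime );
// objectB.update( frameTime );                   // same frame, same time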

Firstly, you need a solution to #2. You need to make your own CTime class (call it whatever you want) that is designed for handling times in games.
Core functions:
#1: Get the real time when created, store it as m_ui64LastRealTime. Also m_ui64CurTime = m_ui64LastTime = 0ULL.
#2: Once every frame, call Update() on that timer. The code is in the article.


	/**
	 * Gets the real system time.  Not to be used for any other purpose besides random-number
	 *	seeding.
	 *
	 * \return Returns the real system time in system tick units.
	 */
	LSUINT64 LSE_CALL CTime::GetRealTime() const {
		LSUINT64 ui64Ret;
		::QueryPerformanceCounter( reinterpret_cast<LARGE_INTEGER *>(&ui64Ret) );
		return ui64Ret;
	}

	/**
	 * Update the time.
	 *
	 * \param _bUpdateVirtuals If true, virtual values are updated as well.
	 */
	LSVOID LSE_CALL CTime::Update( LSBOOL _bUpdateVirtuals ) {
		LSUINT64 ui64TimeNow = GetRealTime();
		// Wrapping handled implicitly.
		LSUINT64 ui64Dif = ui64TimeNow - m_ui64LastRealTime;
		m_ui64LastRealTime = ui64TimeNow;

		UpdateBy( ui64Dif, _bUpdateVirtuals );
	}

	/**
	 * Update the time by a certain amount.
	 *
	 * \param _ui64Amnt Number of ticks by which to update the time.
	 * \param _bUpdateVirtuals If true, virtual values are updated as well.
	 */
	LSVOID LSE_CALL CTime::UpdateBy( LSUINT64 _ui64Amnt, LSBOOL _bUpdateVirtuals ) {
		m_ui64LastTime = m_ui64CurTime;
		m_ui64CurTime += _ui64Amnt;
		…

#3: Notice that GetRealTime() is called only once per frame. The time since the last update is then calculated and all the timer counters are adjusted by that much. Never get the real time more than once per frame. It’s changing, and nothing related to time should be changing until your next update (actually this is not quite true but I won’t confuse you with the details for now, especially not until you have made a game-suitable time class which is instance-based—never work with global time objects).
#4: The game loop begins by calling Update() on its timer. It uses the microseconds since the last update to determine how many logical updates it needs to do. This calculation is done before it actually does any logical updating. I.e., you can't know how many logical updates you need to do by using a time value that changes every time you call it; it is prepared in advance.
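A sketch of that shape (an accumulator-style variant of my own, not L. Spiro's actual engine code):

#include <cstdint>

const std::uint64_t FIXED_STEP_US = 1000000ULL / 25ULL;   // 40,000 us per logic tick
std::uint64_t accumulator = 0ULL;

void Frame( std::uint64_t ui64MicrosSinceLastUpdate ) {
    // The timer was updated exactly once; this delta is now a fixed number.
    accumulator += ui64MicrosSinceLastUpdate;

    // Decide up-front how many whole logic steps to run this frame.
    std::uint64_t steps = accumulator / FIXED_STEP_US;
    accumulator -= steps * FIXED_STEP_US;

    for ( std::uint64_t i = 0ULL; i < steps; ++i ) {
        // UpdateLogic( FIXED_STEP_US );   // always the same step size
    }
    // Render with interpolation factor accumulator / FIXED_STEP_US.
}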

clock() is very inaccurate (I assume SFML is using this).

And you can’t do math with variables that are constantly changing.

Calculate everything up-front and then go.

L. Spiro


#1: You can’t be requesting the current time every time you need a time value. Think about how it affects your inner loop when your time function returns a different value every time it is called. My article gives one example of the problems this can cause, as quoted.
#2: As I mentioned before, clock() is forbidden as a game timer. Use QueryPerformanceCounter() and friend.
#3: Notice that GetRealTime() is called only once per frame. The time since the last update is then calculated and all the timer counters are adjusted by that much. Never get the real time more than once per frame. It’s changing, and nothing related to time should be changing until your next update (actually this is not quite true but I won’t confuse you with the details for now, especially not until you have made a game-suitable time class which is instance-based—never work with global time objects).

#1 and #3: I understand the reasoning for this and I will be changing that in my code. I liked the example you gave on your website about this. (I didn't implement it at the time of writing that code because I didn't think it would impact my testing so much; I was just trying to get the loop going.)


#2: I looked up sf::Clock (well, more like asked on the SFML forums) and it seems sf::Clock does use QueryPerformanceCounter() on Windows and the equivalents on Mac and Linux (so it's a cross-platform clock). That being said, I should be able to use it just fine then, as long as I call getElapsedTime() once per frame and store it to a variable, correct?


How would I fix the overflow of next_game_tick? I would think I'd have to reset it somehow so it doesn't overflow but keep it in a way that it's still correctly getting the next game tick which brings me to another question. Wouldn't gameTime.getElapsedTime().asMilliseconds() eventually wrap around as well? (I think I understand the fixed time step and how it works, now it's just these 2 questions and I think I'll have it working.)

I appreciate your time everyone, I'm almost there >.<

EDIT:

So this is how the new loop looks

 
const int TICKS_PER_SECOND = 25;
const int SKIP_TICKS = 1000000 / TICKS_PER_SECOND;   // microseconds per tick, to match asMicroseconds() below
const int MAX_FRAMESKIP = 5;
sf::Clock gameTime;
// First tick is at 0, the next tick is at SKIP_TICKS
long int next_game_tick = 0;
int loops;
float interpolation;

bool game_is_running = true;
while( game_is_running ) {
    // Sample the clock once per frame and reuse the value everywhere.
    sf::Time curTime = gameTime.getElapsedTime();
    loops = 0;
    while( curTime.asMicroseconds() > next_game_tick && loops < MAX_FRAMESKIP ) {
        std::cout << "Updating" << std::endl;

        next_game_tick += SKIP_TICKS;
        loops++;
    }

    interpolation = float( curTime.asMicroseconds() + SKIP_TICKS - next_game_tick )
                    / float( SKIP_TICKS );
    std::cout << interpolation << std::endl;
}
return 0;

#2: I looked up sf::Clock (well, more like asked on the SFML forums) and it seems sf::Clock does use QueryPerformanceCounter() on Windows and the equivalents on Mac and Linux (so it's a cross-platform clock). That being said, I should be able to use it just fine then, as long as I call getElapsedTime() once per frame and store it to a variable, correct?

Yes.

How would I fix the overflow of next_game_tick?

Firstly, don’t make signed what can never be negative. You lose an entire bit of positive range.
Making it an unsigned 64-bit integer is a start.

Secondly, if it is an unsigned 64-bit integer, starts at 0, and counts in microseconds since the game has started, it would take 584,542.046091 years (not accounting for the slowing of Earth’s rotation) to overflow. You don’t need to worry about it unless you are not following all 3 of the above conditions.

Wouldn't gameTime.getElapsedTime().asMilliseconds() eventually wrap around as well?

I think you mean gameTime.getElapsedTime().asMicroseconds().
If it is starting from 0 and counting microseconds since the game started, no, it will not overflow within your lifetime.
If it is returning the real time in microseconds since epoch, no, it will not overflow within your lifetime.
If it is only used in a subtraction operation to determine how much time has passed since the last time it was called, overflow will be handled automatically by unsigned modular arithmetic: the subtraction still yields the correct delta even across a wrap.
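That last case can be shown in a few lines (a sketch; the values are contrived to straddle a 32-bit wrap):

#include <cassert>
#include <cstdint>

int main() {
    std::uint32_t last = 0xFFFFFFF0u;   // tick count shortly before the wrap
    std::uint32_t now  = 0x00000010u;   // tick count shortly after the wrap

    // Unsigned subtraction is modular, so the delta is still correct.
    std::uint32_t elapsed = now - last; // 0x20 == 32 ticks
    assert( elapsed == 32u );
    return 0;
}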


L. Spiro

