
Fixed update in game loop


The following code is part of my game loop (based on the deWiTTERS Game Loop).

// Calculate the elapsed time.
const double delta_time = m_timer->GetDeltaTime();

// Perform the fixed delta time updates of the current scene.
if (m_fixed_delta_time) {
    fixed_time_budget += delta_time;
    while (fixed_time_budget >= m_fixed_delta_time) {
        FixedUpdate();
        fixed_time_budget -= m_fixed_delta_time;
    }
}
else {
    FixedUpdate();
}

// Perform the non-fixed delta time updates of the current scene.
Update(delta_time);

// Render the current scene.
Render();

But I wonder: how could this game loop ever work? It will surely enter the so-called Spiral of Death. Let's say you use a fixed time step of 0.5ms and require 4ms for rendering. Every frame, the render time is added to your time budget, most of that budget is then spent on FixedUpdates, after which your rendering has no time left any more. The result is a continuous increase of your elapsed frame time (and thus of your time budget and the number of FixedUpdates per frame).

You could of course clamp your time budget, but then the number of FixedUpdates per frame oscillates over time, roughly like |sin(t)|, which does not look very nice for a game loop IMHO.
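For illustration, here is a minimal sketch of such a clamp, reusing the member names from the loop above (max_steps is a made-up tuning constant, not part of the original code):

// Cap the number of FixedUpdates per frame so that one slow frame cannot
// snowball into ever more catch-up work on the next frame.
fixed_time_budget += delta_time;
const int max_steps = 8; // hypothetical cap, tune per game
int steps = 0;
while (fixed_time_budget >= m_fixed_delta_time && steps < max_steps) {
    FixedUpdate();
    fixed_time_budget -= m_fixed_delta_time;
    ++steps;
}
if (steps == max_steps) {
    // Drop the remaining budget: the simulation falls behind wall-clock
    // time instead of stalling the renderer.
    fixed_time_budget = 0.0;
}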

Edited by matt77hias


I don't understand your reasoning. Also, do you mean each FixedUpdate lasts for 0.5ms, or that it is called every 0.5ms?

Could you write down 2-3 iterations of the game loop with time stamps? That should give us a better understanding of what you mean.

 

48 minutes ago, ApochPiQ said:

an 8th of the budget it requires

I don't know if I'm reading it correctly either, but I'm thinking "Render()" takes 4ms, and that "FixedUpdate" is completely disconnected from the 4ms figure.

41 minutes ago, ApochPiQ said:

Maybe I'm reading too fast, but how would any game loop be able to behave gracefully if you give it an 8th of the budget it requires to do meaningful work for a tick?

I picked some values. Maybe I am biased as a graphics programmer who wants to assign the most time to rendering-related tasks. But isn't rendering by far the most time-consuming task for realistic-looking, open-world games? Or should one really balance updating and rendering time?


It has nothing to do with how much time rendering takes versus updating. It's a basic problem of arithmetic. If your budget is 0.5ms to finish a frame, you're going to get bad results with any code that takes 4ms instead.

I agree that writing down a sequence of events with timestamps would benefit everyone :-)

11 hours ago, Lactose said:

I don't understand your reasoning. Also, do you mean each FixedUpdate lasts for 0.5ms, or that it is called every 0.5ms?

Could you write down 2-3 iterations of the game loop with time stamps? That should give us a better understanding of what you mean.

 

I don't know if I'm reading it correctly either, but I'm thinking "Render()" takes 4ms, and that "FixedUpdate" is completely disconnected from the 4ms figure.

:o Ok, I didn't think of that yet. I falsely assumed that FixedUpdate takes as long as the fixed delta time period. Render takes 4ms, FixedUpdate takes 0.5ms, and the fixed delta time period must be much larger than 0.5ms, such as the inverse of a frequency of 60 invocations per second (about 16.7ms). Therefore, you don't need to exclude rendering time or any other times.

So basically you specify how many times per second (or whatever time unit you use) you want to invoke FixedUpdate, or alternatively you specify the frequency of FixedUpdates. This update frequency is maintained in the presence of arbitrary frame frequencies (i.e., FPS). Problems only appear when you cannot maintain your update frequency in practice, which results in your updates starting to lag behind.
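In code, that boils down to deriving the period from the chosen frequency (the names below are illustrative, not taken from the engine above):

// Choose the FixedUpdate frequency; the fixed delta time is its inverse.
const double fixed_update_frequency = 60.0;                   // invocations per second
const double fixed_delta_time = 1.0 / fixed_update_frequency; // ~16.7ms, not 0.5ms

// On average, FixedUpdate is then invoked fixed_update_frequency times per
// second, regardless of how many frames per second are rendered.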

My bad.

Edited by matt77hias


I don't think the goal should be to prioritize the time budget between rendering and logic, so much as to make sure that neither one becomes a bottleneck. The point of the fixed time step is not to say "I am budgeting this much time to each frame" for performance reasons, but rather to keep each step of your simulation consistent, regardless of how much time you've given it.
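As a toy illustration of that consistency (values are made up): with a constant dt, the integration below produces the same trajectory on every run and on every machine, which would not hold if dt varied with the frame rate.

// Integrate a falling body with a constant time step. Because dt never
// changes, the resulting trajectory is identical on every run.
double position = 0.0;
double velocity = 0.0;
const double dt = 1.0 / 60.0; // fixed simulation step in seconds
for (int step = 0; step < 60; ++step) {
    velocity += -9.81 * dt; // apply gravity
    position += velocity * dt;
}
// position now holds the displacement after exactly one simulated second.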


No, that is not necessarily the spiral of death.  Most games require far less time to perform an update than the span of time that update represents.  Your numbers show this quite well, if you think about it.

21 hours ago, matt77hias said:

Let's say you use a fixed time step of 0.5ms and require 4ms for rendering. Every frame, the render time is added to your time budget, most of that budget is then spent on FixedUpdates, after which your rendering has no time left any more. The result is a continuous increase of your elapsed frame time (and thus of your time budget and the number of FixedUpdates per frame).

In your example the fixed update step is 0.5ms, and the loop runs as many fixed updates as needed to catch up.  Rendering takes 4ms.

Because rendering takes at least 4ms you will always need at least 4 simulation steps for every graphical frame.  But in practice you've probably got much longer than that, especially if you're using vsync as a frame rate limiter, which most games do.

On a 120Hz screen you've got about 8.3 milliseconds per frame, so you'll probably need to run 16 or 17 fixed updates, and they must run within 4 milliseconds.

On a 75Hz screen you've got about 13.3 milliseconds per frame, so you'll probably need to run 26 or 27 fixed updates, and they must run within 9 milliseconds.

On a 60Hz screen you've got about 16.6 milliseconds per frame, so you'll probably need to run 33 or 34 fixed updates, and they must run within 12 milliseconds.

In these scenarios, it is only a problem if running that number of updates takes longer than the allotted time.  The worst case above is the 120Hz screen, where an update processing step needs to run faster than 0.23 milliseconds; if it takes longer, you'll drop a frame and be running at 60Hz.  At 75Hz the update processing step must finish within 0.33 milliseconds before you drop a frame. At 60Hz the update processing step must finish within 0.35 milliseconds.

Your frame rate will slow down, but as long as your simulation can run updates fast enough it should be fine.  If it drops to 30 frames per second then a 0.5ms processing step has more time, up to 0.44 milliseconds.  If it drops to 15 frames per second then the 0.5ms processing step has up to about 0.47 milliseconds per pass to run.

As long as your simulator can run a fixed update in less than that time, the simulation is fine. You ONLY enter the "spiral of death" if computing a fixed update consistently takes longer than the per-step times above. Since simulation steps are typically fast, it usually isn't a problem.
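For reference, here is the per-step arithmetic above written out as a small program (the 4ms render cost and 0.5ms step are the figures from this thread; the step counts are worst-case ceilings, so they can land one above the rounded figures quoted):

#include <cmath>
#include <cstdio>

int main() {
    const double render_ms = 4.0; // render cost from the example
    const double step_ms   = 0.5; // fixed time step from the example
    const double rates[]   = { 120.0, 75.0, 60.0, 30.0, 15.0 };

    for (const double hz : rates) {
        const double frame_ms  = 1000.0 / hz; // frame budget
        const int    steps     = static_cast<int>(std::ceil(frame_ms / step_ms));
        const double update_ms = frame_ms - render_ms; // time left after rendering
        std::printf("%6.1f Hz: %3d steps, %.2f ms budget per step\n",
                    hz, steps, update_ms / steps);
    }
    return 0;
}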

 

If computing a fixed time step takes longer than the times above, and you can't make it faster, then it may be necessary to change the simulation time step. Usually the only issue with that is that the game feels less responsive, feels slower. Many of the older RTS games had a simulation rate of only 4 updates per second, even though their graphics and animations ran at a much higher rate.

Even that may not be much of a problem. It all depends on the game.

2 hours ago, frob said:

Because rendering takes at least 4ms you will always need at least 4 simulation steps for every graphical frame.

Just a small correction to this part -- since each simulation step is listed as being 0.5ms, we need at least 8 simulation steps for every graphical frame.


Oh, right. Forgot to double that one.

Even so, as long as processing a timestep is faster than the wall-clock time that timestep represents, all is well.  The faster the better; as the processing time approaches the simulated time, your framerate drops.  But as long as a step takes even slightly less time to process than the time it represents, you don't enter the Spiral of Death.

 


Actual simulation time is ultimately unpredictable though, especially if there's the occasional spike, so you really need a way to handle this at runtime.

What you'll want to do is keep track of the error between the simulation time and the wall-clock time, and either "skip" the simulation ahead or add a few extra updates here and there, depending on actual measurements of the simulation and how it relates to the timestep (maybe keeping a small history to deduce short-term trends).
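As a rough sketch, "skipping ahead" can be as simple as clamping the accumulated budget (max_lag is a made-up threshold; the other names reuse those from the loop at the top of the thread):

// If the accumulator grows past a few steps' worth of time, the simulation
// cannot catch up this frame; discard the excess instead of looping ever
// longer, i.e. "skip" the simulation ahead to the present.
const double max_lag = 4.0 * m_fixed_delta_time; // hypothetical threshold
fixed_time_budget += delta_time;
if (fixed_time_budget > max_lag) {
    fixed_time_budget = max_lag; // drop unrecoverable time
}
while (fixed_time_budget >= m_fixed_delta_time) {
    FixedUpdate();
    fixed_time_budget -= m_fixed_delta_time;
}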

