# Framerate independent friction


## Recommended Posts

I noticed that my game runs a bit differently when running at 60 FPS on my PC and 30 FPS on my phone, and I traced it down to the friction code. I'm applying friction like this:

newVelocity = oldVelocity - oldVelocity * friction * elapsedTime

The results are not consistent between 60 Hz and 30 Hz. But then I thought about how continuous compound interest is calculated, and I came up with this:

newVelocity = oldVelocity * exp(-friction * elapsedTime)

It works perfectly, at least on paper. Has anyone else ever used an approach like this? Or is there a better way to do it?

Edit: I should mention that the game is using a fixed time step. At 30 Hz it's 1/30 seconds and at 60 Hz it's 1/60. Edited by BradDaBug
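To see the difference concretely, here's a quick numeric sketch (the velocity and friction values are illustrative, not from the thread) comparing the linear per-step friction with the exponential form over one simulated second:

```python
import math

def linear_friction(v, friction, dt, steps):
    # naive per-step damping: v -= v * friction * dt
    for _ in range(steps):
        v -= v * friction * dt
    return v

def exp_friction(v, friction, dt, steps):
    # compound-interest style decay: v *= exp(-friction * dt)
    for _ in range(steps):
        v *= math.exp(-friction * dt)
    return v

v0, friction = 10.0, 1.5

# one second simulated at 30 Hz and at 60 Hz
lin_30 = linear_friction(v0, friction, 1.0 / 30.0, 30)
lin_60 = linear_friction(v0, friction, 1.0 / 60.0, 60)
exp_30 = exp_friction(v0, friction, 1.0 / 30.0, 30)
exp_60 = exp_friction(v0, friction, 1.0 / 60.0, 60)

print(lin_30, lin_60)  # differ noticeably between framerates
print(exp_30, exp_60)  # agree: both equal v0 * exp(-friction * 1.0)
```

The exponential form gives the same answer at any step size because exp(-f*dt) compounded n times is exactly exp(-f*n*dt).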

##### Share on other sites
I think the better way to do it is to use a fixed time step for physics, independent of how fast the screen updates.

http://gafferongames.com/game-physics/fix-your-timestep/

##### Share on other sites
Personally I think fixed timestep physics is good, but some accommodation for a varying time delta is nice because schedulers are never 100% precise.

##### Share on other sites

Personally I think fixed timestep physics is good, but some accommodation for a varying time delta is nice because schedulers are never 100% precise.

What do schedulers have to do with a fixed timestep?

If you use a fixed timestep you don't pass the measured elapsed time to the physics calculations. Instead of:

updatePhysics(elapsedTime);

you do:

while (elapsedTime > timeStepSize) {
    updatePhysics(timeStepSize);
    elapsedTime -= timeStepSize;
}
where timeStepSize is a constant. The physics calculations are deterministic because you always give them the same delta to work with; you just call the physics update routine multiple times per frame if needed (or not at all, if your framerate is higher than the physics update rate). Edited by SimonForsman
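As a runnable sketch of that accumulator pattern (the names here are illustrative):

```python
FIXED_DT = 1.0 / 60.0  # physics always advances by exactly this much

def run_frame(accumulator, frame_time, update_physics):
    # Fold the real elapsed time into the accumulator, then consume it
    # in fixed-size physics steps; the remainder carries to the next frame.
    accumulator += frame_time
    while accumulator >= FIXED_DT:
        update_physics(FIXED_DT)  # always the same delta
        accumulator -= FIXED_DT
    return accumulator  # leftover time, always < FIXED_DT

# a frame that took 25 ms triggers one physics step and carries ~8.3 ms
steps = []
leftover = run_frame(0.0, 0.025, steps.append)
print(len(steps), leftover)
```

The leftover time is the crucial detail: it is what makes the simulation advance at the same average rate regardless of how unevenly the frames arrive.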

##### Share on other sites

It works perfectly, at least on paper. Has anyone else ever used an approach like this? Or is there a better way to do it?

It's the correct way of doing it... You are essentially multiplying by (1-friction) every time step, so over multiple time steps the "correction" is (1-friction)^n and not ~~n*(1-friction)~~ (1-n*friction), as you would get with the first version when increasing the timestep n times.

Edit: n*(1-friction) would be totally off Edited by japro
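A quick check of that compounding argument with made-up numbers (friction 0.9 per second, 60 steps of 1/60 s):

```python
friction, dt, n = 0.9, 1.0 / 60.0, 60

# what n small steps of the linear rule actually do: they compound
compounded = (1.0 - friction * dt) ** n

# the naive linear extrapolation of a single step, scaled by n
linear = 1.0 - n * friction * dt

print(compounded, linear)  # roughly 0.404 vs 0.100
```

The gap between the two is exactly the framerate dependence the original poster observed: doubling the step count does not simply double the damping.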

##### Share on other sites

what do schedulers have to do with a fixed timestep ?

Say for example your physics engine runs at 50Hz, so 0.02s between executing physics. Due to the nature of a multitasking OS it won't be precisely 0.02s every time. One time it might be 0.018, another time 0.023. The difference may not be visible... or it may. Depending upon the nature of the physics and the speed of the objects involved. Also ignoring varying delta would be bad if there was a requirement for repeatability, such as in MMO games.

Using some game platforms the timing variation may be minimal, but some systems that I've tinkered with can have variations of up to 20%.

##### Share on other sites

what do schedulers have to do with a fixed timestep ?

Say for example your physics engine runs at 50Hz, so 0.02s between executing physics. Due to the nature of a multitasking OS it won't be precisely 0.02s every time. One time it might be 0.018, another time 0.023. The difference may not be visible... or it may. Depending upon the nature of the physics and the speed of the objects involved. Also ignoring varying delta would be bad if there was a requirement for repeatability, such as in MMO games.

Using some game platforms the timing variation may be minimal, but some systems that I've tinkered with can have variations of up to 20%.

When you use a fixed timestep the delta is constant; you don't pass the actual elapsed time to the physics subsystem. If you run the physics at 50 Hz with a fixed timestep you always pass 0.02 as the delta. If the elapsed time is 0.023 you still pass 0.02 and carry the 0.003 you have left, so the next update triggers after an additional 0.017 s has passed. Therefore the scheduler has no impact on the simulation. (You can even run the simulation without measuring the actual elapsed time if the result doesn't have to be displayed in real time - if you're making a movie, for example.)

If you need to compensate for a variable delta then you're not using a fixed timestep, you're using a variable timestep. Edited by SimonForsman

##### Share on other sites

Say for example your physics engine runs at 50Hz, so 0.02s between executing physics. Due to the nature of a multitasking OS it won't be precisely 0.02s every time. One time it might be 0.018, another time 0.023. The difference may not be visible... or it may. Depending upon the nature of the physics and the speed of the objects involved. Also ignoring varying delta would be bad if there was a requirement for repeatability, such as in MMO games.

Using some game platforms the timing variation may be minimal, but some systems that I've tinkered with can have variations of up to 20%.

Pseudo-code for which the scheduler doesn't matter:
float timestep = 1.0f / 30.0f; // 30 updates per second
Clock clock;
clock.start();
while (runMainLoop)
{
    if (clock.time() >= timestep)
    {
        clock.reset();
        updateGame(timestep); // use the *fixed* timestep
    }
    renderGame();
}

##### Share on other sites
@Cornstalks: Very true. I wonder whether it might appear inconsistent or stuttery to a player when the amount of physics performed is constant but the real time it occurs in is not, but I really have no proof either way.

##### Share on other sites

@Cornstalks: Very true. I wonder whether it might appear inconsistent or stuttery to a player when the amount of physics performed is constant but the real time it occurs in is not, but I really have no proof either way.

True. In a well-done game, the rendering is often interpolated. So you don't just naively call renderGame(); you give it the system time as well, and it interpolates the rendering between the current system time and the game time.
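A minimal sketch of that interpolation (hypothetical names; the blend factor is how far the accumulated time has progressed toward the next physics step):

```python
FIXED_DT = 1.0 / 60.0

def render_position(prev_pos, curr_pos, accumulator):
    # Blend between the last two physics states; accumulator is the
    # leftover time (< FIXED_DT) carried by the fixed-step loop.
    alpha = accumulator / FIXED_DT  # in [0, 1)
    return prev_pos + (curr_pos - prev_pos) * alpha

# exactly halfway into the next physics step -> halfway between states
midpoint = render_position(0.0, 2.0, 0.5 * FIXED_DT)
print(midpoint)  # 1.0
```

Note that this renders slightly in the past (up to one physics step behind), which is the usual trade-off of interpolation versus extrapolation.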

##### Share on other sites
You could read the link I posted, which explains exactly what fixing the timestep means, including interpolation. Just saying.

##### Share on other sites

You could read the link I posted, which explains exactly what fixing the timestep means, including interpolation. Just saying.

I figured he didn't read it, so I summarized it here.

@jefferytitan: Everything I've said in this thread comes from what alvaro linked to...

##### Share on other sites

You could read the link I posted, which explains exactly what fixing the timestep means, including interpolation. Just saying.

Nice article. I wish I'd had time to read it earlier. Work is inconsiderate like that.

##### Share on other sites
Your exponential form will give you timestep independent behaviour.

However, it's not correct for (what's usually called) friction, since the dry frictional force is independent of velocity. A force or acceleration proportional to the velocity is normally called damping (e.g. a damped spring; your equation is exponential damping). Aerodynamic drag results in a force/acceleration approximately proportional to the velocity squared.

If you want more realistic friction - e.g. for one object sliding on top of another, then calculate:

- The force normal to the contact plane, Fn

- The maximum force that can be generated by friction Fmax = Fn * frictionCoefficient

- The force that would bring the relative tangential velocity v to zero in the following timestep dt: F0 = m * v / dt if your object is a point mass, or if you're applying the frictional force at its centre of mass, assuming you're using Euler integration.

Then you apply minimum(Fmax, F0) - though you need to get the signs right! Comparing Fmax with F0 means that the friction won't cause oscillation when the velocity is very small.
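For a point mass with Euler integration, those steps might look like this (an illustrative 1D sketch with hypothetical names; signs are handled explicitly):

```python
def friction_force(normal_force, friction_coeff, mass, tangential_vel, dt):
    # Coulomb limit: the most force friction can generate
    f_max = normal_force * friction_coeff
    # force that would bring the tangential velocity to zero in one step
    f_stop = mass * abs(tangential_vel) / dt
    magnitude = min(f_max, f_stop)
    # friction opposes the direction of motion
    return -magnitude if tangential_vel > 0.0 else magnitude

# fast slide: clamped to the Coulomb maximum (10 N normal, mu = 0.5)
fast = friction_force(10.0, 0.5, 1.0, 3.0, 1.0 / 60.0)
# nearly stopped: clamped to the stopping force, so no oscillation
slow = friction_force(10.0, 0.5, 1.0, 0.01, 1.0 / 60.0)
print(fast, slow)  # -5.0 and about -0.6
```

The min() against the stopping force is what prevents the jitter around zero velocity that a raw Coulomb force would produce.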

##### Share on other sites
Here is more information on fixing your time step that may be of use, and may save you from falling into a few potholes.
http://lspiroengine.com/?p=378

Fixing your time step will solve all of your problems, but only if done correctly.

L. Spiro

##### Share on other sites

Fixing your time step will solve all of your problems, but only if done correctly.

I disagree. If your code is written correctly, and implements good approximations to the real world, then it will handle variable timesteps, to within the approximations. It won't behave exactly the same with different-sized timesteps, but the difference will just depend on how good your approximations are. It might be that you find the frame-to-frame variation of the approximation is just too much to put up with, in which case you can, if you wish, decide to use a fixed timestep and run multiple physics steps per render frame*. However, I'd maintain that it's better to start out with a physics system that is capable of handling a variable timestep (because that will likely expose incorrect implementations - like the OP saw), than to start with a system that is locked to one timestep (because that will cover up incorrect implementations).

Also, fixing the timestep won't solve the problem, if the problem is that the friction model is wrong!

Some games use a fixed timestep, and some a variable timestep. There are advantages and disadvantages to both choices.

* Another option might be: If you find your physics goes bad when dt > maxDt, you could split your physics update into N iterations such that dt/N < maxDt (choose N as small as possible!). This will result in the physics sim staying within the good-approximation range, without having to do interpolation between physics frames, or having the physics update time != render update time.

##### Share on other sites
A variable timestep introduces stability issues (as in numerical stability). Especially if there is any sort of collision involved...

##### Share on other sites

A variable timestep introduces stability issues (as in numerical stability). Especially if there is any sort of collision involved...

Only if you're solving collisions with spring-dampers (but nobody really does that these days, do they?). In a rigid body simulator, handling collisions with impulses, I don't see any reason why a variable timestep would lead to numerical instability*, as the collisions get handled outside of the integration of the equations of motion (because collisions introduce a discontinuity, and the response is independent of timestep).

Large timesteps can lead to jitter/approximation problems, but that's not numerical instability, or the same as variable timesteps.

Explicit spring/dampers are unstable at large timesteps (depending on the spring/damping values), but that's not the same as saying they're unstable with variable timesteps. Implicit formulations should be stable at large timesteps.

Fixed and variable timesteps have both got advantages and disadvantages - I'm just saying it's best to understand them rather than saying one is always better than the other, or saying that one always leads to problems, or that one will solve all your problems.

* Small timesteps can lead to problems if using Baumgarte stabilisation, where solver errors are corrected by introducing a bias that looks like error/timestep. This can be especially bad with a variable timestep: if a large timestep on one frame results in poor solver convergence, which is then "corrected" on the next frame with a small timestep, leading to excessive correction velocities. However, there are ways to fix this.

##### Share on other sites
I don't think it has been mentioned, but fixed timesteps allow easier reproduction of the results of the simulation, which can be really helpful in debugging. I imagine it also makes synchronization in multi-player games much easier, since there is a common discretization of time, although I don't have any real experience with that.

I can't think of any disadvantages of fixed timesteps. You mentioned something about using a fixed timestep masking problems, but I can't see that as a real problem. MrRowl, do you care to provide other disadvantages? Edited by alvaro

##### Share on other sites

fixed timesteps allow easier reproduction of the results of the simulation

Indeed!

I can't think of any disadvantages of fixed timesteps. You mentioned something about using a fixed timestep masking problems, but I can't see that as a real problem. MrRowl, do you care to provide other disadvantages?

Here are some:

1. If you use a fixed physics timestep but a variable render frame time (or a render frame time that isn't a multiple of the physics timestep), but don't interpolate the physics results, then a smoothly moving physics object will not move smoothly across the screen, as sometimes it will experience N updates, and sometimes N+1 (for a fixed render frame time).

2. If you do interpolate between physics frames

2.1 that's an additional cost (which might be significant if you've got a lot of objects),

2.2 Also there's some "uncertainty" about where the object is - so when a user shoots an object, where is the impulse applied?

2.3 Debugging might be fun if your debug points etc don't match up with the interpolated object positions

2.4 If the ratio between the physics timestep and render timestep is such that sometimes you run 2 physics steps, and sometimes 1 (for example), and the physics simulation time is significant, then the frame rate can jitter from frame to frame (a lower, but rock steady, framerate is likely better than a high but variable framerate). If there's a spike in the render frame time (e.g. due to disk access), then a fixed physics timestep will result in a spike in the next frame time etc.

3 If the game allows running in slow motion (bullet time) you need to handle at least two, possibly very different timesteps, and the transition between them, otherwise either (a) you always update at the normal timestep, and the effects of interpolation will become obvious, or (b) you always run at the reduced timestep, with a cost that's excessive when running at the normal rate.

In my current home project (a flight sim for mobile devices), a smaller timestep always results in better physics quality. I just clamp the frame time to a max value (something like 0.1s), and always use N physics steps per frame. The user can choose N as a simulation quality setting - it's always at least 3 (or so). This way I get a constant physics simulation cost, so a really steady frame rate (depending on other stuff!), and I can check that everything will at least remain stable at the maximum physics step time (0.033s). Faster devices get slightly better physics quality. This was originally my quick placeholder before moving to fixed physics timestep and interpolation (which I've done before), but actually I think it works great so don't intend to change!
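The scheme described above can be sketched like this (the names and the 0.1 s cap are illustrative, matching the numbers in the paragraph):

```python
MAX_FRAME_TIME = 0.1  # clamp spikes (e.g. a disk-access stall)

def update(frame_time, n_substeps, step_fn):
    # Clamp the frame time, then advance physics in N equal substeps,
    # so the per-step dt never exceeds MAX_FRAME_TIME / n_substeps.
    frame_time = min(frame_time, MAX_FRAME_TIME)
    dt = frame_time / n_substeps
    for _ in range(n_substeps):
        step_fn(dt)

# a 0.25 s spike is clamped to 0.1 s and split into 3 steps of ~0.033 s
dts = []
update(0.25, 3, dts.append)
print(dts)
```

Unlike a fixed step with an accumulator, the substep size here varies from frame to frame, but it is always bounded above, which is what keeps the integration inside its good-approximation range.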

I know some big commercial games use fixed, and some variable, physics timesteps. There are problems to solve whichever way you go!

##### Share on other sites

newVelocity = oldVelocity * exp(-friction * elapsedTime)

Hi guys, just wanted to take part in this interesting discussion and get some feedback about *not* using the exp function and using pow instead.
Let me explain why: I'm not sure whether approximating via Euler's number introduces some error that could accumulate over time; it would be barely noticeable anyway. But the thing that itches me about exp is that you need to work out yet another factor.
As an example, assuming we are working with a fixed timestep, let's suppose that your first version, at 60 Hz, is good with a friction of 0.975. Then the exp factor should be ~1.5 so that:

exp(-1.5 * (1/60)) = 0.975309 and exp(-1.5 * (1/30)) = 0.951229

By using pow instead, you are sure you are not introducing any "error" and you don't need to "search" for the correct factor, since:

pow(0.975, 60*(1/60)) = 0.975 and pow(0.975, 60*(1/30)) = 0.950625

I tested both in my engine and they behave and look practically the same, so it's basically a non-issue. But visual results aside, what do you all think from a mathematical standpoint? Edited by dud3z

##### Share on other sites
Considering x^y = exp(ln(x)*y), using exp or pow is equivalent anyway (up to the "correction factor")... Edited by japro
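Checking that identity against dud3z's numbers (a per-step factor of 0.975 tuned at 60 Hz):

```python
import math

base_factor = 0.975   # per-step damping tuned at 60 Hz
base_dt = 1.0 / 60.0

def damping_pow(dt):
    return base_factor ** (dt / base_dt)

def damping_exp(dt):
    # x^y = exp(ln(x) * y), with y = dt / base_dt
    return math.exp(math.log(base_factor) * (dt / base_dt))

for dt in (1.0 / 60.0, 1.0 / 30.0):
    print(damping_pow(dt), damping_exp(dt))  # agree to rounding error
```

So the two formulations differ only in how the tuning constant is expressed: 0.975 per step versus a rate of -ln(0.975)/base_dt per second.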

##### Share on other sites

Here are some:

1. If you use a fixed physics timestep but a variable render frame time (or a render frame time that isn't a multiple of the physics timestep), but don't interpolate the physics results, then a smoothly moving physics object will not move smoothly across the screen, as sometimes it will experience N updates, and sometimes N+1 (for a fixed render frame time).

Then interpolate. This isn’t a downside. It is just one more thing you have to do.

2.1 that's an additional cost (which might be significant if you've got a lot of objects),

The cost is far less than performing physics every frame. This should not be on the list because it is not a downside compared to the alternative. Performing full physics on every object every frame is a significant cost even without having a lot of objects.
Performing physics only once every 3 or 4 frames with interpolation of a single matrix every frame always wins.
This is actually a pro for fixed steps because they allow you to avoid waste. Performing physics with very small time steps is a waste of CPU resources for most simulations, since the improvement becomes exponentially smaller as the time step goes down.

2.2 Also there's some "uncertainty" about where the object is - so when a user shoots an object, where is the impulse applied?

I don’t know what you mean.
Interpolating objects is purely for graphical smoothness. The physical simulation is always based off their actual locations, not the interpolated locations.
There shouldn’t be uncertainty if this is understood and followed. The bullet is spawned based off the gun’s physical location. It hits the target where the physics simulation says it hits during one of those fixed-time updates, based on the physical positions of the bullet and the target.
There shouldn’t be any confusion there.

The interpolated matrix is for nothing but graphics and camera locks—if a camera follows an object it needs to follow the interpolated positions to avoid jitter, but this is still ultimately a graphics issue.

2.3 Debugging might be fun if your debug points etc don't match up with the interpolated object positions

This isn’t an issue. Interpolated positions are for graphics. It’s a single matrix that you send to the GPU. It is easily ignored during debugging sessions, where we will be focusing on nothing but the actual current physical location of the objects in question.

2.4 If the ratio between the physics timestep and render timestep is such that sometimes you run 2 physics steps, and sometimes 1 (for example), and the physics simulation time is significant, then the frame rate can jitter from frame to frame (a lower, but rock steady, framerate is likely better than a high but variable framerate). If there's a spike in the render frame time (e.g. due to disk access), then a fixed physics timestep will result in a spike in the next frame time etc.

This one is true, but not a situation that is handled better using a variable time-step. A fixed step can usually stutter back up to speed while maintaining simulation stability and consistency. A variable step will result in objects resting on tables suddenly jumping into the air, stacks of things suddenly exploding, and objects falling through floors into negative infinity.
A stutter in the framerate is an inconvenience to the player. An inconsistent simulation is a game-breaker.

Either way, fixed or variable, this actually hints at a deeper underlying problem, and the game should be redesigned to avoid potential simulations it can’t handle.

3 If the game allows running in slow motion (bullet time) you need to handle at least two, possibly very different timesteps, and the transition between them, otherwise either (a) you always update at the normal timestep, and the effects of interpolation will become obvious, or (b) you always run at the reduced timestep, with a cost that's excessive when running at the normal rate.

This isn’t how fast/slow motion is handled, and both fixed and variable steps handle this equally.
If you want your game to run at 1.5 speed, the “time since last update” is simply multiplied by 1.5 and accumulated, moving the game ahead that much faster.
The fixed-step simulation simply believes that more time has passed and thus performs updates 1.5 times more often. It still passes the same number of microseconds to the logical update/physics simulation, it is just doing it more often.

Quite elegant, yet still stable.
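That time-scaling idea can be sketched like this (illustrative names; note the fixed step size itself never changes, only how fast time accumulates):

```python
FIXED_DT = 1.0 / 60.0

def scaled_steps(real_elapsed, time_scale, accumulator=0.0):
    # Scale the *measured* time before accumulating it; the physics
    # step size stays fixed, we just take more (or fewer) steps.
    accumulator += real_elapsed * time_scale
    steps = 0
    while accumulator >= FIXED_DT:
        steps += 1
        accumulator -= FIXED_DT
    return steps, accumulator

# a 50 ms frame at 1.5x speed advances physics by 75 ms -> 4 fixed steps
steps, leftover = scaled_steps(0.05, 1.5)
print(steps, leftover)
```

The same mechanism slows the game down: at a time scale of 0.5 the accumulator fills half as fast, so steps simply fire half as often.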

This was originally my quick placeholder before moving to fixed physics timestep and interpolation (which I've done before), but actually I think it works great so don't intend to change!

Then prepare for the consequences. Fixed stepping is always better than variable stepping. The only place in which variable stepping is acceptable is in small hobby projects that won’t see a lot of use by a lot of people.

I know some big commerical games use fixed, and some variable physics timesteps.

I would like to see some citations.
Given the cons of variable stepping, it is unlikely any professional game released since the '90s is using them.

Here are the primary cons for variable steps:
#1: Physics simulation explosions.
#2: Inconsistent physics simulations leading to:
* #2.1: Players over networks seeing different things.
* #2.2: Missed collisions, things passing through other things, etc. This certainly can happen with fixed steps, but at least there it happens every time and is easy to debug/fix. The same fix for variable steps doesn’t work as well since the steps can be extra long.
#3: The simulation is done every frame, which is wasteful and decreases the FPS significantly.

It basically means that every game using physics or multiplayer over networks is using a fixed step.
Every game using CryEngine *, Unreal Engine *, id Tech Engine *, or Unity 3D is using a fixed step. That covers about every game out there except console RPG’s, but I can personally vouch for Final Fantasy games, Star Ocean, End of Eternity, Valkyrie Profile, and basically everything made between tri-Ace and Square Enix.
That essentially covers every game there is.

The only pro variable steps have over fixed is that they are easy to implement.

L. Spiro Edited by L. Spiro

##### Share on other sites
Performing physics only once every 3 or 4 frames with interpolation of a single matrix every frame always wins.

Assuming the minimum update rate for a physics engine is 30Hz - you're talking about games that are running at/over 100FPS. Yes there's a class of games that can (when uncapped) run at insanely high FPS, such that there's no benefit to using corresponding small physics timesteps. However, there's a lot of computationally/graphically intensive games, perhaps on console or mobile devices, that render at between 30 and 60FPS, and expect to perform a physics update every render update.

All I'm suggesting is that if your game normally runs at 30FPS, but occasionally lags when things get busy, it's definitely cheaper to run an occasional physics step of say 1/27 seconds, than it is to incur the cost of 1.1 fixed 1/30 steps, plus interpolation. Whether or not it's a good thing to do depends also on whether you have a well behaved physics engine (it sounds like you don't!), whether you are capping the maximum physics timestep, and don't have strong constraints on exact reproducibility.

Performing physics with very small time steps is a waste of CPU resources for most simulations, since the improvement becomes exponentially smaller as the time step goes down.

It's a bit irrelevant, but that is incorrect in my experience (working a lot with character physics). The larger the timestep, the greater the errors and the inaccuracy of the approximations (e.g. linearity) in the solver; importantly, these errors all interact with each other, so the effects "multiply". I confess I haven't measured it, but my impression is that halving the timestep more than doubles the accuracy of, for example, ragdoll simulation (joint separation, joint limit violation etc. all reduce by more than a factor of two). The effect is far greater than increasing the iteration count of an iterative solver - i.e. I'd expect to see much better behaviour of a jointed chain at 60 Hz with 4 iterations than at 30 Hz with 16 iterations. Actually, in some (common) situations solvers will not converge however many iterations one uses; the only practical way to improve behaviour is to reduce the timestep.

2.2 Also there's some "uncertainty" about where the object is - so when a user shoots an object, where is the impulse applied?

I don’t know what you mean.
Interpolating objects is purely for graphical smoothness.

It's not purely for graphical smoothness, because the player interacts with what is rendered, not necessarily with what is simulated. If you use interpolation during bullet time (e.g. a 10x slowdown), you're going to see massive lag between shooting an object and seeing the interaction (in fact the lag will vary depending on where you are in the interpolation). Also, assuming you shoot when the moving object is halfway through an interpolation, where exactly do you apply the impulse to the object? The impact point that the player has chosen is not necessarily near the object at either of its two physical positions.

A variable step will result in objects resting on tables to suddenly jump up into the air, stacks of things suddenly exploding, objects falling through floors into negative infinity.

A good physics engine will not do these things if you're normally updating it at 60FPS and you occasionally give it a step of 1/50.

It sounds like you're making a case against variable steps based on either your experience with a bad physics engine, a badly set up physics engine, or maybe you're equating variable steps with large steps.

3 If the game allows running in slow motion (bullet time) you need to handle at least two, possibly very different timesteps, and the transition between them, otherwise either (a) you always update at the normal timestep, and the effects of interpolation will become obvious, or (b) you always run at the reduced timestep, with a cost that's excessive when running at the normal rate.

This isn’t how fast/slow motion is handled, and both fixed and variable steps handle this equally.
If you want your game to run at 1.5 speed...

Bullet time is used for when you want to run at, say, 0.1x normal speed. If you render at 60FPS, simulate physics at a fixed 1/30, and are using bullet time of 0.1 you'll only run physics 3 times a second. The gameplay lag with that would be unacceptable.

This was originally my quick placeholder before moving to fixed physics timestep and interpolation (which I've done before), but actually I think it works great so don't intend to change!

Then prepare for the consequences. Fixed stepping is always better than variable stepping. The only place in which variable stepping is acceptable is in small hobby projects that won’t see a lot of use by a lot of people.

All I was saying is that I don't intend to fix a problem that doesn't yet exist, especially when the solution has its own problems. So, I am prepared for the consequences, and they're good.

I'd like other people to understand the problem too, and make their choice based on reason rather than dogma. Different situations/games call for different solutions.

I know some big commercial games use fixed, and some variable physics timesteps.

I would like to see some citations.

I'm not in a position to talk about that (or rather, I am in a position where I can't!). However, I stand by what I wrote.

##### Share on other sites

A good physics engine will not do these things if you're normally updating it at 60FPS and you occasionally give it a step of 1/50.

It sounds like you're making a case against variable steps based on either your experience with a bad physics engine, a badly set up physics engine, or maybe you're equating variable steps with large steps.

or maybe you [are acknowledging that large steps are equally possible with variable-step engines]

The problem is that you aren’t. I see no reason to talk about a difference between updates ranging from 0.016 seconds to 0.02 seconds.

According to you, despite the fact that there is a well known understanding by basically every physics engine developer that fixed time steps are required for a stable physics simulation, all engines that don’t behave the same with steps at 0.02 seconds and at 1.2 seconds are poorly constructed.
According to you, your engine shows a marked improvement in simulation quality with smaller steps, but for some reason your engine is not poorly constructed even though extrapolation tells us it would not produce the same results at steps of 0.02 seconds and at 1.2 seconds.

You yourself mentioned an unexpectedly long delay between frames as one of the causes of problems with fixed time steps, even going out of your way to make sure we all understand that it can happen at any time for any reason. Yet when I mentioned what happens to variable-step engines, suddenly A: my physics engine is bad, B: my physics engine was poorly set up, or C: I equate variable stepping with large steps (because we all know that large steps never happen with variable-stepping engines).

It tells me that there is no further reason to continue this discussion.

L. Spiro Edited by L. Spiro