Framerate-independent friction

Started by
35 comments, last by Cornstalks 11 years, 11 months ago

newVelocity = oldVelocity * exp(-friction * elapsedTime)


Hi guys, just wanted to take part in this interesting discussion, and I'd like some feedback about *not* using the exp function and using pow instead.
Let me explain why: I'm not sure whether approximating via Euler's number introduces some error that could accumulate over time (it would probably be barely noticeable anyway), but the thing that itches me about exp is that you need to work out yet another factor.
As an example, assuming we are working with a fixed timestep, suppose your first version, at 60 Hz, is good with a friction of 0.975; then the exp factor should be ~1.5 so that:

exp(-1.5 * (1/60))=0.975309 and exp(-1.5 * (1/30))=0.951229

By using pow instead, you are sure you are not introducing any "error" and you don't need to "search" for the correct factor since:

pow(0.975, 60*(1/60))=0.975 and pow(0.975, 60*(1/30))=0.950625

I tested both in my engine and they behave and look practically the same, so it's basically a non-issue, but visual results aside, what do you all think from a mathematical standpoint?
Considering that x^y = exp(ln(x)*y), using exp or pow is equivalent anyway (up to the "correction factor")...
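That equivalence is easy to check numerically. Here's a quick sketch (plain Python; the 0.975-at-60Hz numbers come from the example above):

```python
import math

base = 0.975        # per-step friction multiplier at the reference rate
ref_rate = 60.0     # reference update rate in Hz

# The exp "correction factor" that the pow form lets you skip working out:
k = -ref_rate * math.log(base)   # ~1.519, close to the ~1.5 guessed above

for dt in (1.0 / 60.0, 1.0 / 30.0):
    via_exp = math.exp(-k * dt)
    via_pow = base ** (ref_rate * dt)
    # Identical up to floating-point rounding, since x**y == exp(log(x) * y).
    print(dt, via_exp, via_pow)
```

With the factor derived this way rather than guessed, the two forms agree to rounding error, so the choice really is a matter of taste.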

Here are some downsides of the fixed-timestep approach:

1. If you use a fixed physics timestep but a variable render frame time (or a render frame time that isn't a multiple of the physics timestep), but don't interpolate the physics results, then a smoothly moving physics object will not move smoothly across the screen, as sometimes it will experience N updates, and sometimes N+1 (for a fixed render frame time).

Then interpolate. This isn’t a downside. It is just one more thing you have to do.
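For reference, the standard accumulator-plus-interpolation loop being discussed here looks something like this (a Python sketch; the names are illustrative, not from any particular engine):

```python
# Sketch of a fixed-step update with render interpolation (names illustrative).
DT = 1.0 / 60.0  # fixed physics timestep

def run_frame(state, prev_state, accumulator, frame_time, physics_step):
    """Advance physics in whole DT steps and return the blend factor
    the renderer should use between the last two physics states."""
    accumulator += frame_time
    while accumulator >= DT:
        prev_state = state
        state = physics_step(state, DT)
        accumulator -= DT
    alpha = accumulator / DT  # in [0, 1): how far we are past prev_state
    return state, prev_state, accumulator, alpha

# The renderer draws prev_state blended toward state:
#   render_pos = prev_pos * (1 - alpha) + pos * alpha
```

The leftover time in the accumulator is exactly what `alpha` encodes, so no simulated time is ever lost between frames.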



2.1 that's an additional cost (which might be significant if you've got a lot of objects),

The cost is far less than performing physics every frame. This should not be on the list because it is not a downside compared to the alternative: performing full physics on every object every frame is a significant cost even without a lot of objects.
Performing physics only once every 3 or 4 frames, with interpolation of a single matrix every frame, always wins.
This is actually a pro for fixed steps, because they allow you to avoid waste. Performing physics with very small time steps is a waste of CPU resources for most simulations, since the improvement rapidly diminishes as the time step goes down.



2.2 Also there's some "uncertainty" about where the object is - so when a user shoots an object, where is the impulse applied?

I don’t know what you mean.
Interpolating objects is purely for graphical smoothness. The physical simulation is always based off their actual locations, not the interpolated locations.
There shouldn’t be uncertainty if this is understood and followed. The bullet is spawned based off the gun’s physical location. It hits the target where the physics simulation says it hits during one of those fixed-time updates, based on the physical positions of the bullet and the target.
There shouldn’t be any confusion there.

The interpolated matrix is for nothing but graphics and camera locks—if a camera follows an object it needs to follow the interpolated positions to avoid jitter, but this is still ultimately a graphics issue.



2.3 Debugging might be fun if your debug points etc don't match up with the interpolated object positions

This isn’t an issue. Interpolated positions are for graphics. It’s a single matrix that you send to the GPU. It is easily ignored during debugging sessions, where we will be focusing on nothing but the actual current physical location of the objects in question.



2.4 If the ratio between the physics timestep and render timestep is such that sometimes you run 2 physics steps, and sometimes 1 (for example), and the physics simulation time is significant, then the frame rate can jitter from frame to frame (a lower, but rock steady, framerate is likely better than a high but variable framerate). If there's a spike in the render frame time (e.g. due to disk access), then a fixed physics timestep will result in a spike in the next frame time etc.

This one is true, but it is not a situation that is handled better with a variable time step. A fixed step can usually stutter back up to speed while maintaining simulation stability and consistency. A variable step can cause objects resting on tables to suddenly jump into the air, stacks of things to explode, and objects to fall through floors into negative infinity.
A stutter in the framerate is an inconvenience to the player. An inconsistent simulation is a game-breaker.

Either way, fixed or variable, this actually hints at a deeper underlying problem, and the game should be redesigned to avoid potential simulations it can’t handle.
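One common mitigation for the spike case (sketched here in Python with illustrative names) is to clamp the per-frame delta before it reaches the fixed-step accumulator, so a one-off hitch causes a visible stutter instead of snowballing into an ever-growing backlog of catch-up steps:

```python
# Clamp the delta fed to a fixed-step accumulator; names are illustrative.
MAX_FRAME_TIME = 0.25  # e.g. a disk-access hitch gets capped at 250 ms

def accumulate(accumulator, frame_time):
    """Add real elapsed time, clamped, so one slow frame can't trigger
    an unbounded number of catch-up physics steps."""
    return accumulator + min(frame_time, MAX_FRAME_TIME)
```

The clamp value is a tuning choice; the trade-off is that clamped time is simply dropped, so the simulation briefly runs slower than real time rather than exploding.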



3 If the game allows running in slow motion (bullet time) you need to handle at least two, possibly very different timesteps, and the transition between them, otherwise either (a) you always update at the normal timestep, and the effects of interpolation will become obvious, or (b) you always run at the reduced timestep, with a cost that's excessive when running at the normal rate.

This isn’t how fast/slow motion is handled, and both fixed and variable steps handle this equally.
If you want your game to run at 1.5 speed, the “time since last update” is simply multiplied by 1.5 and accumulated, moving the game ahead that much faster.
The fixed-step simulation simply believes that more time has passed and thus performs updates 1.5 times more often. It still passes the same number of microseconds to the logical update/physics simulation; it is just doing so more often.

Quite elegant, yet still stable.
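As a sketch of that idea (Python, illustrative names): the wall-clock delta is scaled before it reaches the fixed-step accumulator, and the step size itself never changes:

```python
# Game-speed scaling with a fixed step (names illustrative).
DT = 1.0 / 60.0  # fixed physics timestep

def scaled_accumulate(accumulator, real_elapsed, time_scale):
    """Scale the wall-clock delta before accumulating; the fixed-step
    simulation just sees more (or less) time having passed."""
    return accumulator + real_elapsed * time_scale

def drain_steps(accumulator):
    """Count how many fixed DT updates the accumulator now covers."""
    steps = 0
    while accumulator >= DT:
        accumulator -= DT
        steps += 1
    return steps, accumulator

# At 1.5x speed, one real second yields ~90 fixed updates instead of ~60.
```

Each individual update still receives exactly DT, which is what keeps the simulation stable under speed changes.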



This was originally my quick placeholder before moving to fixed physics timestep and interpolation (which I've done before), but actually I think it works great, so I don't intend to change!

Then prepare for the consequences. Fixed stepping is always better than variable stepping. The only place in which variable stepping is acceptable is in small hobby projects that won’t see a lot of use by a lot of people.




I know some big commercial games use fixed, and some variable, physics timesteps.

I would like to see some citations.
Given the cons of variable stepping, it is unlikely any professional game released since the '90s is using them.

Here are the primary cons for variable steps:
#1: Physics simulation explosions.
#2: Inconsistent physics simulations leading to:
* #2.1: Players over networks seeing different things.
* #2.2: Missed collisions, things passing through other things, etc. This certainly can happen with fixed steps, but at least there it happens every time and is easy to debug/fix. The same fix for variable steps doesn’t work as well since the steps can be extra long.
#3: The simulation is done every frame, which is wasteful and decreases the FPS significantly.

It basically means that every game using physics or multiplayer over networks is using a fixed step.
Every game using CryEngine *, Unreal Engine *, id Tech Engine *, or Unity 3D is using a fixed step. That covers about every game out there except console RPGs, but I can personally vouch for Final Fantasy games, Star Ocean, End of Eternity, Valkyrie Profile, and basically everything made between tri-Ace and Square Enix.
That essentially covers every game there is.


The only pro variable steps have over fixed is that they are easy to implement.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Performing physics only once every 3 or 4 frames with interpolation of a single matrix every frame always wins.


Assuming the minimum update rate for a physics engine is 30Hz - you're talking about games that are running at/over 100FPS. Yes there's a class of games that can (when uncapped) run at insanely high FPS, such that there's no benefit to using corresponding small physics timesteps. However, there's a lot of computationally/graphically intensive games, perhaps on console or mobile devices, that render at between 30 and 60FPS, and expect to perform a physics update every render update.

All I'm suggesting is that if your game normally runs at 30FPS, but occasionally lags when things get busy, it's definitely cheaper to run an occasional physics step of say 1/27 seconds, than it is to incur the cost of 1.1 fixed 1/30 steps, plus interpolation. Whether or not it's a good thing to do depends also on whether you have a well behaved physics engine (it sounds like you don't!), whether you are capping the maximum physics timestep, and don't have strong constraints on exact reproducibility.


Performing physics with very small time steps a waste of CPU resources for most simulations since the improvement is exponentially less existent as the time step goes down.

It's a bit irrelevant, but that is incorrect, in my experience (working a lot with character physics). The larger the timestep, the greater the errors and inaccuracy of the approximations (e.g. linearity) in the solver - but importantly these errors all interact with each other, so the effects "multiply". I confess I haven't measured it, but my impression is that halving the timestep more than doubles the accuracy of, for example, ragdoll simulation (e.g. joint separation, joint limit violation etc. all reduce by more than a factor of two). The effect is far greater than increasing the iteration count of an iterative solver - i.e. I'd expect to see much better behaviour of a jointed chain at 60Hz with 4 iterations than at 30Hz with 16 iterations... Actually in some (common) situations solvers will not converge however many iterations one uses - the only practical way to improve behaviour is to reduce the timestep.



2.2 Also there's some "uncertainty" about where the object is - so when a user shoots an object, where is the impulse applied?

I don’t know what you mean.
Interpolating objects is purely for graphical smoothness.

It's not purely for graphical smoothness, because the player interacts with what is rendered, not necessarily with what is simulated. If you use interpolation during bullet time (e.g. a 10x slowdown), you're going to see massive lag between shooting an object and seeing the interaction (actually, the lag will vary depending where you are in the interpolation). Also, assuming you shoot when the moving object is half way through an interpolation, where exactly do you apply the impulse to the object? The impact point that the player has chosen is not necessarily near the object at either of its two physical positions.


A variable step will result in objects resting on tables to suddenly jump up into the air, stacks of things suddenly exploding, objects falling through floors into negative infinity.

A good physics engine will not do these things if you're normally updating it at 60FPS and you occasionally give it a step of 1/50.

It sounds like you're making a case against variable steps based on either your experience with a bad physics engine, a badly set up physics engine, or maybe you're equating variable steps with large steps.



3 If the game allows running in slow motion (bullet time) you need to handle at least two, possibly very different timesteps, and the transition between them, otherwise either (a) you always update at the normal timestep, and the effects of interpolation will become obvious, or (b) you always run at the reduced timestep, with a cost that's excessive when running at the normal rate.

This isn’t how fast/slow motion is handled, and both fixed and variable steps handle this equally.
If you want your game to run at 1.5 speed...

Bullet time is used for when you want to run at, say, 0.1x normal speed. If you render at 60FPS, simulate physics at a fixed 1/30, and are using bullet time of 0.1 you'll only run physics 3 times a second. The gameplay lag with that would be unacceptable.



This was originally my quick placeholder before moving to fixed physics timestep and interpolation (which I've done before), but actually I think it works great, so I don't intend to change!

Then prepare for the consequences. Fixed stepping is always better than variable stepping. The only place in which variable stepping is acceptable is in small hobby projects that won’t see a lot of use by a lot of people.

All I was saying is that I don't intend to fix a problem that doesn't yet exist, especially when the solution has its own problems. So, I am prepared for the consequences, and they're good :)

I'd like other people to understand the problem too, and make their choice based on reason rather than dogma. Different situations/games call for different solutions.



I know some big commerical games use fixed, and some variable physics timesteps.

I would like to see some citations.

I'm not in a position to talk about that (or rather, I am in a position where I can't!). However, I stand by what I wrote.

A good physics engine will not do these things if you're normally updating it at 60FPS and you occasionally give it a step of 1/50.

It sounds like you're making a case against variable steps based on either your experience with a bad physics engine, a badly set up physics engine, or maybe you're equating variable steps with large steps.


or maybe you [are acknowledging that large steps are equally possible with variable-step engines]

The problem is that you aren’t. I see no reason to talk about a difference between updates ranging from 0.016 seconds to 0.02 seconds.

According to you, despite the fact that there is a well known understanding by basically every physics engine developer that fixed time steps are required for a stable physics simulation, all engines that don’t behave the same with steps at 0.02 seconds and at 1.2 seconds are poorly constructed.
According to you, your engine shows a marked improvement in simulation quality with smaller steps, but for some reason your engine is not poorly constructed even though extrapolation tells us it would not produce the same results at steps of 0.02 seconds and at 1.2 seconds.

You yourself mentioned an unexpectedly long delay between frames as one of the causes for problems with fixed time steps, even going out of your way to make sure we all understand that it can happen at any time for any reason, yet when I mentioned what happens to variable-step engines, suddenly A: My physics engine is bad, B: My physics engine was poorly set up, or C: I equate variable stepping with large steps (because we all know that large steps never happen with variable-stepping engines).


Your post was very informative.
It tells me that there is no further reason to continue this discussion.

Which is too bad because I had a great reply for bullet time.


L. Spiro


There's going to be a range of timesteps where a physics engine will work well for a given game/system. For human-sized objects, even with the "best" physics engine, dt > 1/20 is generally too large for real use, even for simple rigid dynamics. If the engine is implemented well it should work for any timestep smaller than this - generally getting better and better (in terms of quality) as the timestep approaches 0.

So - I'm not interested in (and haven't been) talking about huge timesteps. They're not going to work whether you have a fixed or a variable timestep system. If you end up having to handle a large update time you can run multiple physics updates with either system. In the case of a variable timestep system you can just divide the update time into as many steps as is required to make sure each individual physics update is shorter than the critical one you've determined for your system.
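That subdivision can be sketched as follows (Python; `MAX_STEP` and the function names are illustrative, not from any particular engine):

```python
import math

# Dividing one large variable update into safe substeps (illustrative sketch).
MAX_STEP = 1.0 / 50.0  # largest single step this hypothetical engine handles well

def substep(total_dt, physics_step, state):
    """Run physics in n equal slices, each no longer than MAX_STEP."""
    n = max(1, math.ceil(total_dt / MAX_STEP))
    dt = total_dt / n
    for _ in range(n):
        state = physics_step(state, dt)
    return state
```

Using equal slices (rather than a run of MAX_STEP slices plus one small remainder) keeps the step size consistent within the frame, which solvers generally prefer.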

But I said that before... and also that it's dependent on the game/situation how the advantages/disadvantages of each stack up.

If you have a great idea for handling bullet time, please post it - I'm sure other gamedev users would appreciate your thoughts, because it can be a tricky problem.
If the ratio by which time is slowed down in bullet time is large, the delay introduced by the fixed-timestep simulation might just become too noticeable, if all we do is multiply the elapsed time by a constant as was suggested in this thread.

The solution that comes to mind is having the physics use a "variable" timestep that has just two settings: one for real time and one for bullet time. Are there any obvious problems with this?

The solution that comes to mind is having the physics use a "variable" timestep that has just two settings: one for real time and one for bullet time. Are there any obvious problems with this?


The transition is the difficult part. A stack of objects may be stable with dt = 1/30 and it may be stable with dt = 1/300, but when you transition from the former (which has likely got some constant non-destabilising error due to imperfect solver convergence) to the latter, the errors inherited from the previous timestep regime may cause glitches as the system adjusts.

Quite often engines use a simple stabilisation where the contact penetration error (for example) is added to the velocity constraint. That means the solver tries to remove, say, 1cm of penetration in 1/300 seconds - resulting in objects flying up into the air at 3m/s when you switch timesteps.
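The arithmetic behind that 3 m/s figure, for anyone checking:

```python
# Worked numbers for the stabilisation example above (full-bias case, i.e. the
# whole penetration error is converted to velocity in a single step):
penetration = 0.01   # 1 cm of contact penetration carried over
dt = 1.0 / 300.0     # the much smaller timestep after switching to bullet time
bias_velocity = penetration / dt  # separating velocity the solver injects: ~3 m/s
```

Real engines typically scale this by a bias factor well below 1 (and/or cap it), which is exactly why naive full-bias stabilisation reacts so badly to a sudden timestep change.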

Also, if you have solver errors that don't reduce in proportion to the timestep, then your solver can actually get worse at small timesteps. I used to see this with PhysX 2.x - joint limits would make ragdolls explode when simulating in extreme slow motion.

In Bullet you've got "split impulses", and in PhysX "velocity iterations" (etc) that help prevent this error fixup from introducing energy into the simulation. It can work very well, and all but eliminate explosions due to large timestep changes, let alone small ones.

Anyway - the point is that to handle the transition your physics engine needs to be able to handle the timestep varying from update to update. So... if it can handle one-off extreme transitions in the timestep, perhaps it can handle frequent minor variation in the timestep?
So I was reading the fixed vs variable timestep discussion and a question just struck me: how does one implement a deterministic engine with such a variable timestep?
I mean, you need a predictable black box that, given a set of inputs, always produces the same output; that's the basis for a stable replay system - so what about that?
With a variable timestep you are making your system less and less predictable, and you're doing it on purpose, so recording user inputs on machine A won't play back the same exact scene on machine B. I just wonder how a variable timestep could handle that, if it could at all.

The only point in favor of a variable timestep approach is that its downsides and instabilities are enough to demonstrate the many benefits of a fixed timestep.

So I was reading the fixed vs variable timestep discussion and a question just struck me: how does one implement a deterministic engine with such a variable timestep?


Well you could simply record the timestep along with the inputs (and interpolate to render, of course).
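A minimal sketch of that record-and-replay idea (Python; the names are illustrative):

```python
# Variable-step replay log: store (dt, inputs) per update (illustrative sketch).
def record(log, dt, inputs):
    """Append one update's timestep and input sample to the log."""
    log.append((dt, inputs))

def replay(log, state, physics_step):
    """Re-run the simulation with the exact recorded timesteps and inputs.
    Given a deterministic physics_step, this reproduces the original run."""
    for dt, inputs in log:
        state = physics_step(state, dt, inputs)
    return state
```

This only works if the step function itself is deterministic for a given (state, dt, inputs) triple - the variable timestep stops being a source of divergence once it's part of the recorded data.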

Another option is to not use the physics engine for replay - simply store keyframes of all the important objects (and probably use particle physics for effects) and play back/interpolate.


The only point in favor of a variable timestep approach is that its downsides and instabilities are enough to demonstrate the many benefits of a fixed timestep.

Perhaps you could give some examples where varying the timestep by the amounts I've been talking about* would lead to instability?

* I've only been talking about varying it within the range where the physics engine gives good/stable results, remember.

This topic is closed to new replies.
