What if a client can't run at the desired fixed time step?

13 comments, last by Hodgman 10 years, 1 month ago

I keep reading about fixing your time step to help improve synchronization when dealing with movement and such (i.e. velocities) across a multiplayer game. I notice that most people seem to pick something like 60Hz as their fixed time step. What happens though if you have a client that cannot physically run that fast, such as an old laptop that can only manage to run at 28Hz for example?

Would you just run as fast as possible in this case with the delta time taking care of things in equations? Isn't this just the same then as variable time step? Therefore is it really "fixed time step as long as the client can attain it, otherwise variable time step?"


You pick a time step that can be managed on your lowest-spec target machine; if someone runs the app on a machine with a lower spec, then it isn't supported. So it comes down to the developer choosing what spec they are targeting and writing the game code so that it fits.

Options available include reducing or completely disabling non-essential systems based on machine spec, etc.

n!

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?

What happens though if you have a client that cannot physically run that fast


Then your machine is below the minimum acceptable spec for the game.

Seems pretty high if that needs to account for the minimum spec?


Not really. Physics typically needs to run at a high rate, to avoid "tunneling" through thin walls and the like. Graphics, however, does not have a typical "minimum frame rate" (although below 5 Hz is going to be painful to play most games.)

Additionally, most machines have ample CPU (and perhaps RAM) but there are orders of magnitude of difference in GPU power. Thus, it makes sense to fix physics at some rate that's high enough for solid simulation, and high enough that "typical" vsync render frame rates will end up displaying one step per frame. On low-spec machines, not all simulated physics steps will be rendered, but that's typically better than the alternative.
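
In code, the kind of loop being described might look something like this, a minimal sketch only (stepPhysics, render, nowSeconds and running are hypothetical placeholder names, and 60 Hz is just the rate discussed in this thread):

```cpp
// Fixed-timestep loop (illustrative sketch).
const double kFixedDt = 1.0 / 60.0;   // physics step size in seconds

double previous    = nowSeconds();    // hypothetical high-resolution clock wrapper
double accumulator = 0.0;

while (running) {
    double current = nowSeconds();
    accumulator   += current - previous;
    previous       = current;

    // Advance physics in fixed increments. On a slow machine the renderer
    // falls behind, so several of these simulated states are never drawn.
    while (accumulator >= kFixedDt) {
        stepPhysics(kFixedDt);
        accumulator -= kFixedDt;
    }

    render();   // graphics runs at whatever rate the hardware manages
}
```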
enum Bool { True, False, FileNotFound };

Graphics, however, does not have a typical "minimum frame rate" (although below 5 Hz is going to be painful to play most games.)

Just an OT nitpick, but the graphics update rate definitely matters for many games; any action game where fast reactions are needed (a car racing game, for example) needs a high and stable graphics update rate too, to be playable.

Or at least the enjoyment factor increases a lot :)

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?


Keep in mind this is the timestep just for physics updates. AI/gameplay and graphics can run at lower rates. You do have to be careful of the "spiral of doom" where physics takes up too much time, so it has to run multiple steps next frame, making it take up even more time, until your game hangs. One option is to cap physics update time so that if the machine can't cope with the workload, the game just slows down (so one physics second != one real second).

The idea of a fixed time step is just that the time the physics update uses is a fixed, non-variable rate. How that rate maps to real time is a different issue.
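
A hedged sketch of that cap (the limit of 5 steps is an arbitrary choice for the example, and stepPhysics is a hypothetical update function):

```cpp
// Guard against the "spiral of doom": limit fixed steps per frame.
const double kFixedDt  = 1.0 / 60.0;
const int    kMaxSteps = 5;

void pumpPhysics(double frameSeconds, double& accumulator) {
    accumulator += frameSeconds;
    int steps = 0;
    while (accumulator >= kFixedDt && steps < kMaxSteps) {
        stepPhysics(kFixedDt);        // hypothetical physics update
        accumulator -= kFixedDt;
        ++steps;
    }
    // If the cap was hit, drop the backlog: simulated time now lags real time,
    // i.e. one physics second != one real second and the game slows down
    // instead of hanging.
    if (steps == kMaxSteps) {
        accumulator = 0.0;
    }
}
```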

Sean Middleditch – Game Systems Engineer – Join my team!

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?

What kind of machines are you worried about running on? Games ran at 50 fps (PAL) on my old ZX Spectrum; even low-spec PCs are thousands of times more powerful :)

n!

On the physics vs. graphics discussion, if your game has a competitive aspect, 60Hz is probably too low for graphics.

Physics and rendering should be decoupled. The simulation's update rate can be quite low in many games. You want it fast enough that players can feel like they are moving freely, but slow enough that all computers can keep up.

The popular update rates have their sources. 60Hz is popular in games because of television standards. Console games and systems come from the standard definition TV era. PAL offers 625 lines @ 50 Hz, NTSC offers 525 lines @ 60 Hz. Since both are interlaced you can get away with half speeds without gamers complaining too much. Modern video standards for HDTV also tend to focus on resolutions at 60 Hz, although many screens and resolutions can go much faster.

That gives the common graphics speeds of 15Hz, 25Hz, 30Hz, 50Hz, and 60Hz.

In competitive gaming and twitch-style games, players want all the graphics speed they can get. While it is true a DVI cable's highest resolutions are meant for a 60Hz screen, it can go much higher: 1280x1024 @ 85 Hz or 1280x960 @ 85 Hz are common settings in the competitive world, which saturate a single DVI cable. Many competitive players buy dual-link DVI monitors that run at 120 Hz or faster, crank up video card speed, and turn down the visual quality of everything in favor of speed. The extra images help them aim more quickly and accurately, improving their competitive abilities.

So even if your game physics run at a specific fixed time step (which is common) it should be decoupled from game rendering, which should generally be configurable to run as fast as the hardware allows.
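
One common way to do that decoupling (just an illustrative sketch; RenderState, lerp() and render() are hypothetical placeholders) is to render as often as the display allows and blend between the two most recent physics states, so an 85 Hz or 120 Hz monitor still sees smooth motion from a 60 Hz simulation:

```cpp
// Decoupled rendering with interpolation between fixed physics steps.
void drawFrame(const RenderState& previous, const RenderState& current,
               double accumulator, double fixedDt) {
    double alpha = accumulator / fixedDt;     // 0..1: progress into the next physics step
    render(lerp(previous, current, alpha));   // interpolate the two latest states, then draw
}
```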

Unfortunately you can't scale all aspects unless you create some kind of super prediction rules, and even then memory space will limit you. You just can't run Doom 3 on a Game Boy :-) If something can run at 28Hz compared to 60, however, you could potentially scale it appropriately by making larger time jumps or similar, but everything has a minimum performance below which the user just gets garbage to look at.

You can scale in space (resolution) and time (samples), but memory and interaction are hard to scale, as they usually require a certain number of solver iterations before they "settle".

I wish there were a function f(time) that just magically gave the answer, but most simulations require some kind of integration over time.

Your best bet is to limit the number of objects and to solve costly interactions less frequently (for example by doing more dynamics for closer objects and less for those far away). That potentially yields wrong states, but that's the price you pay for ultimate scalability.
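
A minimal sketch of that idea (names and distance thresholds are made up for the example): nearby objects step every physics tick, distant ones take larger, less frequent time jumps, trading some accuracy for scalability.

```cpp
// Distance-based simulation LOD.
void updateObject(Object& obj, double distanceToViewer, long tickIndex,
                  double fixedDt) {
    int divisor = 1;                                // full rate when close
    if      (distanceToViewer > 100.0) divisor = 4; // quarter rate when far away
    else if (distanceToViewer >  30.0) divisor = 2; // half rate in between

    if (tickIndex % divisor == 0) {
        stepObjectPhysics(obj, fixedDt * divisor);  // hypothetical per-object update, larger time jump
    }
}
```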

Thanks everyone for all your replies. I already have my physics fixed at 60Hz, so I guess I'm already doing the right thing. For some reason I was starting to think that I also needed to fix gameplay logic at 60Hz, but it appears not.

I guess what I'm wondering is: if my game logic is running at 100Hz and I need to add a velocity to an object, which will later be simulated by physics at 60Hz, I will need to scale that velocity first by 0.01, right? (1 second / 100Hz)
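
A minimal sketch of that arithmetic, assuming velocities are stored in units per second and each system multiplies by its own tick length (all names and the example speed are made up for illustration):

```cpp
// Per-tick displacement at two different update rates.
const double velocity  = 3.0;           // example speed, 3 units per second
const double logicDt   = 1.0 / 100.0;   // 0.01 s per 100 Hz gameplay tick
const double physicsDt = 1.0 /  60.0;   // ~0.0167 s per 60 Hz physics step

const double perLogicTick   = velocity * logicDt;    // 0.03 units moved per gameplay tick
const double perPhysicsStep = velocity * physicsDt;  // 0.05 units moved per physics step
```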

BTW, I know this is the multiplayer forum, but my questions are in regards to syncing the game across the network so the simulations appear close enough :-)

