
What if a client can't run at desired fixed time step?


14 replies to this topic

#1 wilberolive   Members   -  Reputation: 227


Posted 03 March 2014 - 06:48 AM

I keep reading about fixing your time step to help improve synchronization when dealing with movement and such (i.e. velocities) across a multiplayer game. I notice that most people seem to pick something like 60Hz as their fixed time step. What happens though if you have a client that cannot physically run that fast, such as an old laptop that can only manage to run at 28Hz for example?

 

Would you just run as fast as possible in this case with the delta time taking care of things in equations? Isn't this just the same then as variable time step? Therefore is it really "fixed time step as long as the client can attain it, otherwise variable time step?"



#2 nfactorial   Members   -  Reputation: 727


Posted 03 March 2014 - 07:06 AM

You pick a time step that can be managed on your lowest-spec target machine; if someone runs the app on a machine with a lower spec, then it isn't supported. So it comes down to the developer choosing what spec they are targeting and writing the game code so that it fits.

 

Options available include reducing or completely disabling non-essential systems based on machine spec, etc.

 

n!



#3 wilberolive   Members   -  Reputation: 227


Posted 03 March 2014 - 07:59 AM

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?

#4 hplus0603   Moderators   -  Reputation: 5098


Posted 03 March 2014 - 10:47 AM

What happens though if you have a client that cannot physically run that fast


Then your machine is below the minimum acceptable spec for the game.

Seems pretty high if that needs to account for the minimum spec?


Not really. Physics typically needs to run at a high rate, to avoid "tunneling" through thin walls and the like. Graphics, however, does not have a typical "minimum frame rate" (although below 5 Hz is going to be painful to play most games.)

Additionally, most machines have ample CPU (and perhaps RAM) but there are orders of magnitude of difference in GPU power. Thus, it makes sense to fix physics at some rate that's high enough for solid simulation, and high enough that "typical" vsync render frame rates will end up displaying one step per frame. On low-spec machines, not all simulated physics steps will be rendered, but that's typically better than the alternative.
enum Bool { True, False, FileNotFound };

#5 Olof Hedman   Crossbones+   -  Reputation: 2703


Posted 03 March 2014 - 11:09 AM


Graphics, however, does not have a typical "minimum frame rate" (although below 5 Hz is going to be painful to play most games.)

 

Just an OT nitpick, but graphics update rate definitely matters for many games; any action game where fast reactions are needed (for example a car racing game) needs a high and stable graphics update too, to be playable.

 

Or at least the enjoyment factor increases a lot :)


Edited by Olof Hedman, 03 March 2014 - 11:10 AM.


#6 SeanMiddleditch   Members   -  Reputation: 4735


Posted 03 March 2014 - 12:29 PM

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?


Keep in mind this is the timestep just for physics updates. AI/gameplay and graphics can run at lower rates. You do have to be careful of the "spiral of doom," where physics takes up too much time, so it has to run multiple steps next frame, making it take up even more time, until your game hangs. One option is to cap physics update time, so if the machine can't cope with the workload, the game just slows down (so one physics second != one real second).

The idea of a fixed time step is just that the time the physics update uses is a fixed, non-variable rate. How that rate maps to real time is a different issue.
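The capped catch-up loop described above can be sketched like this (a minimal sketch; the names `run_frame` and `update_physics` are illustrative, not from any particular engine):

```python
FIXED_DT = 1.0 / 60.0   # 60 Hz physics step
MAX_STEPS = 5           # cap on catch-up steps to avoid the "spiral of doom"

def run_frame(accumulator, frame_time, update_physics):
    """Advance physics in fixed-size steps; return leftover accumulator time."""
    accumulator += frame_time
    steps = 0
    while accumulator >= FIXED_DT and steps < MAX_STEPS:
        update_physics(FIXED_DT)   # always called with the same fixed dt
        accumulator -= FIXED_DT
        steps += 1
    if steps == MAX_STEPS:
        # The machine can't keep up: drop the backlog so the game slows
        # down (one physics second != one real second) instead of hanging.
        accumulator = 0.0
    return accumulator
```

Each physics step always receives the same `FIXED_DT`; only how many steps run per rendered frame varies with machine speed.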

#7 nfactorial   Members   -  Reputation: 727


Posted 03 March 2014 - 01:42 PM

So many articles and forum posts I've read seem to pick 60Hz as their desired fixed time step? Seems pretty high if that needs to account for the minimum spec? Surely there is something else going on here?

 

What kind of machines are you worried about running on? Games ran at 50 fps (PAL) on my old ZX Spectrum; even low-spec PCs are thousands of times more powerful :)

 

n!



#8 frob   Moderators   -  Reputation: 19757


Posted 03 March 2014 - 01:47 PM

On the physics vs graphics discussion, if your game has a competitive aspect 60MHz is probably too low for graphics.

 

Physics and rendering should be decoupled. The simulation's update rate can be quite low in many games. You want it fast enough that players can feel like they are moving freely, but slow enough that all computers can keep up.

 

The popular update rates have their sources. 60Hz is popular in games because of television standards. Console games and systems come from the standard definition TV era. PAL offers 625 lines @ 50 Hz, NTSC offers 525 lines @ 60 Hz. Since both are interlaced you can get away with half speeds without gamers complaining too much. Modern video standards for HDTV also tend to focus on resolutions at 60 Hz, although many screens and resolutions can go much faster.

 

That gives the common graphics speeds of 15Hz, 25Hz, 30Hz, 50Hz, and 60Hz.

 

In competitive gaming and twitch-style games, players want all the graphics speed they can get. While it is true a DVI cable's highest resolutions are meant for a 60Hz screen, it can go much higher. 1280x1024 @ 85 Hz or 1280x960 @ 85 Hz are common settings in the competitive world, which saturate a single DVI cable. Many competitive players buy dual-link DVI monitors that run at 120 Hz or faster, crank up video card speed, and turn down the visual quality of everything in favor of speed. The extra images help them aim more quickly and accurately, improving their competitive abilities.

 

So even if your game physics run at a specific fixed time step (which is common) it should be decoupled from game rendering, which should generally be configurable to run as fast as the hardware allows.


Edited by frob, 03 March 2014 - 01:59 PM.

Check out my personal indie blog at bryanwagstaff.com.

#9 zzzz....   Members   -  Reputation: 88


Posted 03 March 2014 - 02:49 PM

You can't scale all aspects, unfortunately, unless you create some kind of super prediction rules, and even then memory space will limit you. You just can't run Doom 3 on a Game Boy :-) If something can run at 28Hz instead of 60, though, you could potentially scale it appropriately by making larger time jumps or similar, but it all has a minimum performance below which the user would get garbage to look at.

 

You can scale in space (resolution) and time (samples), but memory and interaction are hard to scale, as they usually require a certain number of solver iterations before they "settle".

 

I wish there was a function f(time) that just magically gave the answer but most of them require some kind of integration over time.

 

Your best bet is to limit the number of objects and to solve costly interactions less frequently (for example, by doing more dynamics for closer objects and less for those far away). That potentially yields wrong states, but that's what you pay for ultimate scalability.


Edited by jbadams, 15 April 2014 - 02:19 AM.
Restored post contents from history.


#10 wilberolive   Members   -  Reputation: 227


Posted 03 March 2014 - 05:41 PM

Thanks everyone for all your replies. I already have my physics fixed at 60Hz, so I guess I'm already doing the right thing. For some reason I was starting to think that I also needed to fix game play logic to 60Hz, but it appears not.

 

I guess what I'm wondering is: if my game logic is running at 100Hz and I need to add a velocity to an object, which will later be simulated by physics at 60Hz, do I need to scale that velocity first by 0.01 (1 second / 100Hz)?

 

BTW, I know this is the multiplayer forum, but my questions are in regards to syncing the game across the network so the simulations appear close enough :-)
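(For illustration, a common way to sidestep that per-system rescaling question entirely is to express velocities in units per second and let each loop multiply by its own dt. A sketch with made-up names, not from this thread:)

```python
LOGIC_DT = 1.0 / 100.0   # 100 Hz game-logic step
PHYSICS_DT = 1.0 / 60.0  # 60 Hz physics step

def integrate(position, velocity, dt):
    """Advance a position by a velocity expressed in units/second."""
    return position + velocity * dt

# One real second of stepping at either rate covers the same distance,
# so no hand-applied rescaling factor (like 0.01) is needed anywhere.
pos_logic = 0.0
for _ in range(100):          # one second of 100 Hz logic steps
    pos_logic = integrate(pos_logic, 300.0, LOGIC_DT)

pos_physics = 0.0
for _ in range(60):           # one second of 60 Hz physics steps
    pos_physics = integrate(pos_physics, 300.0, PHYSICS_DT)
```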


Edited by wilberolive, 03 March 2014 - 05:42 PM.


#11 hplus0603   Moderators   -  Reputation: 5098


Posted 03 March 2014 - 06:31 PM

if your game has a competitive aspect 60MHz is probably too low for graphics


Even if you divide that by 1,000,000, I think you're buying into a little too much hype ;-)
(I'm assuming you meant 60 Hz frame rate here.)

Console first person shooters (with competitive online matches) often run graphics/physics at 30 Hz, and networking at 12 Hz or 10 Hz, and still seem to be mightily popular.

Also, if it's competitive, then the competitive players will have nice graphics cards and CPUs, and thus the min-spec experience isn't necessarily what they will compare to.

When it comes to graphics, you can also scale back a LOT on quality in favor of speed -- simpler shaders, opaque instead of translucent foliage, lower display resolution, fewer (or no) shadows, lower-resolution textures, etc. It all depends on what your actual bottleneck is.
enum Bool { True, False, FileNotFound };

#12 Olof Hedman   Crossbones+   -  Reputation: 2703


Posted 04 March 2014 - 03:04 AM


Console first person shooters (with competitive online matches) often run graphics/physics at 30 Hz, and networking at 12 Hz or 10 Hz, and still seem to be mightily popular.

 

I think he means esports, not just competitive multiplayer.

On console, you are limited by the sluggish control scheme anyhow, and as long as everyone has the same limitation, it's OK.

 

Sure, the game doesn't break down as much as it would if the physics were updated less often, but a high framerate does a lot for almost any title imo...

Even Hearthstone plays nicer and is nicer to look at on the "high" setting, where it's not capped to 30fps. (No other difference that I can see between those settings.)

 

I'm a bit sad that so many in the industry seem to think that "30 fps is enough". It really isn't, imho.

The only reason movies get away with very low framerates like that is the massive amount of motion blur...

 

Motion gets so much more fluid at 60fps+



#13 hplus0603   Moderators   -  Reputation: 5098


Posted 04 March 2014 - 10:51 AM

high framerate does a lot for almost any title imo


Oh, I agree! In this thread, though, we're talking about how to support old laptops with single-core CPUs and Intel GMA 950 graphics, and other such low-end devices. There's lots of them out there.

If you want to fix your time step to 60 Hz for simulation, but render faster, you can do that by treating the simulation as an input to an interpolator or extrapolator for rendering. Or you can run simulation at 100 or 120 Hz -- as long as you have a well-optimized physics system, and don't do crazy n-squared algorithms between thousands of NPCs, it's likely that most PCs used for playing games these days will keep up. (Not true if you're using JavaScript in web browsers on ChromeBooks / phones.)
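The interpolator approach can be sketched as follows (illustrative names, not a specific engine's API: the renderer blends the two most recent fixed-rate simulation snapshots using the leftover accumulator time):

```python
SIM_DT = 1.0 / 60.0   # fixed 60 Hz simulation step

def render_state(prev_state, curr_state, accumulator):
    """Blend two simulation snapshots for drawing between fixed steps.

    accumulator is the unsimulated time left over after the last fixed
    step, so alpha stays in [0, 1) and rendering lags at most one step.
    """
    alpha = accumulator / SIM_DT
    return prev_state + (curr_state - prev_state) * alpha
```

This lets a 120 Hz (or uncapped) renderer draw smooth motion from a 60 Hz simulation, at the cost of displaying the world slightly in the past.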

As part of making a game, you need to decide what kind of experience you want to deliver, and then what kind of platform will support that experience, and tune those selections based on the technology, time, and skill available for implementation, to reach a cohesive deliverable. You can do a whole lot of tuning and optimizing here, with various fallback paths, etc -- at the cost of not getting to spend that time on other things that may make your game "better" for whatever the "core player" is for the game. Your choice!

Edited by hplus0603, 04 March 2014 - 10:53 AM.

enum Bool { True, False, FileNotFound };

#14 frob   Moderators   -  Reputation: 19757


Posted 04 March 2014 - 03:56 PM

Ah, yes, I was referring to the sport-style environment, not the "friends on LIVE" environment. When I write of the competitive environment, I am referring to the people (including one of my brothers) who routinely compete in semi-pro game leagues and competitions. These are often run by the various companies, sponsored by hardware vendors, and usually end up inviting all the semi-final competitors to Las Vegas from around the globe for the final few rounds of web-broadcast events. They also involve prizes of cash and hardware. (My brother just finished one such tournament that offered a $100,000 cash prize to the winning team; his team took fourth, with each player getting $5,000 in addition to the all-expenses-paid vacation.) The competition rigs that people bring to these events are rather impressive pieces of hardware, often with every piece of non-essential software removed and the entire machine dedicated to that one game's optimum performance. The networks provided are also often rather intense, with very low-latency network switches and similar cards in most boxes.


The main takeaways from my post were basically that (1) reduced coupling between unrelated components is usually a good thing, and (2) the update frequencies of all the systems should be based on what is best for the game, not because of the old TV standards.

Update as fast as you need to give a good simulation, whatever that means for your game.
Draw as fast as reasonable for your game. Obviously a network simulation of go or chess will have different display requirements than a twitch shooter.
Many game styles benefit if you decouple players from each other, some work well with completely independent simulations with just a few key pieces kept synced.

Finally, figure out update rates that make sense for your game. If 60 Hz makes sense for your simulation, then great. If instead 15 Hz makes sense for your simulation, or 85, or some other number, then that is what is best for you. If your game has AI that runs at 4 Hz, player input that is processed at 30 Hz, and a physics world that gets simulated at 120 Hz, and that works for you, then great; whatever works for you.
Check out my personal indie blog at bryanwagstaff.com.

#15 Hodgman   Moderators   -  Reputation: 28574


Posted 04 March 2014 - 06:22 PM

FWIW, it's possible to play 'professional' Counter-Strike on a client doing rendering/input/prediction at 120Hz, but a server that's only simulating and forwarding network data at 20Hz ;)

They do constantly rewind their simulation to inject inputs at the appropriate times though...





