Acceptable input lag

What's acceptable input lag?

I'm implementing a threaded engine, but I'm thinking about how I would handle input once it's networked.

At the moment it's not networked, and it goes like this (n is the frame number; there's a rough code sketch below):
n=0: Platform input is read at the start of the frame. The input system handles the input and sends the results off to the physics system.
n=1: The physics system calculates the new positions.
n=2: The render system renders the new positions.
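
Roughly like this in code (simplified, made-up names, not my actual engine code): each system runs as its own task every frame, but only reads data produced in an earlier frame, so a key press at frame n first shows up on screen at frame n+2.

```cpp
#include <cstdio>
#include <future>

struct InputState   { int buttons = 0; };
struct PhysicsState { float x = 0.0f; };

InputState ReadPlatformInput()          { return InputState{1}; }                    // n = 0
PhysicsState StepPhysics(InputState in) { return PhysicsState{float(in.buttons)}; }  // n = 1
void Render(PhysicsState phys)          { std::printf("x = %.1f\n", phys.x); }       // n = 2

int main()
{
    InputState   prevInput;    // output of last frame's input task
    PhysicsState prevPhysics;  // output of last frame's physics task

    for (int n = 0; n < 3; ++n)
    {
        // All three tasks can overlap, because each one only reads data
        // that was produced in an earlier frame.
        auto inputTask   = std::async(std::launch::async, ReadPlatformInput);
        auto physicsTask = std::async(std::launch::async, StepPhysics, prevInput);
        auto renderTask  = std::async(std::launch::async, Render, prevPhysics);

        renderTask.get();
        prevPhysics = physicsTask.get();  // render will see this next frame
        prevInput   = inputTask.get();    // physics will see this next frame
    }
}
```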

Would this be acceptable?

I could make the render system dependent on the physics system and the physics system dependent on the input system, so the tasks of each system would all run in the same frame. But then, if the other threads in my engine are really fast, that would defeat the purpose of going multithreaded in the first place. What's more realistic is to make just the physics system dependent on the input system, which would reduce it to one frame of lag.
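
The fully chained version would look something like this (continuing the made-up names from the sketch above):

```cpp
// Physics now waits for this frame's input, and render waits for this
// frame's physics, so the whole chain happens inside one frame. The rest
// of the engine's work still runs alongside that chain.
#include <future>

void UpdateAI()    { /* other engine work */ }
void UpdateAudio() { /* other engine work */ }

void RunOneFrame()
{
    auto aiTask    = std::async(std::launch::async, UpdateAI);
    auto audioTask = std::async(std::launch::async, UpdateAudio);

    // Dependency chain: zero frames of pipeline lag, but these three
    // stages no longer overlap with each other.
    InputState   input   = ReadPlatformInput();
    PhysicsState physics = StepPhysics(input);
    Render(physics);

    aiTask.get();
    audioTask.get();
}
```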

What I'm thinking for a networked game:
n=0: Platform input is read, then sent to the server. The server also checks for incoming input before it starts running any of its threaded tasks. The input system handles the input and sends the results off to the physics system.
n=1: The physics system on the server calculates the new positions.
n=2: The network system on the server sends the new positions to the clients. The network system on the clients receives the new positions.
n=3: The render system renders the new positions.

There could also be network lag at frames 0 and 2 here, so the input lag would be variable. Is this acceptable, or do I need some kind of synchronization?
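
For the synchronization part, what I have in mind is something like stamping each input packet with the client tick it was sampled on, so the server applies it on a fixed tick instead of whenever it happens to arrive. Very rough sketch, made-up struct, no real socket code:

```cpp
#include <cstdint>
#include <map>

struct InputCommand
{
    std::uint32_t clientTick = 0;  // tick the input was sampled on
    std::uint32_t buttons    = 0;  // bitmask of pressed buttons
};

class ServerInputQueue
{
public:
    // Called by the network system whenever a packet arrives (any time, any order).
    void Receive(const InputCommand& cmd, std::uint32_t inputDelayTicks)
    {
        pending_[cmd.clientTick + inputDelayTicks] = cmd;
    }

    // Called by the input system at the start of server tick `tick`.
    bool PopForTick(std::uint32_t tick, InputCommand& out)
    {
        auto it = pending_.find(tick);
        if (it == pending_.end())
            return false;          // packet late or lost: reuse the last input, or wait
        out = it->second;
        pending_.erase(it);
        return true;
    }

private:
    std::map<std::uint32_t, InputCommand> pending_;
};
```

That way the variable network lag gets absorbed into a fixed input delay instead of showing up as jitter.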

Now, with this I could do some of the same dependency tricks to reduce it. I could get the physics update into n=0, and maybe even make the render system dependent on the network system, so the incoming positions are applied before the rendering task starts. That reduces it to either:
n=0: input
n=2: render
or:
n=0: input
n=1: render

Plus the network lag, of course...

I guess I want as little lag as possible, but since I'd be reducing concurrency by doing this, what's acceptable?

I got a lot of answers just by writing this post, but I still need some more opinions on this... Hopefully it's readable and clear enough :)
Keep in mind that there can be up to several frames of lag between when you hand off your rendered scene to the video drivers and when the player actually "sees" the image.

Also, bear in mind that you cannot escape network latency by hiding it. It is generally measured in dozens of milliseconds - the roundtrip time of even the speediest cross-internet trip is going to be very noticeable. (See also: why "streaming gaming" services have generally failed to catch on.) You need to anticipate network latency and deal with it in a totally different way than just hoping the player can't feel it.

Now - with those caveats out of the way: in my experience, you can generally burn about 30-50ms (depending on the player) before input latency "feels" bad. At 30Hz, that means you can afford one frame of latency or maybe two if you're not expecting twitch reflexes from your gamers. At 60Hz of course you're in a different boat, and can afford a lot more.



Here's a great experiment you really should try: build a simple game like a Snake skeleton or something, just implement the controls and rudimentary rendering and that's it. Put in a configurable delay buffer between when your input is read and when it affects the game simulation. Slowly dial up the delay until you can perceive it, and note how long you're buffering your input. Now cut that back by about 25% and you have your target :-)
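
The delay buffer itself is only a few lines; something like this (made-up names, and it assumes your game already runs on a fixed tick):

```cpp
// Inputs go in every tick and come back out `delayTicks` ticks later, so
// you can dial the artificial latency up and down while you play.
#include <cstddef>
#include <deque>

struct Input { int dx = 0, dy = 0; };

class InputDelayBuffer
{
public:
    explicit InputDelayBuffer(std::size_t delayTicks) : delayTicks_(delayTicks) {}

    // Push the freshly read input, get back the input from delayTicks ago.
    Input Delay(const Input& fresh)
    {
        buffered_.push_back(fresh);
        while (buffered_.size() > delayTicks_ + 1)  // drop surplus if delay was lowered
            buffered_.pop_front();
        if (buffered_.size() <= delayTicks_)
            return Input{};                          // not enough history yet: neutral input
        Input delayed = buffered_.front();
        buffered_.pop_front();
        return delayed;
    }

    void SetDelay(std::size_t delayTicks) { delayTicks_ = delayTicks; }

private:
    std::size_t       delayTicks_;
    std::deque<Input> buffered_;
};

// In the game loop, at e.g. 60 ticks per second, delayTicks = 3 is roughly 50ms:
//   Input applied = delayBuffer.Delay(ReadInput());
//   simulation.Step(applied);
```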

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Acceptable is very relative.

I've seen games that can go as high as 200ms before it even becomes noticeable. I've also played games that are only acceptable when played on a LAN.



Slowly dial up the delay until you can perceive it, and note how long you're buffering your input. Now cut that back by about 25% and you have your target :-)
I like that.

You should be able to come up with some coarse numbers before that. For example, if your physics runs at 10 updates per second, each update takes 100ms, so one to two ticks of buffering puts you in the 100-200ms range. If it only updates 4x per second, you're looking at something in the 250-500ms range.
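
Just plugging numbers in (back-of-the-envelope, assuming roughly one to two simulation ticks between reading input and simulating its result):

```cpp
// The simulation can't respond faster than its own tick, so the update
// rate sets a floor on input latency.
#include <cstdio>

int main()
{
    const double updatesPerSecond[] = {60.0, 10.0, 4.0};
    for (double hz : updatesPerSecond)
    {
        double tickMs = 1000.0 / hz;
        std::printf("%5.1f Hz: tick %6.1f ms -> latency floor ~%.0f-%.0f ms\n",
                    hz, tickMs, tickMs, 2.0 * tickMs);
    }
}
```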

Keep in mind that there can be up to several frames of lag between when you hand off your rendered scene to the video drivers and when the player actually "sees" the image.


Do you mean the time it takes to swap the buffers?


Also, bear in mind that you cannot escape network latency by hiding it. It is generally measured in dozens of milliseconds - the roundtrip time of even the speediest cross-internet trip is going to be very noticeable. (See also: why "streaming gaming" services have generally failed to catch on.) You need to anticipate network latency and deal with it in a totally different way than just hoping the player can't feel it.


I guess I should run the updates on both the clients and the server instead, and then send out delta packets to control the clients...
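
Roughly what I mean by delta packets (made-up types, nothing like real serialization code): both sides step the simulation, and the server only sends the entities whose state changed since the state the client last acknowledged.

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct EntityState { float x = 0, y = 0; std::uint32_t version = 0; };

struct DeltaPacket
{
    std::uint32_t serverTick = 0;
    std::vector<std::pair<std::uint32_t, EntityState>> changed;  // entityId -> new state
};

DeltaPacket BuildDelta(const std::unordered_map<std::uint32_t, EntityState>& current,
                       const std::unordered_map<std::uint32_t, EntityState>& lastAcked,
                       std::uint32_t serverTick)
{
    DeltaPacket packet;
    packet.serverTick = serverTick;
    for (const auto& [id, state] : current)
    {
        auto it = lastAcked.find(id);
        if (it == lastAcked.end() || it->second.version != state.version)
            packet.changed.emplace_back(id, state);   // new or modified entity
    }
    return packet;
}

// On the client: apply the delta on top of the locally simulated state, then
// re-simulate forward from serverTick to the current client tick.
```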


Now - with those caveats out of the way: in my experience, you can generally burn about 30-50ms (depending on the player) before input latency "feels" bad. At 30Hz, that means you can afford one frame of latency or maybe two if you're not expecting twitch reflexes from your gamers. At 60Hz of course you're in a different boat, and can afford a lot more.

Here's a great experiment you really should try: build a simple game like a Snake skeleton or something, just implement the controls and rudimentary rendering and that's it. Put in a configurable delay buffer between when your input is read and when it affects the game simulation. Slowly dial up the delay until you can perceive it, and note how long you're buffering your input. Now cut that back by about 25% and you have your target :-)


That's an interesting idea... I'll see if I can find time to do that.

I guess I could also run some of the update threads at a different Hz...
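
For reference, the usual fixed-timestep pattern for running the simulation at its own rate looks something like this (simplified and single-threaded here, made-up function names):

```cpp
// The simulation steps at a fixed rate no matter how long the rest of
// the frame takes; rendering just happens as often as the loop allows.
#include <chrono>

void StepSimulation(double /*dtSeconds*/) { /* physics, game logic ... */ }
void RenderFrame()                        { /* draw the latest state ... */ }

void GameLoop()
{
    using clock = std::chrono::steady_clock;
    const double simDt = 1.0 / 30.0;      // simulation at 30 Hz
    double accumulator = 0.0;
    auto previous = clock::now();

    for (;;)
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= simDt)      // catch up in fixed-size steps
        {
            StepSimulation(simDt);
            accumulator -= simDt;
        }
        RenderFrame();
    }
}
```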

