Help understanding the concept of Input Buffering

4 comments, last by GBN 4 years, 7 months ago

Hello everyone,

I am fairly new to programming and gamedev, having only just started at the beginning of this year. My goal is to make a multiplayer physics platformer.

One of the videos I have watched over and over is the GDC talk "Networking for Physics Programmers" by Glenn Fiedler. Even after doing so, I just cannot seem to fully grasp the concept and result of input buffering.

When a client sends input to the server for the server to run the physics simulation, if we implement a buffer on the server of, let's say, 100 ms, won't that add to the delay between what we see on the client and on the server? If it takes 50 ms for the server to receive input from the client, and the server then buffers it for another 100 ms, won't that push what the server renders and what the client renders even further apart?

I feel like I am misunderstanding something, because around 6 minutes in Glenn shows his simulation after implementing input buffering, and the outcome is the opposite of what I expected: it is almost identical on both ends, even over UDP with a 250 ms round trip and 25% packet loss. I would think that input buffering only solves jitter, and that the delay between the client and server simulations is resolved by client-side prediction. Am I wrong on this?

 

Thank you in advance!


He explains it in more detail on his blog (which is down; you can still read it via the Web Archive).

Your concerns about latency are correct. There's something more to say about that: you can pretend the player is at a different location.

If you download the YouTube video and watch it frame by frame, you'll see that his deterministic demo with TCP at 100 ms round trip and 1% packet loss is 100 ms out of phase! It's just the big cube that is at the right location!

The big cube is being rendered at a location it's not supposed to be at; in the simulation it is actually lagging behind too. The rest of the cubes are lagging behind as well, but the player's position is being predicted.

 

If it's an action game, you need to do the same for other players; otherwise your players will shoot at a location where the enemy no longer is. Deterministic lockstep is a terrible idea for action games, though. It's better suited to slower games, like strategy games with many hundreds of units.

 

Quote

I feel like I am misunderstanding something, because around 6 minutes in Glenn shows his simulation after implementing input buffering, and the outcome is the opposite of what I expected: it is almost identical on both ends, even over UDP with a 250 ms round trip and 25% packet loss. I would think that input buffering only solves jitter, and that the delay between the client and server simulations is resolved by client-side prediction. Am I wrong on this?

He's using a UDP implementation in that part of the video. His implementation is very tolerant of 25% packet loss because every packet contains a sequence of inputs from past frames that haven't been acknowledged yet. So even if 1 in every 4 packets doesn't arrive, the other 3 packets carry redundant copies of what the lost packet was carrying (and if the server is fast enough, it can catch up by simulating two frames in one frame, but beware that this introduces jitter if not smoothed out well).
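A rough sketch of that redundancy idea on the sending side (the type names and packet layout here are my own assumptions, not the code from the talk): every outgoing packet repeats all inputs the server hasn't acknowledged yet, so losing one packet rarely loses an input.

```cpp
// Sketch of redundant input packets, assuming a fixed-tick simulation.
// Type names (Input, Packet, InputSender) are illustrative only.
#include <cstdint>
#include <deque>
#include <vector>

struct Input {
    uint32_t tick;      // simulation tick this input belongs to
    uint8_t  buttons;   // bitmask of pressed buttons
};

struct Packet {
    std::vector<Input> inputs;  // all inputs the server hasn't acked yet
};

class InputSender {
public:
    void RecordLocalInput(const Input& input) {
        pending_.push_back(input);          // keep until acknowledged
    }

    // Build the next outgoing packet: it repeats every un-acked input,
    // so 3 out of 4 packets arriving is enough to reconstruct the stream.
    Packet BuildPacket() const {
        Packet p;
        p.inputs.assign(pending_.begin(), pending_.end());
        return p;
    }

    // Server tells us the newest tick it has received; drop older inputs.
    void OnAck(uint32_t ackedTick) {
        while (!pending_.empty() && pending_.front().tick <= ackedTick)
            pending_.pop_front();
    }

private:
    std::deque<Input> pending_;
};
```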

As for the latency, again he is using client-side prediction to display the big cube (remember, what you see is not reality: in the client-side simulation the cube is at a different location, but it is rendered at a predicted position to hide the latency). If you watch it frame by frame, you'll see the small cubes are out of sync by a lot.
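To make "rendered in a predicted position" concrete, here is a minimal client-side prediction sketch (my own simplified version, not anything from the talk; State, Input and Simulate are placeholder names and the simulation is assumed deterministic): the client starts from the last state the server confirmed, re-applies its own inputs the server hasn't processed yet, and renders that.

```cpp
// Minimal client-side prediction sketch; placeholder types, not Glenn's code.
#include <cstdint>
#include <deque>

struct State { float x = 0.0f, vx = 0.0f; };
struct Input { uint32_t tick; float move; };

// One fixed simulation step; assumed deterministic.
State Simulate(State s, const Input& in, float dt) {
    s.vx = in.move * 5.0f;
    s.x += s.vx * dt;
    return s;
}

// Predict the local player's render state: take the newest state the
// server confirmed, then re-apply every input it hasn't processed yet.
State PredictRenderState(State serverState,
                         const std::deque<Input>& unackedInputs,
                         float dt) {
    State predicted = serverState;
    for (const Input& in : unackedInputs)
        predicted = Simulate(predicted, in, dt);
    return predicted;   // hides the round-trip latency for the local cube
}
```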

Hey Matias,

Thank you so much for clearing that up and for the tip on that article in the web archive.

I did not notice the inconsistencies of the smaller cubes, only the consistency of the larger one. There was no reference to client-side prediction, so I had assumed the only thing that had changed from his previous simulation was the input buffer.

 

35 minutes ago, Matias Goldberg said:

If it's an action game, you need to do the same for other players; otherwise your players will shoot at a location where the enemy no longer is. Deterministic lockstep is a terrible idea for action games, though. It's better suited to slower games, like strategy games with many hundreds of units.

 

It is an action game taking inspiration from Stick Fight, and right now I have snapshot interpolation implemented (unoptimized). I am trying to learn and work through my problems as they come along. It seemed that the input buffer would be needed regardless, and probably a snapshot buffer too, for the same reason?

 

49 minutes ago, Matias Goldberg said:

(and if the server is fast enough, it can catch up by simulating two frames in one frame, but beware that this introduces jitter if not smoothed out well).

 

So the server would have to process twice as fast as the information it receives?

 

44 minutes ago, Matias Goldberg said:

As for the latency, again he is using client-side prediction to display the big cube (remember, what you see is not reality: in the client-side simulation the cube is at a different location, but it is rendered at a predicted position to hide the latency). If you watch it frame by frame, you'll see the small cubes are out of sync by a lot.

 

Now that I have proper direction, I will do more research on this rather than bombard you with more questions. My initial goal was to get both simulations matching as closely as possible, but I was looking at the wrong solution for that. Time to learn about extrapolation, it seems!

 

Thank you again for the response, very helpful.

 

2 hours ago, GBN said:

It is an action game taking inspiration from Stick Fight, and right now I have snapshot interpolation implemented (unoptimized). I am trying to learn and work through my problems as they come along. It seemed that the input buffer would be needed regardless, and probably a snapshot buffer too, for the same reason?

I honestly can't remember right now exactly the reasons for using an input buffer. However, the overall concept always boils down to 'dampening': smoothing out 'bumpiness' or 'jerkiness' by introducing latency, so that we don't starve for inputs or receive them all at once.

This concept applies to rendering, input, networking, etc.

Think of trying to drink from a malfunctioning water tap: it sends huge bursts, then nothing, then huge bursts again. You can't drink from that!

You attach a bottle to the tap, wait to let it fill a little, then make a tiny hole in the bottle from which you can drink a constant, steady stream.
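In code, the 'bottle' is just a small buffer that holds incoming inputs for a fixed delay before the simulation drinks from it. This is a hypothetical sketch (the roughly 100 ms delay and the type names are my own assumptions):

```cpp
// Sketch of a server-side input (jitter) buffer, assuming a fixed tick rate.
// The delay figure and type names are assumptions for illustration.
#include <cstdint>
#include <map>
#include <optional>

struct Input { uint32_t tick; uint8_t buttons; };

class InputBuffer {
public:
    explicit InputBuffer(uint32_t delayTicks) : delayTicks_(delayTicks) {}

    // Network side: inputs arrive bursty and out of order ("the tap").
    void OnInputReceived(const Input& in) {
        buffer_[in.tick] = in;              // keyed by tick; duplicates overwrite
    }

    // Simulation side: only consume inputs that have aged for delayTicks_,
    // which smooths bursts into a steady per-tick stream ("the bottle").
    std::optional<Input> PopForTick(uint32_t currentTick) {
        if (currentTick < delayTicks_) return std::nullopt;
        const uint32_t wantedTick = currentTick - delayTicks_;
        auto it = buffer_.find(wantedTick);
        if (it == buffer_.end()) return std::nullopt;   // starved: input late/lost
        Input in = it->second;
        buffer_.erase(buffer_.begin(), ++it);           // discard consumed and older
        return in;
    }

private:
    uint32_t delayTicks_;                   // e.g. 6 ticks, about 100 ms at 60 Hz
    std::map<uint32_t, Input> buffer_;
};
```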

2 hours ago, GBN said:

So the server would have to process twice as fast as the information it receives?

Only if you decide to solve that problem in the way I described (which only applies to deterministic simulations). If the server can't catch up, then players may experience 'pauses' (jitter) or 'slowdowns' (e.g. running a video at 0.75x speed).
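For what it's worth, here is a rough sketch of that catch-up approach (purely illustrative, assuming a deterministic fixed-step simulation; the names and the cap of two steps per frame are my assumptions): when the server falls behind, it runs more than one fixed step in a single frame.

```cpp
// Sketch of server catch-up: run extra fixed steps when behind.
// FIXED_DT, MAX_STEPS_PER_FRAME and SimulateOneTick are assumptions.
#include <cstdint>

constexpr float    FIXED_DT            = 1.0f / 60.0f;
constexpr uint32_t MAX_STEPS_PER_FRAME = 2;   // "two frames in one frame"

// Placeholder for your deterministic physics step.
void SimulateOneTick(float dt) { (void)dt; }

// Called once per server frame with the real elapsed time.
void ServerUpdate(float frameDt, float& accumulator) {
    accumulator += frameDt;
    uint32_t steps = 0;
    // If we fell behind (e.g. while waiting on late inputs), run extra fixed
    // steps to catch up, capped so a long stall doesn't cause a big burst
    // (an uncapped burst is what shows up as jitter if not smoothed out).
    while (accumulator >= FIXED_DT && steps < MAX_STEPS_PER_FRAME) {
        SimulateOneTick(FIXED_DT);
        accumulator -= FIXED_DT;
        ++steps;
    }
}
```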

Note that 25% packet drop doesn't necessarily mean a 0.75x playback rate, unless the server is super slow (i.e. it can barely hit the target rate) and you're sending your packets very far apart. If you send more packets, you compensate for the packet drop.

This is a tricky thing: TCP assumes that if packets are being dropped, it's because servers are overwhelmed and cannot process them, hence increasing the packet sending rate makes things worse (thus you should send less). But in reality packets can also get dropped because routes disappear, Wi-Fi signals are noisy, or cable modem/DSL lines have noise in them.

Also, TCP delivers everything in order: if a few packets are lost, TCP stops the whole thing and waits until it receives the missing packets. It's like saying "everyone quiet! I want to hear it slowly, say it again." Glenn's method never stops, because it includes redundancy: packet C includes the info from packets A and B, so if packet C arrives, everything can proceed. Eventually the client gets notified that A, B and C have been acknowledged, so it stops resending them in packet E; and you only stop the whole thing if a lot of packets have gone unacknowledged, which means either the other end is dead or the connection is extremely poor.
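And a hypothetical sketch of the receiving end of that scheme (again, illustrative names, not the talk's code): duplicate inputs from redundant packets are simply ignored, and the ack sent back is the newest tick up to which there is a gap-free run of inputs.

```cpp
// Hypothetical receiving side of the redundant-input scheme.
#include <cstdint>
#include <map>
#include <vector>

struct Input { uint32_t tick; uint8_t buttons; };

class InputReceiver {
public:
    // Ticks are assumed to start at 1, so an ack of 0 means "nothing yet".
    explicit InputReceiver(uint32_t firstTick = 1) : nextExpectedTick_(firstTick) {}

    // A packet repeats every not-yet-acked input, so most of its contents
    // duplicate earlier packets; unseen ticks are stored, repeats ignored.
    void OnPacketReceived(const std::vector<Input>& inputs) {
        for (const Input& in : inputs)
            if (in.tick >= nextExpectedTick_)
                pending_.emplace(in.tick, in);     // duplicate keys are ignored
        // Advance the ack point across any gap-free run of inputs we now hold.
        while (pending_.count(nextExpectedTick_) != 0)
            ++nextExpectedTick_;
    }

    // Tick to send back as the ack; the sender can stop repeating inputs
    // with tick <= this value.
    uint32_t AckTick() const { return nextExpectedTick_ - 1; }

private:
    uint32_t nextExpectedTick_;
    std::map<uint32_t, Input> pending_;   // inputs waiting to be simulated
};
```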

20 hours ago, Matias Goldberg said:

Think of trying to drink from a malfunctioning water tap: it sends huge bursts, then nothing, then huge bursts again. You can't drink from that!

You attach a bottle to the tap, wait to let it fill a little, then make a tiny hole in the bottle from which you can drink a constant, steady stream.

That was a PERFECT analogy for understanding this better.

 

20 hours ago, Matias Goldberg said:

Only if you decide to solve that problem in the way I described (which only applies to deterministic simulations). If the server can't catch up, then players may experience 'pauses' (jitter) or 'slowdowns' (e.g. running a video at 0.75x speed).

That makes sense. So if the server falls behind and thus gives players nothing recent to correct their positions with, I imagine that is where jitter would occur, even with interpolation.

 

Thanks again for your responses.

