Synchronizing server and client time

33 comments, last by FredrikHolmstr 12 years, 8 months ago

My question regards this, though: assume that my round-trip time is 100 ms, but of that 100 ms about 75 ms is consumed from the client to the server (which seems logical, as most home connections have slower/crappier upload than download)


What you care about is ordering all events in a strictly increasing sequence, not the specific sync to the server. It doesn't matter how the latency is distributed.

Also, your assumption that upload bandwidth matters for client latency isn't really true. Let's do some math, assuming a cable connection with really good downstream and really constrained upstream:

User download == 1 MB / sec
User upload == 20 kB / sec
Client command packet to server == 300 bytes
Server update packet to client == 3000 bytes

So, one packet takes 300 bytes / 20 kB/sec == 15 milliseconds to transmit up, for the first hop (from the client). Note that, if the command packet is smaller, this number changes significantly!
One packet down takes 3000 bytes / 1 MB/sec == 3 milliseconds to transmit down, for the last hop (to the client).
However, as soon as you're outside the client connection, you're on routed internet infrastructure, where upload and download throughputs are usually symmetric, always aggregated among many consumers, and thus a lot faster. At that point, it's the signal propagation speed in the cable (about 2/3 the speed of light) and the routing latency that matter, not the client's bandwidth limitations.
Thus, any amount of your latency greater than (15+3) == 18 milliseconds in this case will be evenly divided between "up" and "back," so the up/down asymmetry is at most (15-3) == 12 milliseconds, and a naive RTT/2 one-way estimate is off by at most half that, 6 milliseconds.
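The arithmetic above can be checked directly; a minimal sketch, using the example packet sizes and link rates from the post (all values are the post's, converted to bytes/sec):

```python
# Last-hop serialization delay, using the example numbers from the post.
UP_RATE = 20_000        # client upload: 20 kB/sec, in bytes/sec
DOWN_RATE = 1_000_000   # client download: 1 MB/sec, in bytes/sec
CMD_SIZE = 300          # client -> server command packet, bytes
UPDATE_SIZE = 3000      # server -> client update packet, bytes

# Time to push each packet through the constrained client link, in ms.
up_ms = CMD_SIZE * 1000 / UP_RATE
down_ms = UPDATE_SIZE * 1000 / DOWN_RATE

# Worst-case up/down imbalance contributed by the last hop.
asymmetry_ms = up_ms - down_ms

print(up_ms, down_ms, asymmetry_ms)  # 15.0 3.0 12.0
```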

But, as I said, as long as you have a strict ordering of events, all of that doesn't matter much :-)
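The "strict ordering of events" idea can be sketched as a small buffer keyed on a server-assigned sequence number. This is a hypothetical illustration (class and method names are mine, not from the thread): the server stamps each event with an increasing sequence number, and every client applies events strictly in that order, buffering any that arrive early.

```python
import heapq

class OrderedEventQueue:
    """Apply server events in strict sequence order, whatever order they arrive in."""

    def __init__(self):
        self._next_seq = 0   # next sequence number we are allowed to apply
        self._pending = []   # min-heap of (seq, event) that arrived out of order

    def receive(self, seq, event):
        heapq.heappush(self._pending, (seq, event))

    def drain(self):
        # Yield events whose turn has come, in strictly increasing sequence.
        while self._pending and self._pending[0][0] == self._next_seq:
            _, event = heapq.heappop(self._pending)
            self._next_seq += 1
            yield event

q = OrderedEventQueue()
q.receive(1, "fire")   # arrives before its predecessor
q.receive(0, "move")
print(list(q.drain()))  # ['move', 'fire']
```

Because every machine drains the queue the same way, they all mutate the world in the same order, regardless of how latency was distributed on the way.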
enum Bool { True, False, FileNotFound };


Thanks again! So you're saying that getting a perfectly correct sync is not terribly important; as long as it's "somewhat" in sync, say within -50 to +50 ms of the server, it's good enough? And yes, you are right about the download/upload :)
I've also read the two papers/articles on the Half-Life engine's lag compensation again, and noticed this at the end:


[quote]It is assumed in this paper that the client clock is directly synchronized to the server clock modulo the latency of the connection. In other words, the server sends the client, in each update, the value of the server's clock and the client adopts that value as its clock.[/quote]

Quote from: http://developer.val...mization#fnote6

I interpret this as meaning that Half-Life doesn't "add" roundtrip/2 to the time received from the server; it just lets the client run the same clock as the server, somewhat in the past (by the server->client transit time), and that's it. Honestly this feels like a much more clear-cut option (especially when implementing the type of lag compensation explained in the same article). Or am I missing something, again? :)

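That adopt-the-server-clock scheme can be sketched in a few lines. This is a hypothetical illustration of the quoted idea (class and parameter names are mine): on each update the client simply takes the server's timestamp as its own clock, with no RTT/2 correction, and advances it by locally elapsed time between updates.

```python
class ClientClock:
    """Run the server's clock locally, offset into the past by one-way transit time."""

    def __init__(self):
        self._server_time = 0.0   # last server timestamp we adopted, seconds
        self._adopted_at = 0.0    # local wall-clock moment of adoption, seconds

    def on_server_update(self, server_time, local_now):
        # No roundtrip/2 added: just adopt the server's clock value as-is.
        self._server_time = server_time
        self._adopted_at = local_now

    def now(self, local_now):
        # Advance the adopted server time by the locally elapsed time.
        return self._server_time + (local_now - self._adopted_at)

clock = ClientClock()
clock.on_server_update(server_time=100.0, local_now=5.0)
print(clock.now(local_now=5.25))  # 100.25
```

The result is a client clock that tracks the server's, lagging it by the (unmeasured) server-to-client transit time, which is exactly what the quoted lag-compensation scheme assumes.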


So, again: the goal is to make everyone affect the world (execute events) in the same order. As long as that happens, the particular values of the clocks may not matter at all! It's entirely up to your specific networking and simulation model exactly what you put where.
For example, I've worked on a system where user-controlled objects are run at "server clock plus upstream delay" and remote-controlled objects are run at "server clock minus downstream delay," which ends up with things like the avatar's upper body running ahead of time (because you can aim a weapon) but the lower body running behind time (because it's slaved to a vehicle driven by someone else).
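That per-object timing rule can be written down directly. A hypothetical sketch (function and parameter names are mine, with times in integer milliseconds): locally controlled objects simulate ahead of the server clock so their commands arrive on time, while remote-controlled objects simulate behind it so the client only shows state it has actually received.

```python
def simulation_time_ms(server_time_ms, upstream_delay_ms, downstream_delay_ms,
                       locally_controlled):
    """Pick the simulation clock for one object, per the scheme described above."""
    if locally_controlled:
        # Run ahead so our commands reach the server "on time".
        return server_time_ms + upstream_delay_ms
    # Run behind so we only render remote state we have actually received.
    return server_time_ms - downstream_delay_ms

# Example: 75 ms up, 25 ms down, server clock at t = 10000 ms.
print(simulation_time_ms(10000, 75, 25, True))   # 10075 (our own avatar)
print(simulation_time_ms(10000, 75, 25, False))  # 9975 (someone else's vehicle)
```

With this split, the same entity can straddle both clocks, which is how the aiming upper body ends up ahead of time while the vehicle-slaved lower body runs behind it.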
So, if at this point, you have a system that works, then I suggest you keep it that way until there's evidence that you need to change it :-)
[quote name='hplus0603']So, if at this point, you have a system that works, then I suggest you keep it that way until there's evidence that you need to change it :-)[/quote]

Thanks for all your help; this was probably the best advice I got in this thread. I've got something that works, and I need to stop over-thinking it :)

