Is it possible to apply local lag to CS architecture?


Hi,

I have recently been researching the local lag technique, which to my understanding means that by delaying the execution of input events (locally), using timestamps set into the future by the amount of local lag, it is possible to achieve better fairness and consistency in multiplayer games. The reason is that local lag gives each site enough time to receive input events, align timestamps, and execute them in the correct order and at the correct times.

However, while this all sounds tempting, I am having trouble getting my head around the fact that this technique requires each site to have enough logic to model the execution of events. Therefore, I think that while this is not a problem with P2P, it is with CS, because the server is the main decision point and holds the relevant logic for making the decisions.

So the question is: how would I apply local lag to a CS architecture, and has it been done before? I recall seeing something about local lag being used between two servers, so it makes me wonder. I also have a feeling that this could/should be possible: local lag could be effective if it can be used to buy more time for returning correct results from the server and to mitigate inconsistencies.

Thanks!

-

There are many good open source tools out there to simulate lag and packet loss. Right now clumsy is one of the better tools for Windows.

In larger QA groups they will route everything through a PC that serves as a network hub. It takes a bit of configuration, but you can make it so the PC serving as a hub can simulate various conditions. You can introduce high packet loss, data corruption, assorted real-world bandwidths, and more. Flip a switch and one of the machines now communicates as though it were on 56K dialup, with either a clean or dirty line. Press another button and suddenly someone gets 40% packet loss for 10 seconds, maybe to simulate a wifi network and a user shouting "Turn off the microwave, it ruins the wifi signal!".
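If you want something lighter-weight than routing traffic through a hub PC, the same conditions can be faked inside the game process itself. Here is a minimal C++ sketch of that idea, assuming a realSend callback that does the actual transmission; SimulatedLink and all other names are illustrative inventions, not part of clumsy or any real tool.

#include <chrono>
#include <cstdint>
#include <functional>
#include <queue>
#include <random>
#include <utility>
#include <vector>

using Clock = std::chrono::steady_clock;

struct PendingPacket {
    Clock::time_point deliverAt;
    std::vector<uint8_t> data;
};

class SimulatedLink {
public:
    SimulatedLink(std::function<void(const std::vector<uint8_t>&)> realSend,
                  int extraDelayMs, double lossRate)
        : realSend_(std::move(realSend)),
          extraDelay_(std::chrono::milliseconds(extraDelayMs)),
          lossRate_(lossRate),
          rng_(std::random_device{}()) {}

    // Call this instead of the real send; it drops or delays packets.
    void send(std::vector<uint8_t> data) {
        if (std::uniform_real_distribution<double>(0.0, 1.0)(rng_) < lossRate_)
            return;  // simulated packet loss: silently drop
        queue_.push({Clock::now() + extraDelay_, std::move(data)});
    }

    // Call once per frame; forwards packets whose delay has elapsed.
    void update() {
        while (!queue_.empty() && queue_.front().deliverAt <= Clock::now()) {
            realSend_(queue_.front().data);
            queue_.pop();
        }
    }

private:
    std::function<void(const std::vector<uint8_t>&)> realSend_;
    Clock::duration extraDelay_;
    double lossRate_;
    std::mt19937 rng_;
    std::queue<PendingPacket> queue_;
};

Flipping a machine to "56K dialup with a dirty line" then just means constructing the link with a large delay and a non-zero loss rate.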


As for trying to compensate for lag by setting your time stamp in the future, I don't see how that gains anything. The lag still exists.


As for trying to compensate for lag by setting your time stamp in the future, I don't see how that gains anything. The lag still exists.

As I understand it, it's not designed to compensate for lag - it's designed to apply a minimum lag to all players, even those who would otherwise not have lag.

Same idea as Blizzard limiting the original StarCraft to 640x480 for everyone, to prevent unfair advantage from players with higher resolutions seeing more of the map.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


As for trying to compensate for lag by setting your time stamp in the future, I don't see how that gains anything. The lag still exists.

As I understand it, it's not designed to compensate for lag - it's designed to apply a minimum lag to all players, even those who would otherwise not have lag.

Same idea as Blizzard limiting the original StarCraft to 640x480 for everyone, to prevent unfair advantage from players with higher resolutions seeing more of the map.

Thanks for the replies.

Yes. I actually needed to double-check frob's reply, because it answered a different question from the one I was trying to get answered.

I am not sure whether I was clear enough, but the local lag I am referring to is programmed into your game code to delay the processing of your events into the future, so that e.g. players with 50 ms and 100 ms latency can have the same view of the world at the same time. So the flow is something like this: "Press input -> delay local processing by the amount of local lag (50 ms) -> send/receive packets over the network -> align timestamps and execute all actions at the same time."
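To make that flow concrete, here is a minimal C++ sketch of the client-side part, assuming a fixed 50 ms local lag and clocks that are already synchronized across sites; LocalLagQueue, InputEvent, and the other names are mine, not from any particular engine.

#include <chrono>
#include <cstdint>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct InputEvent {
    Clock::time_point executeAt;  // stamped now + localLag at the source
    int playerId;
    uint8_t action;
    bool operator>(const InputEvent& o) const { return executeAt > o.executeAt; }
};

class LocalLagQueue {
public:
    explicit LocalLagQueue(std::chrono::milliseconds localLag) : lag_(localLag) {}

    // Local input: stamp it into the future, queue it, and hand the same
    // timestamped event back to the caller to send over the network,
    // so all sites agree on executeAt.
    InputEvent onLocalInput(int playerId, uint8_t action) {
        InputEvent e{Clock::now() + lag_, playerId, action};
        pending_.push(e);
        return e;
    }

    // Remote input arrives already timestamped; just queue it.
    void onRemoteInput(const InputEvent& e) { pending_.push(e); }

    // Each tick, execute every event whose timestamp has been reached,
    // in timestamp order, so all sites apply them identically.
    template <typename Apply>
    void tick(Apply apply) {
        while (!pending_.empty() && pending_.top().executeAt <= Clock::now()) {
            apply(pending_.top());
            pending_.pop();
        }
    }

private:
    std::chrono::milliseconds lag_;
    std::priority_queue<InputEvent, std::vector<InputEvent>, std::greater<>> pending_;
};

Local and remote events end up in the same timestamp-ordered queue, which is exactly what gives every site the same execution order.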

The value of the local lag could be the RTT of the most lagged player, or some fixed or adaptive value based on how much local lag a given game tolerates. This is the part I wonder about: can it be transferred from the P2P world to CS, and if so, do I need to apply local lag at the clients, at the server, or both, and what are the caveats? Hopefully I was clear enough this time.

-

Sorry about the first reply, I did misread it.

It looks like the algorithm tries to estimate the worst case round trip for all clients, then delays the display for all clients based on that worst case. It eliminates the advantage of a fast connection, dropping everyone down to the worst case connection.

In that case, since you are moving to a client/server model, which is usually a star topology, the server can directly measure the round-trip time to each client.

So the server would say:

A: 32ms round trip
B: 20ms round trip
C: 92ms round trip

Giving an approximate one-way delay of 16, 10, and 46 milliseconds.

From there, it can do some subtraction from the 46 milliseconds to know how long to delay the work:

A: 46-16=30ms extra delay
B: 46-10=36ms extra delay
C: 46-46=0ms extra delay.
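As a hedged sketch of that arithmetic (function and variable names are illustrative), the server-side computation could look like this in C++:

#include <algorithm>
#include <map>
#include <string>

// Measured RTT per client -> extra delay per client: halve each RTT for a
// one-way estimate, then pad everyone up to the slowest client.
std::map<std::string, int> computeExtraDelaysMs(
        const std::map<std::string, int>& rttMs) {
    int worstOneWay = 0;
    for (const auto& kv : rttMs)
        worstOneWay = std::max(worstOneWay, kv.second / 2);

    std::map<std::string, int> extraDelay;
    for (const auto& kv : rttMs)
        extraDelay[kv.first] = worstOneWay - kv.second / 2;  // e.g. A: 46-16 = 30
    return extraDelay;
}

// computeExtraDelaysMs({{"A", 32}, {"B", 20}, {"C", 92}})
// -> A: 30 ms, B: 36 ms, C: 0 ms, matching the numbers above.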

For the choice between client and server, the only sane option is to put the extra delay on the server side. If the server delays sending the data, it should all arrive at approximately the same time, removing the exploit possibilities.

I can easily imagine griefers turning it into a "race to the bottom".

If the delay is continuously adjusted during the game, such a player could do nasty things. Join the game at a natural rate, then turn on a tool like clumsy on their own machine for outbound communication, and suddenly the responsive game becomes very unresponsive for all players. With a bit of care, that kind of subtle, annoying attack could look like regular Internet noise, and a claim that "someone else turned on Netflix" would make it hard to pin down.

Even worse, if the delay were done on the client rather than the server, an attacker who also inflated his ping time could receive updates much earlier than the other clients, gaining an even larger advantage from earlier updates while also impairing the opponents with unresponsive, artificial delays.
Valve's Source engine / Counter-Strike, which is about the most popular client-server PC FPS, implements this. There's a full description of their algorithm on their wiki somewhere (sorry, I don't have the link handy).
IIRC, every client is running about 100 ms in the past, even if their ping is < 100 ms. The server constantly rewinds and fast-forwards the game state to insert client-triggered events at the right points in time.

I think Battlefield implements it too, but they've not publicly described their algorithm.
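For what it's worth, here is a toy C++ sketch of that rewind-and-replay idea. It is my own illustration of the description above, not Valve's actual code, and WorldState/RewindBuffer are invented names.

#include <cstdint>
#include <deque>

struct WorldState {
    // positions, hitboxes, etc. would live here
    int64_t timeMs;
};

class RewindBuffer {
public:
    // Called every server tick: remember the snapshot and drop history
    // older than roughly one second.
    void record(const WorldState& s) {
        history_.push_back(s);
        while (!history_.empty() && s.timeMs - history_.front().timeMs > 1000)
            history_.pop_front();
    }

    // Find the snapshot closest to when the client actually acted; the
    // server would apply the event against this state, then re-simulate
    // forward to the present.
    const WorldState* stateAt(int64_t eventTimeMs) const {
        const WorldState* best = nullptr;
        for (const WorldState& s : history_) {
            if (!best || gap(s.timeMs, eventTimeMs) < gap(best->timeMs, eventTimeMs))
                best = &s;
        }
        return best;
    }

private:
    static int64_t gap(int64_t a, int64_t b) { return a > b ? a - b : b - a; }
    std::deque<WorldState> history_;
};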

frob,

Now we are talking, and given those facts, I think it makes some sense to delay at the server. However, if precise consistency is required, I believe a delay at the clients based on the timestamp is also necessary when a packet is received, because the second half of the RTT is still unknown (so that every client sees everything in sync). Of course, the clients already have the data, so it could be possible to "hack it visible". But those are just trade-offs that need to be made.

If the delay is continuously adjusted during the game, such a player could do nasty things. Join the game at a natural rate, then turn on a tool like clumsy on their own machine for outbound communication, and suddenly the responsive game becomes very unresponsive for all players. With a bit of care, that kind of subtle, annoying attack could look like regular Internet noise, and a claim that "someone else turned on Netflix" would make it hard to pin down.

And yes, you are right, this is quite possible. That's why specifying a fixed local-lag value, rather than one based on an estimate of the players' pings, could be wiser. And that local-lag value needs to be chosen based on the characteristics of the individual game (how much it can tolerate).

I will try to draw it out in a sequence diagram and see how it could work.

But even then, at least when using a CS architecture, I assume that local lag alone is not enough as a compensation technique, and some prediction or hints for the players are also needed, e.g. playing animations/sounds at the client while the replies are still pending at the server, etc. A rough sketch of what I mean is below.
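All names here are hypothetical: cosmetic feedback fires immediately on input, while the authoritative state change waits for the (locally lagged) server reply.

#include <cstdint>
#include <functional>
#include <utility>

struct Action { uint8_t id; };
struct Result { bool confirmed; /* authoritative outcome from the server */ };

class OptimisticFeedback {
public:
    OptimisticFeedback(std::function<void(const Action&)> playFeedback,
                       std::function<void(const Result&)> applyResult)
        : playFeedback_(std::move(playFeedback)),
          applyResult_(std::move(applyResult)) {}

    // Called on input: cosmetic-only effects (animation, sound) run
    // instantly, masking the local lag; no game state changes yet.
    void onInput(const Action& a) { playFeedback_(a); }

    // Called when the server reply arrives: commit or correct the state.
    void onServerReply(const Result& r) { applyResult_(r); }

private:
    std::function<void(const Action&)> playFeedback_;
    std::function<void(const Result&)> applyResult_;
};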

Valve's Source engine / Counter-Strike, which is about the most popular client-server PC FPS, implements this. There's a full description of their algorithm on their wiki somewhere (sorry, I don't have the link handy).
IIRC, every client is running about 100 ms in the past, even if their ping is < 100 ms. The server constantly rewinds and fast-forwards the game state to insert client-triggered events at the right points in time.

I think Battlefield implements it too, but they've not publicly described their algorithm.

Hodgman, thanks for the tip. But I have read their wiki and I can't find anything like that... are you sure you are not mixing it up with interpolation? :) They have an interp value of 100 ms, and the server uses lag compensation to "replay" events at the server as they were happening 100 ms ago, so bullets can be matched to targets and so on. Local lag is different, because it increases the latency of your key presses to give messages enough time to reach the other participants.

-

player could do nasty things. Join the game at a natural rate, then turn on a tool like clumsy on their own machine for outbound communication, and suddenly the responsive game becomes very unresponsive for all players
Though I think that a simple modification like delay = max(0, min(worst_onedir, 50) - your_onedir) would handle that quite well (the max(0, ...) keeps the delay from going negative for a player whose own latency already exceeds the cap). Go ahead and introduce an 80 or 100 ms delay on your line, but everybody else will only see 50. You could also ease the maximum in and out, to give the individual player even less leverage.

Maybe similar to how TCP does it, raising linearly and cutting down by half, or simply having the delay shrink and grow linearly but at different rates. For example, the max delay may grow by at most 5 ms per "tick" (up to an absolute maximum) but shrink by 10 ms.
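A hedged C++ sketch combining both suggestions (the 50 ms cap and the asymmetric grow/shrink rates); all constants and names are illustrative:

#include <algorithm>

struct DelayController {
    int capMs = 50;           // never delay anyone more than this
    int growPerTickMs = 5;    // slow to rise when someone's ping worsens
    int shrinkPerTickMs = 10; // faster to recover when it improves
    int currentMaxMs = 0;     // smoothed shared delay target

    // Called once per tick with the worst measured one-way latency.
    void update(int worstOneWayMs) {
        int target = std::min(worstOneWayMs, capMs);
        if (target > currentMaxMs)
            currentMaxMs = std::min(currentMaxMs + growPerTickMs, target);
        else
            currentMaxMs = std::max(currentMaxMs - shrinkPerTickMs, target);
    }

    // Per-client extra delay, clamped so it can't go negative when a
    // player's own latency already exceeds the cap.
    int extraDelayFor(int yourOneWayMs) const {
        return std::max(0, currentMaxMs - yourOneWayMs);
    }
};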

Okay,

I had time to play around with that idea. Below is the diagram I came up with; on paper it seems to work. The left-hand side of the image is the flow without local lag, and the right-hand side is the one I crafted. It is somewhat unrealistic because it lacks the game loop rate, server update rate, etc., but I don't know if those make any difference beyond adding more delay. While local lag is applied at the server (50 ms as a target maximum), the time t could be used to buffer inputs at the client side to maintain consistency and display them at the same time on all sides. But as was said, it can be hacked; I don't know if it's a big deal to gain a benefit of a few ms, it depends on the game, I guess. And I included a dashed arrow to represent that a packet might get lost and not arrive on time; in those cases the state still needs to be repaired.

[Image: local_lag_example.jpg]

-
