Question about sampling client input

7 comments, last by Bozebo 12 years, 11 months ago
I'm trying to sample client input (to be sent to a server) at about 20 fps (every 0.05 sec.), whereas my client simulation runs at 60 fps. This means that a lot of input samples (roughly two-thirds) are skipped when sending to the server. For example, I could press the 'forward' key too quickly for it to be sent: the client still simulates the event and moves the player forward, but the server does not, which causes the client to snap to the new position.

Do I need to store extra information in the client->server update packet, such as the time certain keys were pressed? What's the most common solution to this problem?

[quote]
Do I need to store extra information in the client->server update packet, such as the time certain keys were pressed? What's the most common solution to this problem?
[/quote]


Generally you will transmit events and transactions. For example, the first event is "forward was pressed at time X", then sometime later comes the event "forward was released at time Y".

Generally it is a bad idea to transmit all input. Continuous state information can saturate your communication channels with useless information. Filter it down to just the events that are relevant to the other machine, and then send only that information on an as-needed basis.
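
The event/transaction idea above can be sketched in a few lines. This is a minimal illustration, not code from any engine; the names (`InputEvent`, `InputEventEmitter`) are hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical event-based input message: instead of streaming the raw key
// state every frame, a message is generated only when a key's state changes.
enum class InputAction : uint8_t { ForwardPressed, ForwardReleased };

struct InputEvent {
    InputAction action;  // what changed
    uint32_t    timeMs;  // client timestamp of the change
};

// Tracks the previous key state and emits an event only on a transition.
class InputEventEmitter {
public:
    // Returns true (and fills 'out') when the state changed since last poll.
    bool poll(bool forwardDown, uint32_t nowMs, InputEvent& out) {
        if (forwardDown == lastForward_) return false;  // no transition, nothing to send
        lastForward_ = forwardDown;
        out.action = forwardDown ? InputAction::ForwardPressed
                                 : InputAction::ForwardReleased;
        out.timeMs = nowMs;
        return true;
    }
private:
    bool lastForward_ = false;
};
```

A held key generates no traffic at all between the press and the release, which is the whole point of the approach.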

[quote]
Generally you will transmit events and transactions
[/quote]

I use a polling system instead, the same one Quake uses, and it is what Valve uses as well.

http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking

Apparently this way is much better for FPS-like games.


[quote]
Generally it is a bad idea to transmit all input
[/quote]

I'm only sending a small subset of the input, in fact it only takes up one byte in size.

[quote]
I'm only sending a small subset of the input, in fact it only takes up one byte in size.
[/quote]

If that is all you are transmitting at a time, the actual packet is likely to be a lot bigger than that once you add the UDP/IP overhead.


[quote]
[quote]
Generally you will transmit events and transactions
[/quote]


I use a polling system instead, the same as how quake does it and it is what valve uses also.

http://developer.val...ayer_Networking
http://trac.bookofho...uake3Networking

Apparently this way is much better for FPS-like games.
[/quote]
Only "much better" if you know exactly what you are doing, why you are doing it, and can avoid the pitfalls.

The articles seem to make that clear.

The first article (Valve's Source Multiplayer Networking) states that one of the most compelling reasons for the approach was cheat avoidance, not performance. They are very clear that their system introduces higher bandwidth costs, higher memory costs, and additional processing effort. Their conclusion warns not to change anything "unless you are 100% sure what you are doing. Most "high-performance" settings cause exactly the opposite effect". I seriously doubt you, or even I, fall into that boat of knowing exactly what we are doing with our networking code. More on this below, but from your description, your specific choices introduce additional unnecessary and avoidable latency.

The second article (Quake3 networking) is a very light overview of a very complex system that took four years to develop and evolve, written by a few experts intimately familiar with the engine. The model it uses is a variant of sliding windows, except that missed packets are not resent. Unless you understand what that is, and the problems that made their games fragile from the 1995 test through the 1999 Quake3 system, you won't have the requisite knowledge to understand the solution.


Remember: KISS.

Do the simplest thing possible to get the job done. A system like you describe is not the simplest solution. Don't "fix" it until you know it is broken, understand the problem and a range of solutions, and have ways to verify that any improvements didn't make the problem worse or introduce new ones.

[quote]
I'm only sending a small subset of the input, in fact it only takes up one byte in size.
[/quote]
Okay. That means you'll have a horrible signal-to-noise ratio. So bad in fact that many ISPs and major network providers will intentionally throttle your data to improve the SNR.

You described sending the 8-byte UDP header + 20-byte IP header + 38-byte Ethernet header/footer (or larger frames like EoSDH across fiber connections) ... and just one byte of data.

Also, it makes me question how much of the articles you understood. The first article you linked was fairly clear that packets should be in the range of 130-300 bytes each, depending on the observed throughput. They did not state the reasons against going much smaller than that, assuming that anyone implementing it would already be knowledgeable enough to understand them.

They did not describe why, but it is something you can learn from either more reading and a better understanding, or from a bit of experience. If your packets drop much below that size you may discover WORSE performance. Many ISPs and broadband providers don't like being flooded with tiny packets with an obviously poor SNR, so they'll proactively hold them for you. Basically, they will hold on to your datagrams and bundle them up into a single packet across their networks. It is completely transparent to the app, but it shows up in your app as a clear bursting pattern for such rapid updates: silence, burst, silence, burst, silence, burst. The result is the opposite of the smooth networking you hoped to achieve.
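
To put rough numbers on the overhead argument, here is a tiny worked example. The header sizes are the usual textbook figures for UDP over IPv4 over Ethernet (as mentioned above), not measurements from any particular network:

```cpp
#include <cassert>

// Rough per-datagram overhead for UDP over IPv4 over Ethernet.
constexpr int kUdpHeader     = 8;   // UDP header
constexpr int kIpv4Header    = 20;  // IPv4 header, no options
constexpr int kEthernetFrame = 38;  // Ethernet framing incl. preamble/interframe gap

// Fraction of each frame on the wire that is actual game data.
constexpr double payloadEfficiency(int payloadBytes) {
    return static_cast<double>(payloadBytes)
         / (payloadBytes + kUdpHeader + kIpv4Header + kEthernetFrame);
}
```

With a 1-byte payload, 1/(1+66) is about 1.5% of the frame being useful data; at 200 bytes (inside the 130-300 range the article suggests) it rises to about 75%.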


You will save yourself much grief by either going with an established networking library already written by networking experts, or by following the simplest possible networking model and praying you have good karma as you debug it.
You could make all your client-side input events have a 'stickiness' of 50ms (or whatever the network rate is).

In Quake/Half-Life, the input system generates events, such as +FORWARD / -FORWARD, which then enable/disable bits in the input bitmask. You could enqueue and process these events at 50ms intervals. If a +FORWARD and a -FORWARD are received "at the same time" (i.e. in the same 50ms window), then you only process the "+" event and delay the "-" event to be processed in the next window. This would make the client's prediction scheme update at the same resolution as the server's simulation.

E.g. the client is running at 100fps (10ms update loop).
  • On frame #1 (T+0ms) the client generates a +FORWARD command.
  • On frame #2 (T+10ms) the client generates a -FORWARD command.
  • On frame #5 (T+50ms) the client inspects its command queue. It sees +FORWARD and -FORWARD as conflicting, so it only processes the first one.
    The client starts predicting its forward movement, and it sends an input update packet with the 'move forward' bit enabled.
  • On frame #10 (T+100ms) the client inspects its command queue again. It processes the -FORWARD command.
    It stops predicting its forward movement, and it sends an input update packet with the 'move forward' bit disabled.

[hr]Alternatively, you could simply queue up all of your input bitfields for 50ms!
If the client is running at 10ms per frame, then once every 50ms they'd send out 5 input bitfields.
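
That alternative is even simpler to sketch: buffer one bitfield per 10ms frame and flush them all in a single packet every 50ms, so the server sees every sample instead of only the latest. The packet layout (a count byte followed by the samples) is a hypothetical choice for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Buffers one input bitfield per client frame and drains them all into a
// single payload at the 50ms network rate.
class InputBatcher {
public:
    void sample(uint8_t inputBits) { frames_.push_back(inputBits); }

    // Every 50ms: drain the buffered samples into one packet payload.
    std::vector<uint8_t> flush() {
        std::vector<uint8_t> packet;
        packet.push_back(static_cast<uint8_t>(frames_.size()));  // sample count
        packet.insert(packet.end(), frames_.begin(), frames_.end());
        frames_.clear();
        return packet;
    }
private:
    std::vector<uint8_t> frames_;
};
```

At 10ms per frame each flush carries five samples, i.e. six bytes of payload per 50ms packet.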

[edit] Are you making a 'twitch' style FPS/action game (i.e. one that relies on fast reflexes / low pings)? If so, the Quake/Half-Life model is fine. If not, then a transaction model like Frob suggests is indeed probably more suitable.

[quote]
Are you making a 'twitch' style FPS/action game (i.e. one that relies on fast reflexes / low pings)? If so, the Quake/Half-Life model is fine. If not, then a transaction model like Frob suggests is indeed probably more suitable.
[/quote]


Yes, an FPS game.


[quote]
The first article (Valve's Source Multiplayer Networking) states one of the most compelling reasons for it was cheat avoidance, not performance
[/quote]

That's exactly why I'm using this method; otherwise, I could just simulate the collision solely on the client and update the server with the position data, but that could be hacked really easily.


[quote]
The model it uses is a variant of sliding windows
[/quote]

Yes, the server essentially just sends the entire state of the game every so often, although it is delta compressed against the last packet acked by the client.


I've refactored my code now so that it sends packets with events, such as an 'add player' event, rather than just sending a (delta compressed) array of players each frame. I do agree that it seems a lot more elegant and probably requires less code, but I've still got the problems of sequencing and packet loss. The latter is fairly straightforward, but what if, for example, you send an 'add player' event and then the player disconnects, so you need to send a 'remove player' event, but the 'remove player' event reaches the client first? This is what I mean by sequencing.
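
One common answer to that ordering problem is to stamp every event with a sequence number and have the receiver buffer out-of-order events until the missing ones arrive, so 'remove player' can never run before the 'add player' it follows. A minimal sketch (the event payload is just a string here for brevity; real events would be structs):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Delivers events strictly in sequence order, buffering any that arrive early.
class OrderedEventReceiver {
public:
    // Feed one arriving event; returns every event that is now deliverable, in order.
    std::vector<std::string> receive(uint32_t seq, const std::string& event) {
        buffer_[seq] = event;
        std::vector<std::string> ready;
        // Drain the buffer as long as the next expected sequence is present.
        while (!buffer_.empty() && buffer_.begin()->first == nextSeq_) {
            ready.push_back(buffer_.begin()->second);
            buffer_.erase(buffer_.begin());
            ++nextSeq_;
        }
        return ready;
    }
private:
    uint32_t nextSeq_ = 0;                    // next sequence number to deliver
    std::map<uint32_t, std::string> buffer_;  // out-of-order events, keyed by seq
};
```

If 'remove player' (seq 1) arrives before 'add player' (seq 0), it sits in the buffer; once seq 0 shows up, both are delivered in the correct order. Combined with retransmission of unacked events, this handles packet loss too.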
It seems to be working a bit smoother now (I had some problems elsewhere in my code), but the player movement still feels a bit jerky at some points. My interpolation code looks a bit like this (client side):

[source]
// Advance the interpolation parameter each frame.
interpPos += (timeStep / averageClientUpdateTime);

// Clamp so we never extrapolate past the latest update.
if (interpPos > 1.0f)
{
    interpPos = 1.0f;
}

cur.position = MathEx::lerp(prev.position, target.position, interpPos);
[/source]

I added "averageClientUpdateTime" to smooth the movement a bit more. The client update time is supposed to be 0.05, but it is averaging about 0.07 at the moment. The jerkiness is caused by the conditional statement "if(interpPos > 1.0f)", but this is required so that the player does not overstep the mark (i.e. extrapolate), which causes wild oscillations.
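
One way to avoid hitting that clamp at all (a sketch of the usual snapshot-interpolation idea, not your code; the names and the fixed interval are assumptions) is to render remote players slightly in the past, one full update interval behind, interpolating between the two most recent snapshots. Because the render time then sits between two known samples, the parameter stays in [0,1] on its own and the player never stalls at the clamp:

```cpp
#include <cassert>
#include <cmath>

// A received position sample, stamped with its server time.
struct Snapshot { float position; float timeSec; };

// Interpolate between the two snapshots bracketing renderTime. If renderTime
// is chosen one update interval behind the newest snapshot, t naturally stays
// in [0,1]; the clamps below are only a safety net.
float interpolate(const Snapshot& prev, const Snapshot& next, float renderTime) {
    float span = next.timeSec - prev.timeSec;
    if (span <= 0.0f) return next.position;        // degenerate: identical timestamps
    float t = (renderTime - prev.timeSec) / span;
    if (t < 0.0f) t = 0.0f;                        // safety clamps, normally unused
    if (t > 1.0f) t = 1.0f;
    return prev.position + (next.position - prev.position) * t;
}
```

The cost is a constant ~50ms of extra display latency for other players, which is the standard trade for smooth motion in this family of techniques.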

Also, I was meaning to ask: do many games using the client-server model perform collision detection on the client as well as the server? I had implemented it this way at first, but I then tried turning off collision on the client and it seems to be working fine.

[quote]
Also, I was meaning to ask: do many games using the client-server model perform collision detection on the client as well as the server? I had implemented it this way at first, but I then tried turning off collision on the client and it seems to be working fine.
[/quote]


In Source and Unreal, I believe movement physics/collision is done on both sides. The client assumes its movements were accepted, but it can be "bungeed" back to where the server dictates it should be (this happens a lot when 'lagging'). This works flawlessly until the player touches another player or non-static geometry; determinism can then be used to improve the results in those situations, but network latency is always a problem.

In games like WoW, collision is only done client side.

If collision is only done server-side, the client has to wait for a response before it can move, which will be horrible unless the latency is very small.
This Gaffer On Games article is very helpful.

To be honest, you can probably get away with doing collision detection only on the client side, unless you are releasing the game to a large audience. There are so many other ways to cheat that fully break the experience that something which seems like a huge cheat opportunity isn't actually that big a deal: the gameplay is already broken if somebody uses a wallhack or aimbot in an FPS, and being able to fly around in noclip due to client-side-only physics just makes it more obvious. Server-side collision detection is required when there are moving collidable objects.

This topic is closed to new replies.
