Due to the nature of WebSockets, if I am to use sockets within the browser, I am forced to use TCP. However, the game I'm writing is not a first-person shooter; it's a space flight/combat game with emphasis on kinetics (so no instant turn-click-shoot like a FPS). How much of a hit will responsiveness take if I use TCP compared to UDP? Can this still be done well? Thanks.
How much of a hit to performance will TCP be?
Crossbones+ - Reputation: 13981
Posted 18 March 2013 - 04:47 PM
The biggest issue, really, is that TCP delivers data only in order. Under ideal network conditions this is peachy, but when conditions are not ideal, TCP deals with packets arriving out of order by buffering the incoming packets and waiting for the missing earlier packets to appear. This includes packets that the network may have dropped, in which case a time-out period has to expire before the sender re-sends the dropped packet, and the receiver waits all the while. In this kind of scenario, the receiver sees no updates for a very long time (relatively speaking), and then gets a flurry of several packets to respond to all at once. To support this higher-level functionality, TCP itself has additional overhead, making every TCP message frame bigger than a UDP message containing the same data payload.
Most games that use UDP for this sort of thing simply don't re-send dropped packets, and design a protocol that can handle dropping packets now and again. For example, it might send information representing directional changes at a fast rate, backed up by information representing absolute positions at less-frequent rates. Even dropping some of those packets randomly, combined with client-side techniques like dead-reckoning, the movement would appear smooth, although it might be slightly different than the authoritative state held by the server.
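A minimal sketch of the client side of such a loss-tolerant protocol (the message shape and class names here are invented for illustration, not from any particular engine): each update carries a sequence number so stale or duplicate packets can simply be ignored, and between updates the client dead-reckons position from the last known velocity.

```javascript
// Sketch of a loss-tolerant client state tracker. Hypothetical message
// format: { seq, x, y, vx, vy, t }. Out-of-date updates are discarded by
// sequence number; between updates, position is extrapolated
// (dead reckoning) from the last known velocity.
class EntityTracker {
  constructor() {
    this.lastSeq = -1;
    this.state = null; // { x, y, vx, vy, t }
  }

  // Called for whatever updates happen to arrive; drops are never seen.
  onUpdate(msg) {
    if (msg.seq <= this.lastSeq) return; // stale or duplicate -- ignore
    this.lastSeq = msg.seq;
    this.state = { x: msg.x, y: msg.y, vx: msg.vx, vy: msg.vy, t: msg.t };
  }

  // Dead-reckoned position at render time `now` (same clock as msg.t).
  positionAt(now) {
    const s = this.state;
    const dt = now - s.t;
    return { x: s.x + s.vx * dt, y: s.y + s.vy * dt };
  }
}
```

If an absolute-position message arrives later, it simply replaces the extrapolated state, which is why the occasional dropped delta is tolerable.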
You can mimic this sort of thing with TCP just fine, you just have your application drop old packets instead of the network and implement a similarly-robust protocol to handle the "dropped" messages. However, as I explained earlier, you will likely find that applying this approach in TCP experiences greater and more-frequent gaps in its message stream because the TCP stack won't just pass packets through while awaiting others.
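One way to "drop" old packets at the application layer when everything arrives reliably over a WebSocket (again, the message shape is invented for the sketch): when a burst of buffered messages is delivered all at once, keep only the newest update per entity and discard the rest.

```javascript
// Sketch: application-level dropping of stale state messages received over a
// reliable, in-order channel (e.g. a WebSocket). When TCP delivers a burst,
// only the newest update per entity is applied; older ones are "dropped"
// by the application, mimicking what a UDP-based protocol would tolerate.
class LatestStateQueue {
  constructor() {
    this.pending = new Map(); // entityId -> newest message this frame
  }

  enqueue(msg) {
    const prev = this.pending.get(msg.entityId);
    if (!prev || msg.seq > prev.seq) this.pending.set(msg.entityId, msg);
  }

  // Drain once per game frame; returns only the freshest update per entity.
  drain() {
    const latest = [...this.pending.values()];
    this.pending.clear();
    return latest;
  }
}
```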
Or, you can design a protocol with TCP in mind--one which relies on guaranteed, in-order message delivery--but which works around the greater propensity for message-delay, and "bursty" communications. For example, I can imagine a protocol which buffers received messages for playback with a frame-time delay, and which can adjust the number of messages that are reflected in each game-frame in order to smooth-out the "bursty" nature of TCP messages, albeit at the cost of introducing a small amount of latency that should be relatively consistent. Usually, it's wild swings in latency that are most harmful to the game experience (and most perceptible), rather than latency which is consistent, even if the sum of consistent latency is equal to or even greater than the other kind.
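A sketch of that buffered-playback idea (the target depth and catch-up policy here are illustrative choices, not canonical): messages queue as they arrive, each game frame nominally consumes one, and when a TCP burst makes the buffer deeper than its target, an extra message is consumed per frame so the game catches up gradually instead of all at once.

```javascript
// Sketch of a playback buffer that trades a small, consistent latency for
// smoothness. Each frame normally consumes one buffered message; while the
// buffer is deeper than `targetDepth` (after a TCP burst), one extra
// message per frame is consumed to catch up gradually.
class PlaybackBuffer {
  constructor(targetDepth = 3) {
    this.queue = [];
    this.targetDepth = targetDepth;
  }

  push(msg) {
    this.queue.push(msg);
  }

  // Messages to reflect in this game frame.
  nextFrame() {
    if (this.queue.length === 0) return []; // starved: nothing to play back
    const count = this.queue.length > this.targetDepth ? 2 : 1;
    return this.queue.splice(0, count);
  }
}
```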
All of that has to be tempered with the real-world network connections you expect--there will be a smaller delta between UDP and TCP messages on a good connection, while there will be a larger delta on a poor connection.
At any rate, I wouldn't dismiss TCP out of hand for the game you describe, or possibly even faster-paced games with the right protocol. If you know the kinds of differences to expect, and you account for them, then you ought to be able to achieve something that satisfies your requirements.
Edited by Ravyne, 18 March 2013 - 04:50 PM.
Members - Reputation: 106
Posted 19 March 2013 - 08:52 AM
While others have answered the TCP portion very well, I will note that the WebSocket layer may add some delay/buffering depending on the implementation.
http://tools.ietf.org/html/rfc6455 -- search for the word "delay"
I think some implementations disable Nagle's algorithm (i.e., set TCP_NODELAY) by default.