Multiplayer Game UDP/TCP Combo System

18 comments, last by hplus0603 14 years, 5 months ago
Ah, that last link made everything come together, thanks!
I had no idea the router used port translations to manage things; that certainly explains how all the data is sorted to the correct machine. (Originally I was getting the IP address from a previously established TCP connection, which explains why that didn't work out so well: it wouldn't know about any UDP port translations.)
Keep in mind that NAT doesn't just affect UDP, but TCP as well. It works in the same way.

Also, you generally use an IP address and a port number together to identify any given connection. An IP address alone only identifies a host on the internet, not a connection.

If you want to associate a TCP connection with a UDP session from the same client (i.e. so a client can make a TCP connection, then a UDP session, and have the server reliably tell that the two are from same client), you'll need to send some kind of unique client ID through both connections (e.g., during your game's protocol startup stage). The server can then associate the two to one client, for example by noting the (IP, port) pairs for the TCP connection and UDP session in a single Client struct in some list of clients.
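Something like this would do on the server side (just a sketch in C#; the ClientRecord/ClientRegistry names are made up for illustration, and the client ID is assumed to arrive as the first field on both transports):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

// Hypothetical bookkeeping on the server: one record per logical client,
// keyed by a client ID the client sends over both transports during startup.
class ClientRecord
{
    public int ClientId;
    public Socket TcpSocket;        // the accepted TCP connection
    public IPEndPoint UdpEndPoint;  // the (IP, port) seen on the UDP socket
}

class ClientRegistry
{
    readonly Dictionary<int, ClientRecord> clients = new Dictionary<int, ClientRecord>();

    ClientRecord GetOrAdd(int clientId)
    {
        if (!clients.TryGetValue(clientId, out ClientRecord rec))
            clients[clientId] = rec = new ClientRecord { ClientId = clientId };
        return rec;
    }

    // Called when the client's ID arrives over the TCP connection.
    public void OnTcpHello(Socket tcpSocket, int clientId)
    {
        GetOrAdd(clientId).TcpSocket = tcpSocket;
    }

    // Called when the client's ID arrives in a UDP datagram;
    // ReceiveFrom supplies the sender's (IP, port) endpoint.
    public void OnUdpHello(IPEndPoint sender, int clientId)
    {
        GetOrAdd(clientId).UdpEndPoint = sender;  // both transports now map to one client
    }
}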

EDIT: Also, before you go down this route of using TCP and UDP, consider making life easier by using a semireliable protocol based on UDP, ideally one already provided by a library like ENet or Raknet. Then you can easily have reliable messaging and unreliable messaging (and ordering guarantees, etc.) over UDP. Check this thread.

[Edited by - mattd on November 2, 2009 12:04:19 AM]
It seems there are 2 completely different schools of thought on whether TCP or UDP should be used.

TCP: a reliable, connection-based protocol - it can remain connected, but requires a little more overhead and is thus a little slower.

UDP: an unreliable, connectionless protocol - faster, with less overhead.

Okay, so using UDP means that you have to check message IDs because you are working with a fast yet unreliable protocol. Compare that method to using TCP: because TCP is a reliable protocol, you don't need to double-check that you aren't receiving a duplicate message.

What I have implemented is that the first 2 bytes of my message give me the length of the incoming message (inclusive of the first 2 bytes ;P). Compare that with the size of the message. We are planning on optimizing data packets to the send/receive buffer size so that no packet would ever be larger than the buffer (which would require multiple sends to complete one data packet). So, if the first 2 bytes represent an 800-byte message and all the socket received was 750, I throw it away. I don't bother telling the client or server to resend - if a request is not fulfilled within a given timeframe, it will simply be requested again.

There can be discussions that go on forever over the advantages/disadvantages of the protocols. I'd actually like to be able to have a reliable, connectionless frickin' protocol :)

Comments welcome...

Quote:Original post by ddboarm
What I have implemented is that the first 2 bytes of my message give me the length of the incoming message (inclusive of the first 2 bytes ;P). Compare that with the size of the message. We are planning on optimizing data packets to the send/receive buffer size so that no packet would ever be larger than the buffer (which would require multiple sends to complete one data packet). So, if the first 2 bytes represent an 800-byte message and all the socket received was 750, I throw it away. I don't bother telling the client or server to resend - if a request is not fulfilled within a given timeframe, it will simply be requested again.
Is that TCP or UDP? Either way, it doesn't seem like it would work...

If you're working in TCP, then even if you only received 750 bytes on this call to recv, the next call to recv will have the remaining 50 bytes (plus part of the next packet, perhaps). TCP is a stream-based protocol - there is no such thing as "packets" (at an application level, anyway). Two calls to send on one end may correspond to one call to recv on the other. Similarly, one call to send on one end might result in two calls to recv on the other.

Think about it like writing a file to disk. When you've written a file to disk, you've got no way of knowing where each call to WriteFile stopped and the next one began, right?

The way you can "simulate" packets in TCP is that you write your two-byte packet-size header, then you loop on a call to recv until all the data for that packet has been received.
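In C#, that loop might look something like this (a blocking sketch, assuming the 2-byte length prefix is big-endian and counts itself, as described above; the helper names are made up):

using System.Net.Sockets;

static class Framing
{
    // Read exactly 'count' bytes from a connected TCP socket, looping on Receive
    // because a single call may return fewer bytes than requested.
    static byte[] ReceiveExact(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0)
                throw new SocketException((int)SocketError.ConnectionReset); // peer closed
            received += n;
        }
        return buffer;
    }

    // Read one "packet": a 2-byte length prefix (which counts itself), then the payload.
    public static byte[] ReceiveMessage(Socket socket)
    {
        byte[] header = ReceiveExact(socket, 2);
        int totalLength = (header[0] << 8) | header[1];  // assuming big-endian prefix
        return ReceiveExact(socket, totalLength - 2);    // remainder of the message
    }
}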

If you're working in UDP, then because it is packet-based, you'll never get partial packets. You either receive the whole thing, or you don't receive anything at all. So there's no point checking a header to see if you've got it all. Essentially, the UDP protocol itself is doing that check for you.
Quote:Original post by Codeka
... TCP is a stream-based protocol - there is no such thing as "packets" (at an application-level anyway).


I referred to it as a packet instead of a message. But I do see what you are saying. In reference to receiving a partial packet, rather than wait (or loop) for the rest of the message, if I receive a message whose size doesn't match its 2-byte length prefix, I ignore it. At least that is the process right now. But this is mostly to experiment with inbound traffic and get an idea of how well it works.

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.
Quote:Original post by ddboarm

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.


Except that no API I'm aware of exposes this information, thereby making it impossible to know how full the buffer actually is.

The only way this can happen is to keep the send buffer fully congested at all times (WOULDBLOCK or block on send), but that is counter-productive for low-latency applications, since the data is actually delayed by the flooded connection.


Unless you are stuck with TCP for whatever reason, sending messages over UDP using one of the established approaches would make this type of tuning considerably easier as well as more deterministic. Each network stack is free to implement its own sending heuristics and still remain standards-compliant.


OP: Relevant background.
For your problem with the game running on the same machine, you just bind the server and client to different ports. Obviously, the server would have a fixed port, but for the client you can let the system assign a port (by passing 0 when binding).

That means that the messages directed to the client will be coming through a different port, and you won't have the problem.
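For example (a minimal C# sketch; the server port number here is just a placeholder value):

using System.Net;
using System.Net.Sockets;

class LocalPortExample
{
    static void Main()
    {
        // Server: bind the UDP socket to a fixed, well-known port (placeholder value).
        var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        serverSocket.Bind(new IPEndPoint(IPAddress.Any, 30000));

        // Client on the same machine: pass port 0 so the OS picks a free ephemeral port,
        // which will never collide with the server's fixed port.
        var clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        clientSocket.Bind(new IPEndPoint(IPAddress.Any, 0));
    }
}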

Online, you don't have that problem; the router, NAT, and DHCP do all this internally.

If you want to run a TCP and UDP communications for both client and server, you'll need two sockets for each.

Also, if you just want to use TCP for chat and simple reliable messaging, you might as well just use the UDP socket and implement a simple reliable protocol for some of your messages (chat messages). A simple message resend and FIFO queue will do the trick.

Everything will be routed through a single socket and a single protocol (basically, some packet header you parse to extract the reliable and unreliable data). It can make things easier. That's what most games do.
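For example, a packet layout might look something like this (illustrative only, not any particular game's wire format; field names and sizes are invented):

using System.IO;

// Illustrative wire layout: each UDP datagram starts with a small header,
// then the reliable messages, then the unreliable ones.
//
//   [ushort sequence] [ushort ack] [byte reliableCount] [byte unreliableCount]
//   reliableCount messages, then unreliableCount messages, each as [ushort len][bytes]
//
// Reliable messages stay in a resend FIFO until the peer's ack covers them;
// unreliable ones (e.g. movement) are fire-and-forget.
static class PacketCodec
{
    public static byte[] Build(ushort sequence, ushort ack,
                               byte[][] reliable, byte[][] unreliable)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(sequence);
            w.Write(ack);
            w.Write((byte)reliable.Length);
            w.Write((byte)unreliable.Length);
            foreach (var m in reliable)   { w.Write((ushort)m.Length); w.Write(m); }
            foreach (var m in unreliable) { w.Write((ushort)m.Length); w.Write(m); }
            w.Flush();
            return ms.ToArray();
        }
    }
}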

Similarly, games implement virtual connections. It's quite simple really, and it's all part of one big protocol you design to send messages to particular clients and receive replies from them. It's basically a series of connection handshakes, identifying clients via a GUID or address, keep-alive heartbeat messages, and leave notifications, which together tell the server which clients are connected to the game and where the packets come from and go to.

UDP socket -> packet -> extract messages -> route message to a registered connection -> handle messages.
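In code, the receive side of that flow might look roughly like this (Connection and Message here are just placeholders for whatever your own protocol defines):

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

// Placeholder message/connection types; the real ones come from your protocol.
class Message { public byte Type; public byte[] Payload; }

class Connection
{
    // Parse the packet header and split the datagram into messages (left out here).
    public IEnumerable<Message> ExtractMessages(byte[] buffer, int length) { yield break; }
    public void Handle(Message msg) { /* game-specific handling */ }
}

class Dispatcher
{
    readonly Socket udpSocket;                                  // one UDP socket for everything
    readonly Dictionary<IPEndPoint, Connection> connections =   // registered virtual connections
        new Dictionary<IPEndPoint, Connection>();

    public Dispatcher(Socket udpSocket) { this.udpSocket = udpSocket; }

    public void Poll(byte[] buffer)
    {
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int length = udpSocket.ReceiveFrom(buffer, ref sender);            // packet

        if (connections.TryGetValue((IPEndPoint)sender, out Connection conn))
        {
            foreach (Message msg in conn.ExtractMessages(buffer, length))  // extract messages
                conn.Handle(msg);                                          // route + handle
        }
        // else: unknown sender, so only a connection handshake is acceptable here
    }
}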

Gaffer touched on that in his articles. I recommend you read through them since, AFAIK, it's pretty much what I'd do and what everybody else does. You can also look at the ENet library for a simple transport layer (protocols, reliable messages, connections, socket).

EDIT : Antheus ninja'd me. [grin]

Everything is better with Metal.

Quote:Original post by Antheus
Quote:Original post by ddboarm

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.


Except that no API I'm aware of exposes this information, thereby making it impossible to know how full the buffer actually is.

The only way this can happen is to keep the send buffer fully congested at all times (WOULDBLOCK or block on send), but that is counter-productive for low-latency applications, since the data is actually delayed by the flooded connection.


Unless you are stuck with TCP for whatever reason, sending messages over UDP using one of the established approaches would make this type of tuning considerably easier as well as more deterministic. Each network stack is free to implement its own sending heuristics and still remain standards-compliant.


OP: Relevant background.


From what I understand about the Socket class, you set a buffer size:
From MSDN:
- Socket.SendBufferSize Property
Gets or sets a value that specifies the size of the send buffer of the Socket.
- Socket.ReceiveBufferSize Property
Gets or sets a value that specifies the size of the receive buffer of the Socket.

- public IAsyncResult BeginReceive(
byte[] buffer,
int offset,
int size,
SocketFlags socketFlags,
AsyncCallback callback,
Object state
)

where
size
Type: System.Int32
The number of bytes to receive.

- public IAsyncResult BeginSend(
byte[] buffer,
int offset,
int size,
SocketFlags socketFlags,
AsyncCallback callback,
Object state
)

where
size
Type: System.Int32
The number of bytes to send.

I realize that TCP is a stream - and always 'on' - and it's therefore nearly impossible to keep the 'stream' full without rigorous looping. However, you make the most of each Send/Receive by getting as close to filling the buffer as you can. Keep message lengths within the bounds of the buffer sizes, and maintain that discipline throughout development.


I provided the information above simply to convey that I don't feel I'm going off half-cocked. :)

There are many more sites I researched as I progressed in creating our network server project. I will include UDP - I'm not going to deny our game dev customers the choice of protocols by any means. Frankly, the actual implementation of our product, and the methods chosen, are completely left up to the developer. I am simply providing tools for creating a client/server game that don't require programming the actual sockets through the async processes - tools developed to be fully integrated into the game engine.

Quote:Original post by ddboarm
I realize that TCP is a stream - and always 'on' - and it's therefore nearly impossible to keep the 'stream' full without rigorous looping. However, you make the most of each Send/Receive by getting as close to filling the buffer as you can. Keep message lengths within the bounds of the buffer sizes, and maintain that discipline throughout development.


I'm guessing that you are actually talking about something else - message batching.

In a real scenario, it would work something like this:
for (Object o : objects) {
  if (o.isMoving()) {
    for (Client c : clients)
      c.sendMoveMessage(o.x, o.y, o.z);
  }
}


This is great, but it leaves message handling and prioritization to the generic IP network stack.


Ideally, one wants to keep a (priority) message queue per socket. As messages are sent to that socket, they are not transmitted immediately, but put into this queue.


The socket handler (probably IOCP-based) runs its worker threads, looping over all sockets with non-empty queues. The logic for each worker then looks something like this:
if (!queue.empty()) {
  Batch b = new Batch(MAXIMUM_BATCH_SIZE);
  while (!queue.empty() && b.hasEnoughRoomFor(queue.first())) {
    b.append(queue.pop());
  }
  socket.send(b.toBytes());
}


In addition, batching would take into consideration WOULDBLOCK errors (indicating that the peer is congested, or that we are sending too fast), the estimated latency, and other fidelity factors (to modify the area of interest, frequency of sending, or other higher-level settings), and it could also reorganize pending packets by priority, so that old movement updates are discarded or certain events are prioritized.
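As a rough sketch (names invented for illustration, non-blocking socket assumed), the per-client queue might discard superseded movement updates and back off on WOULDBLOCK like this:

using System.Collections.Generic;
using System.Net.Sockets;

// Illustrative only: a per-client outgoing queue that drops stale movement
// updates and backs off when the socket reports it would block.
class OutgoingQueue
{
    readonly LinkedList<byte[]> pending = new LinkedList<byte[]>();
    readonly Dictionary<int, LinkedListNode<byte[]>> latestMoveFor =
        new Dictionary<int, LinkedListNode<byte[]>>();   // objectId -> queued move update

    public void EnqueueMove(int objectId, byte[] message)
    {
        // A newer movement update makes the older one for the same object worthless.
        if (latestMoveFor.TryGetValue(objectId, out var old) && old.List == pending)
            pending.Remove(old);
        latestMoveFor[objectId] = pending.AddLast(message);
    }

    public void Flush(Socket socket)
    {
        while (pending.First != null)
        {
            byte[] next = pending.First.Value;
            try
            {
                socket.Send(next);  // non-blocking socket: throws if the buffer is full
            }
            catch (SocketException e) when (e.SocketErrorCode == SocketError.WouldBlock)
            {
                return;             // connection is congested; try again next tick
            }
            pending.RemoveFirst();
        }
        latestMoveFor.Clear();      // everything queued has been sent
    }
}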


For twitch-style networking, the application really wants to keep the amount of data sent so low that, on each call, the entire queue gets sent. If it doesn't, the peer is congested and the data rate needs to be decreased.

For an MMO-like system, the queue will cooperate closely with area-of-interest management as well as higher-level conceptual information - perhaps distance, importance, visibility - to determine which messages should be sent, and in which order (createObject must be sent before updateObject, or neither should be sent).


None of this is in any way related to the socket's send and receive buffers, and those values should be ignored; relying on particular behavior of those buffers is highly undesirable. However, on each send, the worker should strictly examine the return codes and other information, handle them according to specification to determine the health and quality of the connection, and adjust its behavior accordingly.

Receive should always be performed immediately, by trying to flush each buffer ASAP. For TCP this is quite important, since it prevents the TCP protocol's congestion handlers from kicking in and limiting the transfer rates. With many TCP connections this typically is not an issue, but with UDP, leaving the buffer to fill up means packets will be lost.
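A minimal sketch of that receive flushing, assuming a non-blocking UDP socket polled every tick (ProcessPacket is a placeholder for the real handler):

using System.Net;
using System.Net.Sockets;

static class ReceivePump
{
    // Drain the socket every tick so the OS receive buffer never fills up.
    public static void DrainReceives(Socket udpSocket, byte[] buffer)
    {
        while (udpSocket.Available > 0)   // data waiting in the OS buffer?
        {
            EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            int length = udpSocket.ReceiveFrom(buffer, ref sender);
            ProcessPacket(buffer, length, (IPEndPoint)sender);
        }
    }

    static void ProcessPacket(byte[] buffer, int length, IPEndPoint sender)
    {
        // game-specific handling goes here
    }
}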


And the socket buffer is just one of the buffers involved in networking; it's merely the only one exposed to the user. Under IOCP, the user-supplied buffer might be used directly instead of the socket-provided one.
Quote:all I have to do is respond to the same port as the messages are received and all will be well?


Yes. All of this is mentioned in the Forum FAQ which also provides links for further reading.
enum Bool { True, False, FileNotFound };

