
Multiplayer Game UDP/TCP Combo System


I am currently attempting to make my first graphical multiplayer game, but have run into a bit of trouble concerning server design.

Background: The game is a 2D top-down shooter. I was originally going to use TCP because that is what I knew, but then I learned about UDP, and it seemed to make more sense for real-time game data. So I began to design a system that would use TCP for reliable data (chat messages) and UDP for game state (object positions, rotations, etc.), but I ran into a problem with UDP. Since it has no "connections", I found when testing locally that if the server and client were on the same machine, they would steal each other's messages: the server would send a message, but it would be received by the server itself because it too was listening on that port, or the client would receive its own messages. Then I realized this would pose a similar problem if two people were playing from within a LAN and therefore had the same external IP: how would I send data to either of them individually?

These problems make me want to switch to a TCP system, but I worry that will cause a slowdown in game FPS. Can anyone give me some advice? This is my first time trying this.

Note: I'm currently using the boost::asio library for networking on Windows.

Hi brwarner-

UDP is a fine choice. I'm running the same setup you mention (client and server on the same host using UDP for communication) and I don't see packet stealing.

You may want to verify each is using a different port! I'm guessing that's the problem here.

-Thomas
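A minimal sketch of this fix (not from the thread; shown in Python for brevity, though the boost::asio equivalent is analogous): two UDP sockets on the same machine, bound to different ports, exchanging datagrams without ever receiving their own traffic. Port numbers here are assigned by the OS purely for illustration.

```python
import socket

# "Server" socket: binds a port of its own. Port 0 lets the OS pick a free
# one; a real server would bind a fixed, well-known port instead.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# "Client" socket: binds a *different* port (again OS-assigned), so it can
# never pick up datagrams addressed to the server, and vice versa.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))

client.sendto(b"hello", server_addr)
data, client_addr = server.recvfrom(1024)   # server learns the client's (ip, port)
server.sendto(b"world", client_addr)        # reply goes only to that client
reply, _ = client.recvfrom(1024)

print(data, reply)   # b'hello' b'world'
server.close()
client.close()
```

Because each socket owns a distinct port, the OS demultiplexes the datagrams correctly even on a single machine.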

...I should really have thought of that, shouldn't I... :(
Thanks for your help :)

Actually, while I'm here, I had another question, as this is my first multiplayer game programming experience. When it comes to the server, how should its game loop run? Should it simply run constantly, updating the game (i.e. updating positions based on velocity and time and such), and hence have to send many client updates? This seems a bit inefficient, as the client's game loop will be much slower and will therefore get many game state updates in just a single loop. (By "run constantly" I mean while(isRunning) { Update(); }.)

Also, one last question about UDP. Using different ports for the client and the server addresses the problem of them "stealing" each other's messages (for instance, the client listens on port X while the server listens on port Y), but what about when two users are behind the same router? Since it is connectionless, if I simply send a packet towards them, how would it know which computer to route it to?

Quote:
Original post by brwarner
Then I realized this would pose a similar problem if two people were playing from within a LAN and therefore have the same external IP, how would I send data to either of them individually?

Quote:
Original post by brwarner
but what about if two users are behind the same router? Since it is connectionless if I simply send a packet towards them how would it know which computer to route to?

NAT.

I also switched to UDP a little while ago.

For important messages you should check for duplicate or lost messages. My application uses an ID for each important message. For example, the host sends:

id=42
...message...

When the client receives the message, it replies with:

42 received

If the host does not get a reply within a certain time, it resends the message. The clients also compare the ID of each message with previous IDs; if they already got one with the same ID, it is a duplicate, so they discard it.

I am using Winsock, and on the same computer it uses different ports for each client. I think this will also be the case for computers behind the same router.
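The ID/ack scheme described above can be sketched like this (a toy illustration, not the poster's actual code; class and method names are invented, and one object plays both sender and receiver roles for brevity; real code would also run retransmission timers):

```python
class ReliableChannel:
    """Toy sketch of ID-tagged reliable messaging over an unreliable transport."""

    def __init__(self):
        self.next_id = 0
        self.pending = {}    # id -> payload, kept until the peer acks it
        self.seen = set()    # ids already delivered (duplicate filter)

    def send(self, payload):
        msg_id = self.next_id
        self.next_id += 1
        self.pending[msg_id] = payload   # retained for resend until acked
        return (msg_id, payload)         # what would go on the wire

    def on_ack(self, msg_id):
        self.pending.pop(msg_id, None)   # acked: stop resending

    def on_receive(self, msg_id, payload):
        if msg_id in self.seen:
            return None                  # duplicate: discard
        self.seen.add(msg_id)
        return payload                   # deliver exactly once

# Usage: a resent datagram with the same ID is silently dropped.
ch = ReliableChannel()
msg_id, payload = ch.send(b"important")
first = ch.on_receive(msg_id, payload)   # delivered: b'important'
dup = ch.on_receive(msg_id, payload)     # duplicate: None
ch.on_ack(msg_id)                        # pending set is now empty
```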

Quote:
Original post by mattd
NAT.


I still don't understand how two computers behind a router, both connected to an external server via UDP, can receive individual messages... From what I know, UDP is connectionless, so all I can do is send to their external IP address and wonder which computer it will go to.

Messages from two computers behind a NAT will appear to have come from the same IP address, but a different port.
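This can be demonstrated in a few lines of Python (an illustration, not from the thread): two sockets on the same IP, 127.0.0.1 standing in here for a shared NAT address, remain distinguishable because recvfrom() reports the full (ip, port) pair.

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
srv = server.getsockname()

# Two "clients" sharing one IP address; the OS assigns each an ephemeral port.
c1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
c1.sendto(b"from c1", srv)
c2.sendto(b"from c2", srv)

clients = {}
for _ in range(2):
    data, addr = server.recvfrom(1024)
    clients[addr] = data        # keyed by (ip, port), never by ip alone

ips = {addr[0] for addr in clients}
print(len(clients), ips)        # two distinct clients, one shared IP
for s in (server, c1, c2):
    s.close()
```

A game server therefore tracks peers by the full address tuple it gets back from each receive.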

Quote:
Original post by brwarner
Quote:
Original post by mattd
NAT.


I still don't understand how two computers behind a router, both connected to an external server via UDP, can receive individual messages... From what I know, UDP is connectionless, so all I can do is send to their external IP address and wonder which computer it will go to.

Looks like you're forgetting that there is a port number associated with each session too.

Maybe this will help. (Sorry for all the Wiki links, but they have good explanations already :])

So this is done by the router itself? And all I have to do is reply to the same port the messages came from, and all will be well?

If so thanks, that would make this whole thing make a lot more sense.

Yes. It's transparent (as long as you don't do something silly like having the client send its (LAN) IP address in a packet, because that will be in the UDP packet payload, not the header, and therefore won't be caught and translated by the router. FTP has this problem, see here).

Also, if you're going to run a server behind NAT, you generally have to manually add mappings in the NAT configuration of your router. But then to clients contacting the server externally, once again it's transparent.

Ah, that last link made everything come together, thanks!
I had no idea about port translations by the router; that certainly explains how all the data is sorted to the correct machine. (Originally I was getting the IP address from a previously established TCP connection, and this explains why that didn't work out so well: it wouldn't know about any UDP port translations.)

Keep in mind that NAT doesn't just affect UDP, but TCP as well. It works in the same way.

Also, you generally use an IP address and a port number together to identify any given connection. An IP address alone only identifies a host on the internet, not a connection.

If you want to associate a TCP connection with a UDP session from the same client (i.e. so a client can make a TCP connection, then a UDP session, and have the server reliably tell that the two are from same client), you'll need to send some kind of unique client ID through both connections (e.g., during your game's protocol startup stage). The server can then associate the two to one client, for example by noting the (IP, port) pairs for the TCP connection and UDP session in a single Client struct in some list of clients.
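A sketch of that association step (illustrative only; the names Client, on_tcp_hello, and on_udp_hello are invented here, and the handshake framing is left out): the client sends the same random token over both transports, and the server folds the two endpoints into one record.

```python
import secrets

class Client:
    """One connected player, identified by a token shared across transports."""
    def __init__(self, token):
        self.token = token
        self.tcp_addr = None   # (ip, port) of the TCP connection
        self.udp_addr = None   # (ip, port) seen on the UDP socket

clients_by_token = {}

def on_tcp_hello(token, addr):
    client = clients_by_token.setdefault(token, Client(token))
    client.tcp_addr = addr
    return client

def on_udp_hello(token, addr):
    client = clients_by_token.setdefault(token, Client(token))
    client.udp_addr = addr
    return client

# The same token arriving over both transports maps to a single Client,
# even though the (ip, port) pairs differ (addresses here are examples).
token = secrets.token_hex(8)
a = on_tcp_hello(token, ("203.0.113.7", 50001))
b = on_udp_hello(token, ("203.0.113.7", 61234))
print(a is b)   # True
```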

EDIT: Also, before you go down this route of using TCP and UDP, consider making life easier by using a semi-reliable protocol built on UDP, ideally one already provided by a library like ENet or RakNet. Then you can easily have reliable and unreliable messaging (and ordering guarantees, etc.) over UDP. Check this thread.

[Edited by - mattd on November 2, 2009 12:04:19 AM]

It seems there are 2 completely different schools of thought on whether TCP or UDP should be used.

TCP: a reliable connection based protocol - can remain connected, but requires a little more overhead, thus a little slower.

UDP: an unreliable connectionless protocol - faster and less overhead.

Okay, so using UDP means that you have to check message IDs because you are working with a fast yet unreliable protocol. Compare that method to using TCP: because TCP is a reliable protocol, you don't need to double-check that you aren't receiving a duplicate message.

What I have implemented is that the first 2 bytes of my message give me the length of the incoming message (inclusive of the first 2 bytes ;P). Compare that with the size of the message. We are planning on sizing data packets to the send/receive buffer size so no packet would ever be larger than the buffer (which would require multiple sends to complete a data packet). So if the first 2 bytes indicate an 800-byte message and all the socket received was 750 bytes, I throw it away. I don't bother telling the client or server to resend; if a request is not fulfilled within a given timeframe, it will simply be requested again.

There can be discussions that go on forever over the advantages/disadvantages of the protocols. I'd actually like to be able to have a reliable, connectionless frickin' protocol :)

Comments welcome...

Quote:
Original post by ddboarm
What I have implemented is that the first 2 bytes of my message give me a length of the incoming message (inclusive of the first 2 bytes ;P). Compare that with the size of the message. We are planning on optimizing data packets to the send/receive buffer size so no packet would ever be larger than the buffer (requiring multiple sends to complete a data packet). So, if the first 2 bytes represent an 800 byte message and all the socket received was 750, I throw it away. I don't bother telling the client or server to resend - if a request is not fulfilled within a given timeframe, it will simply request again.
Is that TCP or UDP? Either way, it doesn't seem like it would work...

If you're working in TCP, then even if you only received 750 bytes on this call to recv, the next call to recv will return the remaining 50 bytes (plus part of the next packet, perhaps). TCP is a stream-based protocol: there is no such thing as "packets" (at the application level, anyway). Two calls to send on one end may correspond to one call to recv on the other. Similarly, one call to send on one end might result in two calls to recv on the other.

Think about it like writing a file to disk. When you've written a file to disk, you've got no way of knowing where each call to WriteFile stopped and the next one began, right?

The way you can "simulate" packets in TCP is to write your two-byte packet-size header, then loop on calls to recv until all the data for that packet has been received.
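That framing loop looks roughly like this (a common pattern, sketched here in Python rather than the poster's C#; socketpair stands in for a real connection):

```python
import socket
import struct

def recv_exactly(sock, n):
    """Loop on recv() until exactly n bytes have arrived (TCP is a stream)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def send_message(sock, payload):
    # 2-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack("!H", len(payload)) + payload)

def recv_message(sock):
    (length,) = struct.unpack("!H", recv_exactly(sock, 2))
    return recv_exactly(sock, length)

# Demo: two sends may arrive as one stream chunk, but framing recovers both.
a, b = socket.socketpair()
send_message(a, b"hello")
send_message(a, b"world")
m1 = recv_message(b)
m2 = recv_message(b)
print(m1, m2)   # b'hello' b'world'
a.close()
b.close()
```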

If you're working in UDP, then because it is packet-based, you'll never get partial packets. You either receive the whole thing, or you don't receive anything at all. So there's no point checking a header to see if you've got it all. Essentially, the UDP protocol itself is doing that check for you.

Quote:
Original post by Codeka
... TCP is a stream-based protocol - there is no such thing as "packets" (at an application-level anyway).


I referred to it as a packet instead of a message. But I do see what you are saying. In reference to receiving a partial packet: rather than wait (or loop) for the rest of the message, if I receive a message that doesn't match its 2-byte length prefix, I ignore it. At least that is the process right now. But this is mostly to experiment with inbound traffic and get an idea of how well it works.

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.

Quote:
Original post by ddboarm

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.


Except that no API I'm aware of exposes this information, thereby making it impossible to know how full the buffer actually is.

The only way this can happen is to keep the send buffer fully congested at all times (WOULDBLOCK, or block on send), but that is counter-productive for low-latency applications, since the data is actually delayed due to the flooded connection.


Unless you are stuck with TCP for whatever reason, sending messages over UDP using one of the established designs would make this type of tuning considerably easier as well as more deterministic. Each network stack is free to implement its own sending heuristics and still remain standards-compliant.


OP: Relevant background.

For your problem with the game running on the same machine, you just bind the server and client to different ports. Obviously, the server would have a fixed port, but for the client, you can let the system assign a port (by passing 0 when binding).

That means that the messages directed to the client will be coming through a different port, and you won't have the problem.

Online, you don't have that problem; the router, NAT and DHCP handle all of this internally.

If you want to run a TCP and UDP communications for both client and server, you'll need two sockets for each.

Also, if you just want to use the TCP just for chat and simple reliable messaging, you might as well just use the UDP socket, and implement a simple reliable protocol for some of your messages (chat messages). A simple message resend and FIFO queue will do the trick.

Everything will be routed through a unique socket and unique protocol (basically, some packet header you parse to extract the reliable and unreliable data). It can make things easier. That's what most games do.

Games implement virtual connections in a similar fashion. It's quite simple really, and all part of one big protocol you design to send messages to particular clients and receive replies from them. It's basically a series of connection handshakes, identifying clients via a GUID or address, keep-alive heartbeat messages, and leave notifications, to inform the server which clients are connected to the game and where the packets come from and go to.

UDP socket -> packet -> extract messages -> route message to a registered connection -> handle messages.
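A toy sketch of that "one socket, one protocol" header (names like CH_RELIABLE and the 1-byte/2-byte field sizes are invented for illustration): each packet leads with a channel byte, and reliable messages carry a sequence number for the ack/resend machinery.

```python
import struct

CH_UNRELIABLE, CH_RELIABLE = 0, 1

def pack_unreliable(payload):
    # [channel:1][payload] -- e.g. frequent position updates
    return struct.pack("!B", CH_UNRELIABLE) + payload

def pack_reliable(seq, payload):
    # [channel:1][seq:2][payload] -- e.g. chat, join/leave events
    return struct.pack("!BH", CH_RELIABLE, seq) + payload

def parse_packet(packet):
    """Route an incoming datagram to the reliable or unreliable handler."""
    (channel,) = struct.unpack_from("!B", packet)
    if channel == CH_RELIABLE:
        (seq,) = struct.unpack_from("!H", packet, 1)
        return ("reliable", seq, packet[3:])
    return ("unreliable", None, packet[1:])

kind, seq, body = parse_packet(pack_reliable(7, b"chat: hi"))
print(kind, seq, body)   # reliable 7 b'chat: hi'
```

Everything then flows through one UDP socket, with the header deciding which messages get the resend/dedup treatment.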

Gaffer touched on that in his articles. I recommend you read through them as, afaik, it's pretty much what I'd do and what everybody else does. You can also look at the ENet library for a simple transport layer (protocols, reliable messages, connections, sockets).

EDIT : Antheus ninja'd me. [grin]

Quote:
Original post by Antheus
Quote:
Original post by ddboarm

Really, to fully manage TCP (and offset the extra overhead), you must ensure that your send buffer is fully used. We've been experimenting with ways to get the most out of TCP as we can with strict adherence to filling buffers as much as possible, and not splitting messages up over multiple send/receives.


Except that no API I'm aware of exposes this information, thereby making it impossible to know how full the buffer actually is.

The only way this can happen is to keep send buffer fully congested at all times (WOULDBLOCK or block on send), but that is counter-productive for low-latency applications, since the data is actually delayed due to flooded connection.


Unless you are stuck with TCP for whichever reason, sending messages over UDP using one of established concepts would make this type of tuning considerably easier as well as more deterministic. Each network stack is free to implement its sending heuristics and still remain standard compliant.


OP: Relevant background.


From what I understand about the Socket class, you set a buffer size. From MSDN:
- Socket.SendBufferSize Property
Gets or sets a value that specifies the size of the send buffer of the Socket.
- Socket.ReceiveBufferSize Property
Gets or sets a value that specifies the size of the receive buffer of the Socket.

- public IAsyncResult BeginReceive(
byte[] buffer,
int offset,
int size,
SocketFlags socketFlags,
AsyncCallback callback,
Object state
)

where
size
Type: System.Int32
The number of bytes to receive.

- public IAsyncResult BeginSend(
byte[] buffer,
int offset,
int size,
SocketFlags socketFlags,
AsyncCallback callback,
Object state
)

where
size
Type: System.Int32
The number of bytes to send.

I realize that TCP is a stream, and always 'on', and therefore it's nearly impossible to keep the 'stream' full without rigorous looping. However, you make the most of each Send/Receive by getting as close to filling the buffer as you can. Keep message lengths within the bounds of the buffer sizes, and maintain a disciplined adherence to those sizes during development.


I provided the information above simply to convey that I don't feel I'm going off half-cocked. :)

There are many more sites I researched as I progressed in creating our network server project. I will include UDP - I'm not going to deny our game dev customers the choice of protocols by any means. Frankly, actual implementation of our product, and the methods chosen, are completely left up to the developer. I am simply providing tools for creating a client/server game that don't require programming the actual sockets through the async processes - and is developed to be fully integrated into the game engine.

Quote:
Original post by ddboarm
I realize that TCP is a stream, and always 'on', and therefore it's nearly impossible to keep the 'stream' full without rigorous looping. However, you make the most of each Send/Receive by getting as close to filling the buffer as you can. Keep message lengths within the bounds of the buffer sizes, and maintain a disciplined adherence to those sizes during development.


I'm guessing that you are actually talking about something else - message batching.

In a real scenario, it would work something like this:
for (Object o : objects) {
    if (o.isMoving()) {
        for (Client c : clients) c.sendMoveMessage(o.x, o.y, o.z);
    }
}


This is great, but it leaves message handling and prioritization to the generic IP network stack.


Ideally, one wants to keep a (priority) message queue per socket. As messages are sent to that socket, they are not sent immediately, but put into this queue.


The socket handler (probably IOCP-based) runs its worker threads, looping over all sockets with non-empty queues. The logic for each worker then looks something like this:

if (!queue.empty()) {
    Batch b = new Batch(MAXIMUM_BATCH_SIZE);
    while (!queue.empty() && b.hasEnoughRoomFor(queue.first())) {
        b.append(queue.pop());
    }
    socket.send(b.toBytes());
}


In addition, the batching mode would take into consideration WOULDBLOCK errors (indicating that the peer is congested, or we are sending too fast), the estimated latency and other fidelity factors (to modify area of interest, frequency of sending, or other higher-level settings), as well as potentially reorganize pending packets by priority, so that perhaps old movement updates are discarded, or some events are prioritized.


For twitch-style networking, the application would really want to keep the amount of data sent so low that on each call the entire queue gets sent. If it doesn't, the peer is congested and the data rate needs to be decreased.

For an MMO-like system, the queue will cooperate closely with area-of-interest management as well as higher-level conceptual information (perhaps distance, importance, visibility) to determine which messages should be sent, and in which order (createObject must be sent before updateObject, or neither should be sent).


None of this is in any way related to the socket's send and receive buffers, and that value should be ignored. Relying on particular behavior of those buffers is highly undesirable. However, on each send, the worker should strictly examine the return codes or other information, and handle them according to specification, to determine the health and quality of the connection and adjust handling accordingly.

Receives should always be performed immediately, trying to flush each buffer ASAP. For TCP this is quite important, since it prevents the TCP protocol's congestion handlers from kicking in and limiting transfer rates. With many TCP connections this typically is not an issue, but with UDP, leaving the buffer to fill up means packets will be lost.


And the socket buffer is just one of the buffers involved in networking; it's merely the only one exposed to the user. Under IOCP, a user-supplied buffer might be used directly instead of the socket-provided one.
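The "flush each buffer ASAP" advice can be sketched as a non-blocking drain loop (an illustration, not from this post; the function name drain and the size limits are made up): each tick, the application pulls every pending datagram out of the socket instead of just one.

```python
import socket
import time

def drain(sock, max_packets=64):
    """Pull every datagram currently queued on a non-blocking UDP socket."""
    packets = []
    sock.setblocking(False)
    for _ in range(max_packets):
        try:
            data, addr = sock.recvfrom(2048)
        except BlockingIOError:
            break                 # buffer empty: stop until next tick
        packets.append((data, addr))
    return packets

# Demo over loopback: three queued datagrams come out in one drain() call.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(b"pkt%d" % i, rx.getsockname())
time.sleep(0.05)                  # let loopback delivery settle
received = drain(rx)
print(len(received))
rx.close()
tx.close()
```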

Quote:
all I have to do is respond to the same port as the messages are received and all will be well?


Yes. All of this is mentioned in the Forum FAQ which also provides links for further reading.
