JohnnyCode

data to be read on udp server


hello! I have a question: how do I know how much data is waiting to be read on a UDP server's listening socket? Let's say the server has 100 clients; how does the server know, after the select function returns, how much data has arrived and should be read? I use only one socket on a UDP server, right? Thanks

In C++, select() only tells you which sockets are ready to be read from. You would actually receive the data with recv() or recvfrom(), and both of these functions return the number of bytes in the packet.

If you wanted to know how many bytes are waiting before calling recv(), then I believe you may be out of luck.

And yes, normally you only create and bind a single socket in UDP.
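
A minimal sketch of that flow (POSIX-style; on Winsock the types differ slightly, e.g. the address length is an int rather than socklen_t, and select() ignores its first parameter):

// Sketch only: assumes udpSocket is an already-created, bound UDP socket,
// and the usual socket headers are included (<sys/socket.h> and friends,
// or winsock2.h on Windows).
fd_set readSet;
FD_ZERO(&readSet);
FD_SET(udpSocket, &readSet);

// Wait until the socket has a datagram ready (NULL timeout = block forever).
if (select(udpSocket + 1, &readSet, NULL, NULL, NULL) > 0)
{
    char buffer[1024];                // sized to your protocol's max packet
    sockaddr_in from;
    socklen_t fromLen = sizeof(from);

    // recvfrom() dequeues exactly one datagram and returns its size in bytes.
    int bytesReceived = recvfrom(udpSocket, buffer, sizeof(buffer), 0,
                                 (sockaddr*)&from, &fromLen);
    if (bytesReceived >= 0)
    {
        // 'from' now holds the client's address; bytesReceived is the
        // size of this one packet.
    }
}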

UDP is packet-based, so each time you call recvfrom, you'll only ever get one message at a time. That means you can just give it a buffer that is as big as the maximum packet size you're expecting.

The absolute maximum size allowed by UDP is 65,507 bytes for IPv4 (slightly more for IPv6), though you'd be absolutely mad to ever try and send a packet that big - the bigger the packet, the more fragments it gets split into, and the more likely one of those fragments will be lost and the whole packet dropped. You're better off limiting your packets to around 1,000 bytes or less.

Quote:
So I guess it is up to me to predict how many messages/packets I should have received.

You don't need to guess. You either implement a well-defined protocol, where the max packet size will be known, or you define your own protocol, where you can decide what the max packet size should be. The max possible size of a UDP datagram is on the order of 65535 bytes, but generally, packets that large do not perform well on the Internet.
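
As a sketch of what "decide what the max packet size should be" can look like in code (the constant and helper here are purely illustrative):

// Illustrative protocol constant - both sender and receiver agree on it.
const int MAX_PACKET_SIZE = 1200;

// Refuse to send anything the protocol doesn't allow; sendto() transmits
// one datagram per call. (SOCKET is the Winsock handle type; int on POSIX.)
bool sendPacket(SOCKET s, const sockaddr_in& to, const char* data, int size)
{
    if (size > MAX_PACKET_SIZE)
        return false;
    return sendto(s, data, size, 0, (const sockaddr*)&to, sizeof(to)) == size;
}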

Quote:
Original post by hplus0603
The max possible size of a UDP datagram is on the order of 65535 bytes, but generally, packets that large do not perform well on the Internet.

Not only do they not perform well on the internet, but Windows has a default MTU of 1500 bytes on Ethernet networks (MTU stands for "maximum transmission unit" and is the size of the largest packet that can be transmitted without being automatically broken into multiple parts), and some versions of Windows will silently drop UDP packets greater than that MTU.

So try to design your network to send packets that are smaller than 1500 bytes at a time. Don't forget that IPv4 packet headers are 20 bytes each (IPv6 headers are 40 bytes, if you like programming for the future), and UDP itself adds another 8 bytes of overhead, so subtract those from the 1500-byte MTU to know how much you can really fit into a single packet.

...and honestly, in my own code I'm way more conservative than that. I'm not suggesting that you must emulate the 1200 byte MTU I constrain myself to, but it will likely improve your reliability to low-ball the MTU you expect to see.
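
The arithmetic from the previous paragraphs, spelled out (the 1200 is just the conservative number I mentioned; pick your own):

const int ETHERNET_MTU     = 1500; // typical Ethernet default
const int IPV4_HEADER_SIZE = 20;   // 40 for IPv6
const int UDP_HEADER_SIZE  = 8;

// What actually fits in one unfragmented packet:
const int MAX_PAYLOAD_IPV4 = ETHERNET_MTU - IPV4_HEADER_SIZE - UDP_HEADER_SIZE; // 1472
const int MAX_PAYLOAD_IPV6 = ETHERNET_MTU - 40 - UDP_HEADER_SIZE;               // 1452

const int CONSERVATIVE_PAYLOAD = 1200; // what I actually limit myself to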

Edit: I could swear that modern versions of Windows still don't handle large/fragmented UDP packets well. I worked on a tool at work that wasn't receiving large UDP packets on a Windows Vista box, and I could swear there was an updated KB article discussing it, but I can't seem to find a link to it right now. Maybe it's just late and my head is on crack.

I believe the bottom line is this: after calling select() on the UDP socket to determine whether you have data waiting to be received, you cannot know the exact size of the packet in question. So you will, as others have pointed out, define a maximum size that a packet may have in your protocol. Around a kilobyte should be sufficient for most purposes; if you find yourself needing to stuff more data into a packet, consider splitting it across several smaller packets instead.

Here's how you might handle the receiving:

unsigned char buffer[1024];
sockaddr_in fromAddr;
int fromAddrLen = sizeof(fromAddr); // socklen_t on POSIX systems
int bytesReceived = 0;

bytesReceived = recvfrom(udpSocket, (char*)buffer, sizeof(buffer), 0,
                         (sockaddr*)&fromAddr, &fromAddrLen);

if(bytesReceived < 0)
{
    // Handle error - throw exception, exit, etc...
}




Quote:
Original post by Indigo Darkwolf
Not only do they not perform well on the internet, but Windows has a default MTU of 1500 bytes on Ethernet networks (MTU stands for "maximum transmission unit" and is the size of the largest packet that can be transmitted without being automatically broken into multiple parts), and some versions of Windows will silently drop UDP packets greater than that MTU.


Thanks for sharing! Rating++ :)

Quote:
Original post by hplus0603
You don't need to guess. You either implement a well-defined protocol, where the max packet size will be known, or you define your own protocol, where you can decide what the max packet size should be.
And this isn't even as hard as it sounds. When you recv, you get to supply a buffer size. Make that one byte more than the maximum size you want to receive, so you can detect packets that are too big.

Thus, you will receive packets which are either max_size or less, or packets that are max_size + 1. They may actually be much bigger, but that does not matter, the network stack will just throw away the excess data.
If they're max_size+1, you have detected an error. You know that this can't be a legitimate packet of yours (because you know the maximum size and never send packets bigger than that) and therefore you can safely discard it, whatever it is.
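
In code, that one-extra-byte trick might look like this (a minimal sketch; note that on Winsock an oversized datagram instead makes recvfrom() fail with WSAEMSGSIZE, which is just as easy to check for):

const int MAX_SIZE = 1200;      // your protocol's maximum packet size
char buffer[MAX_SIZE + 1];      // one extra byte, purely for detection

sockaddr_in from;
socklen_t fromLen = sizeof(from); // 'int' on Winsock
int n = recvfrom(udpSocket, buffer, sizeof(buffer), 0,
                 (sockaddr*)&from, &fromLen);

if (n == MAX_SIZE + 1)
{
    // Bigger than anything we would ever send - not a legitimate packet,
    // so discard it (the stack already threw away the excess bytes).
}
else if (n >= 0)
{
    // A packet of valid size: n bytes in buffer.
}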

You can actually know the size of the packet you will receive by using MSG_PEEK. However, that means you enter the network stack two or three times (select, peek, receive), which is bad for performance. And the question still remains: what do you do once you know the size? You still need to handle the maximum possible message size for your protocol, so to avoid dynamic memory allocation, make each receive buffer at least that size.
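
A sketch of the MSG_PEEK variant, for completeness (it also shows why peeking rarely buys you anything: the buffer still has to be max-protocol-size):

char peekBuffer[1200];            // still must hold the max possible message
sockaddr_in from;
socklen_t fromLen = sizeof(from);

// MSG_PEEK copies the datagram out without removing it from the queue.
int pending = recvfrom(udpSocket, peekBuffer, sizeof(peekBuffer), MSG_PEEK,
                       (sockaddr*)&from, &fromLen);

// 'pending' is the size of the next datagram; the very same datagram will
// be returned again by the next recvfrom() call without MSG_PEEK.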

Note that UDP is a thin wrapper on top of IP (it adds only size, checksum, source port and destination port fields, 16 bits each), so fragmentation of a UDP datagram actually happens at the IP layer, not the UDP layer. I don't think modern Windows silently drops fragmented IP packets, or many things would be broken...

Quote:
Original post by hplus0603
I don't think modern Windows silently drops fragmented IP packets, or many things would be broken...

All I can tell you is what the KB articles I linked to say. Even though fragmentation is supposed to happen at the IP layer, some versions of Windows drop all but the last fragment of an oversized UDP packet. Don't ask me why, I've never seen Microsoft's network layer implementation.

I think this is still a relevant issue to be aware of because I know I ran into this within the past couple of years while working on a tool (written in C#, but was not an XNA app) that communicated with an Xbox 360. Specifically, my C# app was receiving UDP packets that contained the back end of truncated portions of my data, as if the front portion was being dropped, which is at least consistent with the KB article I linked to above. I could have sworn I found a current KB article at the time confirming this still happens, but I can't find it now. Even if it's been patched out, I can't imagine MS would have removed the KB article, so I think my Google-fu is just failing me. That, or it was a quirk in the Xbox 360 firmware (probably inherited from whatever Windows code the 360's firmware is based on) that I can't find because I'm on vacation and never bothered to memorize my password to Microsoft's Xbox Developer Support site.

I did manage to Google up a couple other passing mentions about commercial routers dropping UDP datagrams rather than fragmenting them (regardless of the packet's Don't Fragment flag), so I still think the answer is to pick a conservative max packet size.

Depending on your player count and network update frequency, you may not want to be sending kilobyte-sized UDP packets anyways, or you might flood your upstream to the internet. Just some food for thought; if you know your network update rate and how many connections you plan to support, then you should have enough information in this thread to crunch the numbers yourself and get an idea of how much bandwidth your game can consume.

Quote:
Original post by Indigo Darkwolf
Quote:
Original post by hplus0603
I don't think modern Windows silently drops fragmented IP packets, or many things would be broken...

All I can tell you is what the KB articles I linked to say. Even though fragmentation is supposed to happen at the IP layer, some versions of Windows drop all but the last fragment of an oversized UDP packet. Don't ask me why, I've never seen Microsoft's network layer implementation.
According to the KB article you linked, it only drops the first UDP datagram you send, and only when no ARP entry exists for the receiving host. So if you send one packet under 1500 bytes first, subsequent packets will be OK. Also, that article only applies up to Windows 2000 anyway.

The thing you have to remember is, if you're sending large packets, they're going to get broken up by the IP layer anyway. So if you try and send a 15KB packet, it's going to get broken up into roughly 11 fragments at the IP layer (each fragment carries around 1,480 bytes on a 1,500-byte MTU link). If just one of those fragments is dropped, the receiving IP layer will be unable to reassemble the original datagram and the whole thing is dropped. So the larger your packet size, the higher the probability that the packet will be lost.
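
To put rough numbers on that (illustrative figures, not measurements): if each IP fragment is lost independently with probability 1%, a datagram split into 11 fragments arrives intact with probability 0.99^11 ≈ 0.90. Sending big datagrams has effectively turned 1% packet loss into roughly 10% packet loss.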

In general, for most real-time applications, smaller packets are more desirable anyway.
Quote:
Original post by Indigo Darkwolf
I did manage to Google up a couple other passing mentions about commercial routers dropping UDP datagrams rather than fragmenting them
This is something I've heard of as well. As I said, smaller packet sizes are usually the better idea anyway (well, within reason... 1 byte packets would be equally dumb [smile])

Quote:
Original post by Indigo Darkwolf
In C++, select() only tells you which sockets are ready to be read from. You would actually receive the data with recv() or recvfrom(), and both of these functions return the number of bytes in the packet.

And yes, normally you only create and bind a single socket in UDP.


Thanks for the replies!
This is my server concept:
a client can send two types of datagrams, and each fits in one packet.
Thanks to the buffer size matching the max packet size, I do not need to know how big the packet is to fetch it whole; I just need to interpret the data correctly according to the datagram's type.
All these datagrams are to be read on my server's listening socket.

So after I read the first datagram, do I call the select function again to see whether there is another datagram on the socket, and repeat this until I have read all the data? Or should I rather set the socket to non-blocking and repeat recvfrom until I get a WOULDBLOCK error? Which is the better approach?


The beauty of UDP is that you don't necessarily have to do anything of the sort. Data arrives in packets, and either you get a packet or you don't. There is no "repeat until all data has arrived" like with TCP. When a different packet arrives, that's a different entity, but again it will be complete or it won't arrive at all.

Also, you don't necessarily need select at all. It depends on your implementation of course, but in principle, you can get away perfectly well by just blocking on recvfrom. When a packet arrives, the complete data, including the sender's address, is already there when your thread wakes up, and while no packets arrive, you don't consume any CPU cycles at all. You also make only one syscall instead of two or three, and avoid the overhead of copying file descriptor sets to kernel space and so on.
For a game, which will usually want to send out some data periodically even if nothing is received, you will probably not want to block the main thread, but do this in a separate thread; that's just an implementation detail.
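
A minimal sketch of that blocking, separate-thread approach (the handoff to the game thread is elided; names are illustrative):

// Dedicated receive thread: sleeps in the kernel until a datagram arrives.
void receiveLoop(int udpSocket) // SOCKET on Winsock
{
    char buffer[1024];
    sockaddr_in from;

    for (;;)
    {
        socklen_t fromLen = sizeof(from);
        int n = recvfrom(udpSocket, buffer, sizeof(buffer), 0,
                         (sockaddr*)&from, &fromLen);
        if (n < 0)
            break; // real error or socket closed - bail out

        // Hand the n-byte packet to the game thread here, e.g. through a
        // mutex-protected queue (elided).
    }
}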

Quote:
Original post by samoth
For a game, which will usually want to send out some data periodically even if nothing is received, you will probably not want to block the main thread, but do this in a separate thread; that's just an implementation detail.

Needless multithreading, boo hiss. :P
/* initialization (WSAStartup omitted) */
SOCKET s;
s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

u_long nonBlocking = 1;
ioctlsocket(s, FIONBIO, &nonBlocking); // non-blocking socket FTW

sockaddr_in local_addr;
local_addr.sin_family = AF_INET;
local_addr.sin_addr.s_addr = inet_addr("127.0.0.1");
local_addr.sin_port = htons(9001); // it's over 9000! what, 9000?! just youtube for it.
bind(s, (SOCKADDR*)(&local_addr), sizeof(local_addr));

/* elsewhere... */
char packet_buffer[1024];
int bytes_received;
while(true) {
    /* pretend "socket s" still exists in this scope because I made it a
       static member of the compilation unit or passed it into the current
       function... or something. */

    bytes_received = recv(s, packet_buffer, sizeof(packet_buffer), 0);

    if( bytes_received == SOCKET_ERROR ) {
        /* WSAGetLastError() == WSAEWOULDBLOCK just means "no packets
           waiting"; anything else is a real error. Either way, we're done
           draining the socket this frame. */
        break;
    }

    /* process packet here */
}


Disclaimer: I am not a fan of multithreading, unless I know I have multiple cores, and have run out of CPU time on my main thread, and have already spent time optimizing the stuff that's time-consuming on the main thread. And even then I'll look towards what's actually time consuming before I multithread my networking. But then, I am not a fan of multithreading.

Edit: Other disclaimer: If you happen to work on a platform that doesn't support non-blocking UDP sockets... then I guess you have no choice. Thread carefully!

Thanks!
I think I am getting it. This is my server:
The server will have a list of structures, one for each client it serves. The server expects that once it has read num-of-clients game packets, every player in the list has their new state.
Players that did not get updated are marked "connection idle". After this the server immediately starts reading packets again, until it has read them all or until num-of-clients packets have been received. During this second read the server updates the players marked "connection idle" (along with the other players) if their state arrives in those packets. If a player stays "connection idle" for more than 100 server peek iterations, the player is marked "connection offline". This double read of game packets also ensures the server has more up-to-date data after the peek if the server becomes CPU bound.
My server then uses the player state list to compute answers to the clients.

