JohnnyCode

data to be read on udp server


Hello! I have a question: how do I know how much data is waiting to be read on a UDP server's listening socket? Let's say the server has 100 clients; after the select() function returns, how does the server know how much data has arrived and should be read? I use only one socket on a UDP server, right? Thanks!

In C++, select() only tells you which sockets are ready to be read from. You actually receive the data with recv() or recvfrom(), and both of those functions return the number of bytes in the packet.

If you wanted to know how many bytes are waiting before calling recv(), then I believe you may be out of luck.

And yes, normally you only create and bind a single socket in UDP.

UDP is packet-based, so each time you call recvfrom, you'll only ever get one message at a time. That means you can just give it a buffer as big as the maximum packet size you're expecting.

The absolute maximum size allowed by UDP is 65,507 bytes for IPv4 (slightly less for IPv6) though you'd be absolutely mad to ever try and send a packet that big - the bigger the packet, the more fragmented it'll get and the more likely one of those fragments will be lost and the whole packet dropped. You're better off limiting your packets to around 1,000 bytes or less.

Thanks!
So I guess it is up to me to predict how many messages/packets I should have received. I should be able to work that out, since I know how many clients I have and how much data they send.

Quote:
So I guess it is up to me to predict how many messages/packets I should have received.


You don't need to guess. You either implement a well-defined protocol, where the max packet size will be known, or you define your own protocol, where you can decide what the max packet size should be. The max possible size of a UDP datagram is on the order of 65535 bytes, but generally, packets that large do not perform well on the Internet.

Quote:
Original post by hplus0603
The max possible size of a UDP datagram is on the order of 65535 bytes, but generally, packets that large do not perform well on the Internet.

Not only do they not perform well on the internet, but Windows defaults to an MTU of 1500 bytes on Ethernet networks (MTU stands for "maximum transmission unit" and is the size of the largest packet that can be transmitted without being automatically broken into multiple parts), and some versions of Windows will silently drop UDP packets greater than that MTU.

So try to design your network to send packets that are smaller than 1500 bytes at a time. Don't forget that IPv4 packet headers are 20 bytes each (IPv6 is 40 bytes if you like programming for the future), and UDP itself has another 8 bytes of overhead, so subtract those from that 1500 byte MTU to know how much you can really fit into a single packet.

...and honestly, in my own code I'm way more conservative than that. I'm not suggesting that you must emulate the 1200 byte MTU I constrain myself to, but it will likely improve your reliability to low-ball the MTU you expect to see.

Edit: I could swear that modern versions of Windows still don't handle large/fragmented UDP packets well, as I know I worked on a tool at work that wasn't receiving large UDP packets on a Windows Vista box, and I could swear there was an updated KB article discussing it, but I can't seem to find a link to the article right now. Maybe it's just late and my head is on crack.

I believe the bottom line is that after calling select() on the UDP socket to determine whether you have data waiting to be received, you cannot know the exact size of the packet in question. As such you will, as others have pointed out, define a maximum size that a packet may have in your protocol. Around a kilobyte should be sufficient for most purposes. If you find yourself needing to stuff more data into a packet, consider splitting it up into smaller packets instead.

Here's how you might handle the receiving:

unsigned char buffer[1024];
sockaddr_in fromAddr;
socklen_t fromAddrLen = sizeof(fromAddr); // use int on Winsock
int bytesReceived = recvfrom(udpSocket, (char*)buffer, sizeof(buffer), 0,
                             (sockaddr*)&fromAddr, &fromAddrLen);

if(bytesReceived < 0)
{
    // Handle error - throw exception, exit, etc...
}




Quote:
Original post by Indigo Darkwolf
Not only do they not perform well on the internet, but Windows defaults to an MTU of 1500 bytes on Ethernet networks (MTU stands for "maximum transmission unit" and is the size of the largest packet that can be transmitted without being automatically broken into multiple parts), and some versions of Windows will silently drop UDP packets greater than that MTU.


Thanks for sharing! Rating++ :)

Quote:
Original post by hplus0603
You don't need to guess. You either implement a well-defined protocol, where the max packet size will be known, or you define your own protocol, where you can decide what the max packet size should be.
And this isn't even as hard as it sounds. When you recv, you get to supply a buffer size. Make that one byte more than the maximum size you want to receive, so you can detect packets that are too big.

Thus, you will receive packets which are either max_size or less, or packets that are max_size + 1. They may actually be much bigger, but that does not matter, the network stack will just throw away the excess data.
If they're max_size+1, you have detected an error. You know that this can't be a legitimate packet of yours (because you know the maximum size and never send packets bigger than that) and therefore you can safely discard it, whatever it is.

You can actually know the size of the packet you will receive by using MSG_PEEK. However, that means you enter the network stack two or three times (select, peek, receive) which is bad for performance. And the question still remains: what do you do after you know the size? You still need to deal with the maximum possible message size for your protocol, so to avoid dynamic memory allocation, make each receive buffer at least that size.

Note that UDP is a thin wrapper on top of IP (adding only size, checksum, and source and destination port fields, at 16 bits each), so fragmentation of a UDP datagram actually happens at the IP layer, not the UDP layer. I don't think modern Windows silently drops fragmented IP packets, or many things would be broken...

Quote:
Original post by hplus0603
I don't think modern Windows silently drops fragmented IP packets, or many things would be broken...

All I can tell you is what the KB articles I linked to say. Even though fragmentation is supposed to happen at the IP layer, some versions of Windows drop all but the last fragment of an oversized UDP packet. Don't ask me why, I've never seen Microsoft's network layer implementation.

I think this is still a relevant issue to be aware of because I know I ran into this within the past couple of years while working on a tool (written in C#, but was not an XNA app) that communicated with an Xbox 360. Specifically, my C# app was receiving UDP packets that contained the back end of truncated portions of my data, as if the front portion was being dropped, which is at least consistent with the KB article I linked to above. I could have sworn I found a current KB article at the time confirming this still happens, but I can't find it now. Even if it's been patched out, I can't imagine MS would have removed the KB article, so I think my Google-fu is just failing me. That, or it was a quirk in the Xbox 360 firmware (probably inherited from whatever Windows code the 360's firmware is based on) that I can't find because I'm on vacation and never bothered to memorize my password to Microsoft's Xbox Developer Support site.

I did manage to Google up a couple other passing mentions about commercial routers dropping UDP datagrams rather than fragmenting them (regardless of the packet's Don't Fragment flag), so I still think the answer is to pick a conservative max packet size.

Depending on your player count and network update frequency, you may not want to be sending kilobyte-sized UDP packets anyways, or you might flood your upstream to the internet. Just some food for thought; if you know your network update rate and how many connections you plan to support, then you should have enough information in this thread to crunch the numbers yourself and get an idea of how much bandwidth your game can consume.
