[C#.NET Socket] BeginSend behaves unexpectedly.

I am trying to create a networking library for a multiplayer game. My chosen implementation uses asynchronous socket communication. In my session(s) I use fixed-size send and receive buffers, pinning them to memory in order to marshal/unmarshal structs. My messages are therefore structs, and I do mean plural, since there is a struct per message type. Example:
- Chat
- Position update
- State transitions
- etc.

Needless to say, the messages vary in size. In order to handle every type I need a large enough buffer. When a struct is marshalled into the buffer I also know its size in bytes, and I then use the Socket.BeginSend(byte[], int, int, ...) call to transmit the marshalled bytes. The start index is always 0 (zero), while the length is the size of the struct in bytes.

This goes well until I try to send multiple successive packets. Let's say I send a chat, a transition request, and then a chat again: 3 successive BeginSend calls. What I have found out via tracing is that the first message is sent and received correctly. The second is also sent correctly, but along with it comes the third message, truncated to fill out the send buffer. The third send then contains the tail of the third message. In other words, if the buffer has size 256 and all messages have size 200, then the first send transmits message 1 (200 bytes), the second send transmits message 2 plus 56 bytes of message 3 (256 bytes), and the third send contains the remaining 144 bytes of message 3. The result is that unmarshalling the 3rd message on the receiver side returns gibberish.

I have found a workaround, which is to send the entire buffer (256 bytes) on each call regardless of message size. I would like to believe that I don't have to, and that a BeginSend call with a specified length only sends that many bytes. Is this a bug or a feature, as they say? :) Or is my approach inappropriate?
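For illustration, the send path I'm describing looks roughly like this (struct and names are simplified stand-ins, not my exact code):

using System;
using System.Net.Sockets;
using System.Runtime.InteropServices;

[StructLayout( LayoutKind.Sequential )]
struct ChatMessage                    // simplified stand-in for one message type
{
    public byte Type;                 // message type discriminator
    public int SenderId;
}

static class SendSketch
{
    // Marshal one struct into the pinned send buffer, then transmit
    // exactly that many bytes starting at index 0.
    internal static void Send( Socket socket, byte[] sendBuffer, IntPtr pinnedPtr, ChatMessage msg )
    {
        int size = Marshal.SizeOf( msg );
        Marshal.StructureToPtr( msg, pinnedPtr, false );
        socket.BeginSend( sendBuffer, 0, size, SocketFlags.None,
                          ar => socket.EndSend( ar ), null );
    }
}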
TCP Sockets, whether accessed through .NET or C/C++, whether on Windows or Linux, act like streams: they can split and combine data in any way they want when transmitting to the other end of the socket, as long as the order of the data remains unchanged.

Normally, there's a threshold: the socket will wait for up to some short time (on the order of milliseconds) for you to put additional data into its buffer before sending. This is done to avoid sending lots of small packets (remember that each packet carries a certain overhead, so transmitting 10 bytes of payload with 40 bytes of packet headers would be inefficient). Splitting and combining can also happen outside of your control as the packets are routed and forwarded through various systems on their way across the internet.

If you need your message to stay intact, either invent some bracketing scheme (e.g. prefix any structure you send with its length) or use a UDP socket. UDP transmits only whole datagrams. However, delivery is not guaranteed, so you'll have to verify arrival yourself.
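A minimal sketch of the length-prefix idea on the sending side (names illustrative, error handling omitted):

using System;
using System.Net.Sockets;

static class Framing
{
    // Prefix each message with a 4-byte length so the receiver can
    // rebuild message boundaries from the byte stream.
    internal static void SendFramed( Socket socket, byte[] payload, int payloadLength )
    {
        byte[] framed = new byte[ 4 + payloadLength ];
        BitConverter.GetBytes( payloadLength ).CopyTo( framed, 0 );   // length header
        Array.Copy( payload, 0, framed, 4, payloadLength );           // message body
        socket.BeginSend( framed, 0, framed.Length, SocketFlags.None,
                          ar => socket.EndSend( ar ), null );
    }
}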
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter - Topics: Ogre3D, Blender, game architecture tips & code snippets.
Thanks for the reply.
If you are referring to the Nagle algorithm, I have disabled it in order to achieve the initially intended behavior.
As in, one packet -> one transmission.
This is assuming that the API call does what it says it does.
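(For reference, disabling Nagle in .NET is the Socket.NoDelay property; note that it only stops the stack from coalescing small writes, it does not give TCP message boundaries.)

socket.NoDelay = true;   // disable Nagle's algorithm on this socket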

I still don't understand why this happens. My own buffer just defines the byte sequence to send; the underlying system is unaware of its size.
If I send the entire buffer, everything works as supposed, but sending a fraction messes it up (only on successive calls).
The underlying socket buffer remains at its default size (8192 bytes):
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.sendbuffersize.aspx
Which in theory would mean that all messages of 200 bytes would be buffered and then sent.
Quote:Original post by Annoyed
As in, one packet -> one transmission.


Even if you manage to do this over TCP at the _sending_ end, you still need to realise that TCP is a stream protocol (as Cygon noted), and you can't make the same assumption at the _receiving_ end. There are no messages, no packets, no arbitrary boundaries related to 'send' calls; just data in the order it was transmitted. It works if you fix your data to 256 bytes because both ends then know that valid data only arrives on 256-byte boundaries.
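To illustrate: because a single receive may legally return fewer bytes than asked for, reading a fixed-size record means looping (sketch, assuming a blocking Socket):

using System.Net.Sockets;

static class StreamReading
{
    // Keep calling Receive until 'count' bytes have arrived; a single call
    // may return as little as 1 byte.
    internal static void ReceiveExactly( Socket socket, byte[] buffer, int count )
    {
        int received = 0;
        while ( received < count )
        {
            int n = socket.Receive( buffer, received, count - received, SocketFlags.None );
            if ( n == 0 )
                throw new SocketException();   // connection closed mid-record
            received += n;
        }
    }
}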

Jansic.
Thanks, you just made me realize that the problem might be the receiver's BeginReceive call, where you also have to specify the buffer and the length to receive:
http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx

Since I don't know initially what I am receiving, I have to specify the full buffer length.

The confusion remains as to why it happens after the second message and not after the first ... It will have to remain unresolved :)

So the conclusion is that I have to narrow down the buffer as much as possible to reduce "wasted" bandwidth. Possibly set up several channel links.

My current implementation has 512-byte chat messages (+header), and some messages are below 100 bytes total, so sending the entire buffer every time inevitably leads to some excess... Back to the thinking box ...
TCP sockets work like files: they are streams, not packetized. If you want to parse separate packets, you have to first send a length field, followed by the actual data. When you receive, you have to decode the length field, and if there isn't enough data yet, keep what you have in the buffer until more comes in.

The merging of packets can happen at any time, depending on network and system conditions.

For more information, see the Forum FAQ.
enum Bool { True, False, FileNotFound };
Also, to make this painfully obvious, I'll give you an example.

Client sends "foobar\0" to server and then sends "barbaz\0". Server calls recv and it could get 1 byte or 10 bytes. You don't know until you check the size. So you have to write this data to a buffer as you get it.

The server could do this:
call recv and get a buffer of 1 byte: "f"
call recv and get a buffer of 2 bytes: "oo"

The reason I'm saying it could be just 1 byte is that I've seen people write an unsigned integer or a short to represent the length and do things like:

call recv, get a buffer of 1 byte, and the programmer accidentally pulls off 4 bytes to grab the length. Oh no. Kaboom.

So make a buffer that you use to process incoming packets per player. The naive way of doing this is having:
int packetLength = 0;
List<byte> incomingBuffer = new List<byte>();
and then in your recv callback
int size;
while ( ( size = socket.Receive( buffer ) ) != 0 )
{
    int offset = 0;   // how far into 'buffer' we have consumed
    while ( offset < size )
    {
        if ( packetLength == 0 )
        {
            // Accumulate the 4-byte length prefix first.
            while ( incomingBuffer.Count < 4 && offset < size )
            {
                incomingBuffer.Add( buffer[ offset++ ] );
            }
            if ( incomingBuffer.Count == 4 )
            {
                packetLength = BitConverter.ToInt32( incomingBuffer.ToArray(), 0 );
                incomingBuffer.Clear();
            }
        }
        if ( packetLength != 0 && incomingBuffer.Count != packetLength )
        {
            // Copy payload bytes until the packet is complete or 'buffer' runs dry.
            while ( offset < size && incomingBuffer.Count < packetLength )
            {
                incomingBuffer.Add( buffer[ offset++ ] );
            }
        }
        if ( packetLength != 0 && incomingBuffer.Count == packetLength )
        {
            // In case you like to process packets in your main thread I put a lock.
            lock ( packetQueueMutex )
            {
                packetQueue.Enqueue( incomingBuffer );
            }
            packetLength = 0;
            incomingBuffer = new List<byte>();
        }
    }
}

That's the super naive version, but it works for small games.
@hplus0603
Yeah, the picture has sunk in from the previous posts and after relating this more precisely to the FAQ.

@Sirisian & hplus0603
What I do is have the first field of the struct be a byte which constitutes the type of the struct being transmitted.
Thus, by peeking at the first byte in the buffer, I know how many bytes are supposed to follow and I can unmarshal.
This seemed so plausible under my initial assumption. Further, I'd like to say that it worked flawlessly until I tried sending successive packets/structs (within a loop or similar). But I suppose that was coincidental.
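Roughly like this (the struct names are placeholders for my actual message types):

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

[StructLayout( LayoutKind.Sequential )]
struct ChatMessage { public byte Type; /* ... */ }
[StructLayout( LayoutKind.Sequential )]
struct PositionUpdate { public byte Type; public float X, Y; }

static class MessageSizes
{
    // Map the leading type byte to the size of the struct that follows,
    // so the receiver knows how many bytes make up the message.
    static readonly Dictionary<byte, int> sizeByType = new Dictionary<byte, int>
    {
        { 1, Marshal.SizeOf( typeof( ChatMessage ) ) },
        { 2, Marshal.SizeOf( typeof( PositionUpdate ) ) },
    };

    internal static int ExpectedSize( byte typeByte )
    {
        return sizeByType[ typeByte ];
    }
}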

To make it work in all cases I just have to pass identical fixed-length parameters to the BeginSend and BeginReceive calls on the respective sides, and this will lead to excess bytes being transmitted. Then I'll reevaluate whether it still seems plausible :)

All in all, I guess that keeping the varying-size packets being marshalled, transmitted and then unmarshalled is another argument to use UDP instead.
I need to stop being lazy ...

Thanks both
Quote:Original post by Annoyed
What I do is have the first field of the struct be a byte which constitutes the type of the struct being transmitted.
Thus, by peeking at the first byte in the buffer, I know how many bytes are supposed to follow and I can unmarshal.
This seemed so plausible under my initial assumption. Further, I'd like to say that it worked flawlessly until I tried sending successive packets/structs (within a loop or similar). But I suppose that was coincidental.
Sounds like you just have a simple bug. Also, I find it nice to create my own "packets" so they act as atomic state updates. Since you chose to use structs, they might look like:
2 bytes  length of packet
1 byte   struct type
N bytes  struct
1 byte   struct type
N bytes  struct
1 byte   struct type
N bytes  struct
2 bytes  length of packet
1 byte   struct type
N bytes  struct


The reason being that sometimes you want to update the full state of the game, let's say, with one packet (a single transaction). You might not want to evaluate it over multiple recv calls where the state can be changing between time steps.
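A sketch of building such a packet (entries assumed to be already marshalled to byte arrays; names illustrative):

using System;
using System.Collections.Generic;

static class PacketBuilder
{
    // Concatenate (type byte, struct bytes) entries and prepend a 2-byte
    // length so the whole update travels as one atomic "packet".
    internal static byte[] Build( List<KeyValuePair<byte, byte[]>> entries )
    {
        List<byte> body = new List<byte>();
        foreach ( KeyValuePair<byte, byte[]> entry in entries )
        {
            body.Add( entry.Key );          // 1 byte struct type
            body.AddRange( entry.Value );   // N bytes marshalled struct
        }
        byte[] packet = new byte[ 2 + body.Count ];
        BitConverter.GetBytes( (ushort)body.Count ).CopyTo( packet, 0 );   // 2 bytes length
        body.CopyTo( packet, 2 );
        return packet;
    }
}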

Quote:Original post by Annoyed
All in all, I guess that keeping the varying-size packets being marshalled, transmitted and then unmarshalled is another argument to use UDP instead.
You can handle the same thing with TCP, as I just mentioned. I also gave you the pseudocode for receiving "packets" with a length prefix. When you get a full "packet" you can then process it at that point.

Yes, the bug is that I don't consider the possibility of receiving partial packets/structs.
Also, I use a fixed buffer in order to use the Marshal tool in the .NET Framework. To do so you need to pin the byte buffer to memory using a GCHandle.

byte[] buffer = new byte[ size ];
GCHandle handle = GCHandle.Alloc( buffer, GCHandleType.Pinned );
IntPtr ptr = handle.AddrOfPinnedObject();


The only downside is that you have now told the garbage collector not to move that buffer, and you have to call Free() on the handle when done.
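One safe pattern for that (a sketch; in my case the handle lives as long as the session, so Free() goes in the teardown instead):

byte[] buffer = new byte[ size ];
GCHandle handle = GCHandle.Alloc( buffer, GCHandleType.Pinned );
try
{
    IntPtr ptr = handle.AddrOfPinnedObject();
    // ... marshal/unmarshal structs through ptr ...
}
finally
{
    handle.Free();   // unpin so the GC may move/collect the buffer again
}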

Now you can just marshal and unmarshal the struct to and from the IntPtr.
In my case:
static class MessagePacker
{
    internal static int Pack( object package, IntPtr bufferPtr )
    {
        Marshal.StructureToPtr( package, bufferPtr, true );
        return Marshal.SizeOf( package );
    }

    internal static T UnPack<T>( IntPtr bufferPtr ) where T : struct
    {
        return ( T )Marshal.PtrToStructure( bufferPtr, typeof( T ) );
    }
}

This only works if the message is packed into the buffer starting at index 0 (zero).
So when I receive partial packets, the whole thing gets out of sync.
If I pack a message on the sender and then flush the entire buffer it works, but it isn't pretty, and lots of padding bytes get transmitted.

So the fixed-buffer approach has to be reevaluated if I want to use TCP :)
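One possible fix (sketch): unmarshal at an offset into the pinned buffer instead of requiring the struct to sit at index 0:

using System;
using System.Runtime.InteropServices;

static class OffsetUnpack
{
    // Unmarshal a struct that starts 'offset' bytes into the pinned buffer.
    internal static T UnPackAt<T>( IntPtr bufferPtr, int offset ) where T : struct
    {
        IntPtr structPtr = new IntPtr( bufferPtr.ToInt64() + offset );
        return ( T )Marshal.PtrToStructure( structPtr, typeof( T ) );
    }
}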
