# .Net Sockets - Writing too fast?


## Recommended Posts

I'm using the following code to write to a .NET TcpClient socket:

```csharp
public static void Send(this TcpClient client, Byte[] sendBytes)
{
    if (client != null && client.Connected == true && sendBytes != null && sendBytes.Length > 0)
    {
        Object sendObject = MAILUtility.Deserialize(sendBytes);

        NetworkStream stream = client.GetStream();

        if (stream != null)
        {
            // 2-byte little-endian length prefix, followed by the message body
            Byte[] lengthBytes = new Byte[sendBytes.Length + 2];
            lengthBytes[0] = (Byte)sendBytes.Length;
            lengthBytes[1] = (Byte)(sendBytes.Length >> 8);

            Buffer.BlockCopy(sendBytes, 0, lengthBytes, 2, sendBytes.Length);

            stream.Write(lengthBytes, 0, lengthBytes.Length);
        }
    }
}
```


It works well, and the client seems to be able to handle anything thrown at it. It takes each "message" off of the "stack" (I use quotes because I'm never quite sure of the terminology) by checking the message's size, reading that many bytes, and then leaving the rest to be processed on the next pass. I'm pretty sure the client code:
```csharp
if (m_Client.Connected == true)
{
    NetworkStream stream = m_Client.GetStream();

    Byte[] buffer = new Byte[100000];

    Int32 count = stream.Read(buffer, 0, buffer.Length);
    while (count > 0)
    {
        Int32 remaining = count;
        Int32 processed = 0;
        while (remaining > 0)
        {
            Int32 messageSize = buffer[processed] | (buffer[processed + 1] << 8);

            Byte[] receivedBytes = new Byte[messageSize];
            Buffer.BlockCopy(buffer, 2 + processed, receivedBytes, 0, messageSize);

            // ... hand receivedBytes off for processing ...

            processed += messageSize + 2;
            remaining -= processed;
        }

        count = stream.Read(buffer, 0, buffer.Length);
    }
}
```


is working correctly; it seems to handle messages received quickly, even when they've all piled up into one read. But while the client can handle everything coming in, I don't think the server can handle everything going out. I say this because when the client first connects to the server, the server should send 5 messages in rapid succession, but only 3 are received.

The reason I think the server is at fault is that if I set a breakpoint on the server and slowly send each message one at a time, everything gets received. However, if I set a breakpoint on the client and wait for everything to queue up into one message, it only detects the first three messages. I'm not positive, but stepping through the client code and watching everything, I'm pretty sure that it is behaving properly.

So, unless anyone can see that I've missed anything on the client side, is there a problem with the way I have my server-side send code set up? Is NetworkStream.Write an asynchronous call that I need to wait on or something? It has no events on it, so I don't see how it could be. All of these initial messages are sent on the same thread (as the server runs that isn't always the case, but this seems to be the only time messages are getting missed). I'll provide more information as needed. Thanks.

##### Share on other sites
If you receive a partial message (which is quite possible), you will (a) get a load of junk in the message in question (because you're copying messageSize bytes from the other buffer even if they weren't there), and (b) cause the second part of the message to be read as the length of the next message, and so on forwards, permanently corrupting the connection.

Messaged TCP is not as simple as it looks ;).

I don't see any problems with the code for reading multiple complete messages, although it's late so I might be missing something.
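Here's a minimal sketch (the class and method names are my own, not from the thread) of one way to avoid that corruption: buffer everything received and only extract a message once both its 2-byte little-endian length prefix and its full body have arrived.

```csharp
using System;
using System.Collections.Generic;

// A receive buffer that only hands back a message once ALL of its bytes
// have arrived, avoiding the partial-read corruption described above.
// Framing matches the original post: 2-byte little-endian length prefix,
// then the body.
public class FrameBuffer
{
    private readonly List<byte> pending = new List<byte>();

    // Feed in whatever Read() returned, however TCP happened to fragment it.
    public List<byte[]> Append(byte[] chunk, int count)
    {
        for (int i = 0; i < count; i++)
            pending.Add(chunk[i]);

        var messages = new List<byte[]>();
        // We need the 2-byte prefix AND the whole body before extracting.
        while (pending.Count >= 2)
        {
            int size = pending[0] | (pending[1] << 8);
            if (pending.Count < 2 + size)
                break; // partial message: wait for the next Read()
            messages.Add(pending.GetRange(2, size).ToArray());
            pending.RemoveRange(0, 2 + size);
        }
        return messages;
    }
}
```

Anything left over after a Read() simply stays in `pending` until the rest of the message shows up on a later call.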

##### Share on other sites
So what's the solution to the partial message problem?

And should I maybe be considering something else to handle this type of communication, rather than using TCP? I've considered using a web service, but I don't see that working well for server-to-client communications. Perhaps WCF has something for me?

I'm not excited about replacing my communications system because, except for this bug and the one just pointed out (which I've yet to experience), it works quite well. That said, I'm open to moving to another solution if it would offer greater performance or usability without a tremendous overhaul.

##### Share on other sites
You could rely on some delimiter sequence instead of sending separate packets. It's a bit of a pain to format binary data but you can gain a lot of performance by batch sending small messages.

##### Share on other sites

Quote:
 Original post by Kaze: You could rely on some delimiter sequence instead of sending separate packets. It's a bit of a pain to format binary data, but you can gain a lot of performance by batch-sending small messages.

So I would append some character (or characters) to the end of each packet, and then I would scan each byte received until I find that character, and then take everything until that point?

If that's the case, is there an easy way to append arrays of bytes onto each other? Maybe I'll just keep them as a queue that the receive thread appends to and that some thread inspects and dissects as appropriate?

##### Share on other sites
Quote:
 is there an easy way to append arrays of bytes onto each other?

Generally, you pre-create an array that's bigger than what you initially need, and keep a separate variable to track how much of it is used. If you don't want to do that yourself, the built-in List<> class does it for you, but it hides direct access to the underlying array, so you can't easily do things like Buffer.BlockCopy() on it.
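For example, the two approaches might look something like this (the names and sizes are illustrative, not from the thread):

```csharp
using System;
using System.Collections.Generic;

// 1) Manual: an oversized array plus a "used" counter, BlockCopy-friendly.
byte[] buffer = new byte[4096];
int used = 0;

byte[] incoming = { 1, 2, 3 };
Buffer.BlockCopy(incoming, 0, buffer, used, incoming.Length);
used += incoming.Length;

// 2) Built-in: List<byte> grows for you; call ToArray() when a byte[] is needed.
var accumulated = new List<byte>();
accumulated.AddRange(incoming);
byte[] flat = accumulated.ToArray();
```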

##### Share on other sites
Quote:
 Original post by CyberSlag5k: So I would append some character (or characters) to the end of each packet, and then I would scan each byte received until I find that character, and then take everything until that point?

Pretty much. For example, make 128 the special control byte; as soon as it's hit, read the next byte to figure out what to do:
128 0 = end of message
128 128 = a literal 128

If you're sending text instead of binary data, you can make 0 (or any other invalid character) your delimiter and not have to escape it before sending.

Alternatively, you could start each message with a short indicating the length of the message.

For storing bytes until you're ready, you could just use List<byte> and .ToArray().
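A sketch of that escape-byte scheme, using 128 as the control byte exactly as described above (the method names are my own):

```csharp
using System;
using System.Collections.Generic;

// Byte-stuffing: "128 0" marks end-of-message, "128 128" encodes a
// literal 128 inside the payload.
public static class Stuffing
{
    public static byte[] Encode(byte[] payload)
    {
        var outBytes = new List<byte>();
        foreach (byte b in payload)
        {
            if (b == 128) { outBytes.Add(128); outBytes.Add(128); } // escape literal 128
            else outBytes.Add(b);
        }
        outBytes.Add(128); outBytes.Add(0); // end-of-message marker
        return outBytes.ToArray();
    }

    // Returns the complete messages found in 'stream'. A real receiver
    // would also keep any trailing partial data for the next read --
    // omitted here for brevity.
    public static List<byte[]> Decode(byte[] stream)
    {
        var messages = new List<byte[]>();
        var current = new List<byte>();
        for (int i = 0; i < stream.Length; i++)
        {
            if (stream[i] == 128 && i + 1 < stream.Length)
            {
                i++;
                if (stream[i] == 0) { messages.Add(current.ToArray()); current.Clear(); }
                else current.Add(stream[i]); // "128 128" -> literal 128
            }
            else current.Add(stream[i]);
        }
        return messages;
    }
}
```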

##### Share on other sites

The way I'm doing it right now (and this may not be a very good way, but it seems to work) is: I do an update every so often to clients that need it, so I have a large buffer (2048 bytes). Until update time I just append all my data to that buffer; at the update I send everything I can to the client, and re-add whatever the client did not receive back to my buffer.

But my system doesn't really handle lost packets :/

I have some checks I do on the data received, and if it's bad I clear the buffer; that's about it :/

##### Share on other sites
You won't "lose" data on a tcp/ip connection without losing the connection itself. It's possible that you'll send too much data, I suppose, but I imagine that'd throw an exception under .NET.

The problem (as was said above) is that you're assuming you get full packets - don't do that. Make sure you've gotten your full packet size before just running with it, and if you haven't, store what you've got so far and fill it out the rest of the way the next time you read data.

TCP/IP connections act as streams - there is no correlation between the amount of data written at one time, and what will be read at one time on the receiving end.

```
if you have enough data:
    handle the full packet
otherwise:
    store what you've got so far, as well as how much more you expect
    wait until the next read to finish it
```
That's about it. Don't assume that you have the packet size as a whole, either, even if it's only two bytes :)

Also, consider using streams to handle reading/writing from/to the stream instead of just using byte buffers. Chances are that the performance implications will be minimal, and the chances that you mess something up are far fewer. If you had tried to read data off of the end of a memory stream, the stream would have probably caught your error.

Basically, instead of reading as much as possible at one time, read the length prefix off of the stream, then read that number of bytes, and make sure you've got everything you want before processing it.
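Putting that together, here's a sketch of the read-exactly approach (the helper names are my own; a MemoryStream stands in for the NetworkStream so the example is self-contained, but any Stream works the same way):

```csharp
using System;
using System.IO;

// Read the 2-byte little-endian length prefix in full, then read exactly
// that many body bytes -- looping, because a single Read() may return
// fewer bytes than requested.
public static class Framed
{
    static void ReadExactly(Stream s, byte[] buf, int count)
    {
        int got = 0;
        while (got < count)
        {
            int n = s.Read(buf, got, count - got);
            if (n == 0) throw new EndOfStreamException(); // connection closed mid-message
            got += n;
        }
    }

    public static byte[] ReadMessage(Stream s)
    {
        byte[] prefix = new byte[2];
        ReadExactly(s, prefix, 2);               // never assume even the 2 prefix bytes arrived together
        int size = prefix[0] | (prefix[1] << 8); // little-endian, as in the original Send()
        byte[] body = new byte[size];
        ReadExactly(s, body, size);
        return body;
    }
}
```

Because ReadExactly blocks until the requested bytes arrive (or the stream ends), partial reads and messages that pile up into one read are both handled by the same code path.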