I am running a TCP connection that will need to send large chunks of data infrequently. I have a buffering system so that if the TCP send buffer becomes full, the data is stored until it can properly be sent. However, my question is: what happens if the sender's connection is much faster than the receiver's? I assume this would result in the receiver getting flooded, unless TCP has some way of dealing with it automatically? If not, what measures can I take to avoid the problem? One solution I can see would be to let the client define a data rate, but then there's the question of making that rate optimal... and if the rate were too high, the connection might begin to flood anyway. If anyone can point me in the right direction here I'd greatly appreciate it. Also, if anyone can recommend a book that deals with these sorts of issues, I'd be most interested to hear about it.
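To illustrate, my buffering system looks roughly like the following Python sketch (the `BufferedSender` class and its method names are just illustrative, not a real library): unsent bytes are queued in user space whenever the kernel's send buffer refuses more data.

```python
import socket

class BufferedSender:
    """Queues bytes that the kernel send buffer cannot accept yet."""
    def __init__(self, sock):
        sock.setblocking(False)
        self.sock = sock
        self.pending = bytearray()

    def queue_send(self, data):
        """Append data to the queue and try to flush it immediately."""
        self.pending += data
        return self.flush()

    def flush(self):
        """Hand queued bytes to the kernel; stop when it would block."""
        while self.pending:
            try:
                sent = self.sock.send(self.pending)
            except BlockingIOError:   # kernel send buffer is full; retry later
                break
            del self.pending[:sent]   # drop only the bytes the kernel took
        return not self.pending       # True once everything was handed over
```

In practice `flush()` would be called again whenever select/poll reports the socket writable.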
This isn't a detail you need to worry about under TCP. The receiver advertises a window telling the sender how much data it can still buffer, and the sender will never send beyond that, so the receiver can't be flooded — this is TCP's flow control. Separately, TCP's congestion control throttles transmissions back when packet loss is signalled, so if the bottleneck is somewhere in between, the sender settles at roughly the rate that bottleneck can carry.
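You can actually watch the kernel apply this back-pressure on a loopback connection. A small Python sketch (assumed setup, not production code): a non-blocking sender fills the buffers until the kernel stalls it, then resumes only once the receiver drains data and the window reopens.

```python
import socket
import time

# Loopback TCP pair: srv accepts, cli sends, peer is the receiving end.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
peer, _ = srv.accept()

cli.setblocking(False)
stalled = False
while True:                       # fill the send buffer and receive window
    try:
        cli.send(b"x" * 65536)
    except BlockingIOError:       # the peer isn't reading: sender is stalled
        stalled = True
        break

resumed = False
for _ in range(200):              # drain on the receiving side...
    peer.recv(1 << 16)
    try:
        cli.send(b"y")            # ...until the window reopens
        resumed = True
        break
    except BlockingIOError:       # window update not processed yet
        time.sleep(0.01)

for s in (cli, peer, srv):
    s.close()
```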
Thanks very much SiCrane. I figured there could be something like that in TCP, given all the other things that it controls for the connection, but was unsure as to how it would operate. Should save me messing around with such things anyway. Is that knowledge a result of personal experience, or is there a good information source for such networking questions?
Another question. Say I send some data on a non-blocking socket, and immediately call closesocket on that socket. If the data has not already been sent at the time the socket is "closed", is that data ignored, and the socket closed, or is all the data sent before the socket is actually removed?
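For reference, the pattern I'm comparing against is an explicit graceful close, where `shutdown` is called on the send side before closing, so the FIN goes out only after the queued data. A Python sketch of that pattern (Python's socket module mirrors the BSD/Winsock calls; whether plain `closesocket` behaves equivalently is exactly what I'm asking):

```python
import socket

a, b = socket.socketpair()
payload = b"x" * 100_000

a.sendall(payload)
a.shutdown(socket.SHUT_WR)   # "no more data from me"; queued bytes still go
a.close()

received = bytearray()
while True:
    chunk = b.recv(65536)
    if not chunk:            # EOF is seen only after all the data arrived
        break
    received += chunk
b.close()
```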