Posted 28 April 2000 - 07:55 AM
I am running a TCP connection that will occasionally need to send large chunks of data. I have a buffering system so that if the TCP send buffer becomes full, the data is stored until it can properly be sent. My question is: what happens if the sender's connection is much faster than the receiver's? I assume this would result in the receiver getting flooded, unless TCP has some way of dealing with it automatically? If not, what measures can I take to avoid the problem?

One solution I can see would be to allow the client to define a data rate, but then there's the question of making that rate optimal... and if the rate were set too high, the connection might begin to flood anyway?

If anyone can point me in the right direction here I'd greatly appreciate it. Also, if anyone can recommend a book that deals with these sorts of issues, I'd be most interested to hear about it.