Auto updater approach

Started by
3 comments, last by hplus0603 16 years, 11 months ago
Hi all... after some tries, I'm still wondering what the best approach is for an auto updater program that sends file chunks to multiple clients (hundreds) at the same time. My servers are dedicated Linux boxes on a 100 Mbps up/down link.

Right now, when a client connects and is approved for an update, I start sending 2 KB file chunks to that client. But I always wait for the client to acknowledge the packet before sending the next 2 KB packet. Doing this ensures that I will not overflow the server-side network buffer, because I'm not sending more data than the client can "digest". But it tends to be really slow.

How do websites know at which speed to send data to the client when you simply download from a web server? I'm using TCP here... should reliable UDP be used instead? (I have a reliable UDP mechanism that guarantees packet arrival.)

Any light on this would be appreciated, thanks.
2k is a pretty small packet size. You should be able to safely increase it to 32k.
Here's a thought...

How about simply setting up apache with bandwidth throttling and load balancing?

I mean, it's file downloads. Why re-invent the wheel for such a basic task without a single custom requirement? Those tools are tried and tested. You should have everything integrated in under a day - 100% guaranteed to work.

If you think you need some custom update mechanism, you're still much better off figuring out how to store this information on the web server and handle authentication there, rather than writing everything from scratch.

Or even design a web front-end to a background process that prepares a client-specific package and then serves it through Apache.

Your proposed design, for example, is flawed in the sense that the total bandwidth for each client is inversely proportional to ping. Someone with a 50 ms ping on dial-up would receive more data than someone on a gigabit connection with a 250 ms ping.
Thanks for your answers. I will take this into account and see if something better (like Apache) could fit my needs.
2k per round-trip is very low, because of the latencies involved. And, even worse, if you wait for an ack before sending the next packet, you will never get 100% use of the link. You have to, at least, double-buffer the sending. TCP solves this through a sliding window mechanism, btw.

When it comes to pacing TCP, the kernel does that pretty well. There is a buffer per client. When the buffer is full, a call to send() might block, so you can use non-blocking sockets, or select() to avoid that if you are single-threaded.

I second the recommendation for Apache for file downloads, though.
enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
