Server to client file download... UDP or FTP?

7 comments, last by azherdev 16 years, 9 months ago
I am trying to decide whether to transfer the files required for my program using UDP (RakNet reliable ordered packets) or the traditional FTP way. I like the UDP solution since it allows me to use the same architecture as the rest of my networking within the program. I like FTP since it is an established protocol with many vendors providing servers (my UDP file server will only have me). Anyone have thoughts on the pros and cons of either? I am 80% sure I will use FTP unless someone presents a strong argument against it. Oh, I am using a separate thread that sits and downloads files on an as-needed basis, so neither method would block my program from running.
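
A minimal sketch of the separate download thread described above, assuming C++11 threading; DownloadFile() is a hypothetical placeholder for whichever transfer method (RakNet, FTP, or HTTP) ends up doing the actual work.

// Background downloader: the main thread queues requests, a worker thread
// performs the blocking transfers so the program never stalls on the network.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical placeholder for the actual transfer (RakNet, FTP, HTTP, ...).
bool DownloadFile(const std::string& remotePath);

class DownloadThread {
public:
    DownloadThread() : quit_(false), worker_(&DownloadThread::Run, this) {}

    ~DownloadThread() {
        { std::lock_guard<std::mutex> lock(mutex_); quit_ = true; }
        wake_.notify_one();
        worker_.join();
    }

    // Called from the main/game thread; never blocks on the network.
    void Request(const std::string& remotePath) {
        { std::lock_guard<std::mutex> lock(mutex_); pending_.push(remotePath); }
        wake_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::string next;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return quit_ || !pending_.empty(); });
                if (quit_ && pending_.empty()) return;
                next = pending_.front();
                pending_.pop();
            }
            DownloadFile(next);  // blocking transfer happens off the main thread
        }
    }

    bool quit_;
    std::queue<std::string> pending_;
    std::mutex mutex_;
    std::condition_variable wake_;
    std::thread worker_;  // declared last so it starts after the other members
};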


Thanks Kada2k6,

I am not implementing anything myself, really. I am using an existing lib for the FTP functionality. I don't want to reinvent any wheels, so to speak. :)

Ok, so HTTP file transfer is an option. So,

1. UDP using RakNet (reliable packet sending)
2. FTP using Chilkat FTP2
3. HTTP using ??? (haven't researched it yet)

Which of these would break under heavy load? Again, I'm leaning heavily towards FTP because of its widespread use and security.
I suggest sticking with FTP. Why? There are already implemented servers and plenty of client-side libs to help you out. It is a pretty well-behaved protocol/service (as opposed to making your own), and it requires just an FTP server (you don't need to run a web server just for downloading files)... I don't see any benefit from using HTTP. FTP has features for resuming downloads; I'm not sure about standard HTTP. There is also a secure form of FTP if you need security, but then there is also SCP in that case.

Plus, a benefit of TCP streams (which includes web servers/HTTP) is that they are easier to load-balance than UDP-style protocols - relevant since you ask about heavy loads.

my 2 cents.
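
A minimal sketch of the FTP resume feature mentioned above, assuming libcurl rather than Chilkat FTP2; the URL and local path are just illustrative.

// Resumable FTP download with libcurl: if part of the file already exists
// locally, ask the server to continue from that offset (FTP REST).
#include <cstdio>
#include <curl/curl.h>

bool FtpDownloadWithResume(const char* url, const char* localPath) {
    // How much do we already have? The server can resume from that offset.
    long alreadyHave = 0;
    if (FILE* f = std::fopen(localPath, "rb")) {
        std::fseek(f, 0, SEEK_END);
        alreadyHave = std::ftell(f);
        std::fclose(f);
    }

    FILE* out = std::fopen(localPath, alreadyHave > 0 ? "ab" : "wb");
    if (!out) return false;

    CURL* curl = curl_easy_init();   // curl_global_init() assumed done at startup
    if (!curl) { std::fclose(out); return false; }

    curl_easy_setopt(curl, CURLOPT_URL, url);  // e.g. "ftp://example.com/patch/data.pak"
    curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)alreadyHave);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  // default write callback fwrite()s to 'out'

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    std::fclose(out);
    return res == CURLE_OK;
}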
FTP is an old and clunky protocol. It doesn't even work through default NAT unless you use passive mode. It doesn't support re-starts. The servers are notoriously insecure.

Rather, I would use HTTP. It's designed explicitly for shuttling data from a server to a client. It supports get-if-newer semantics, and it supports partial files (re-start after disconnect). Some libraries for HTTP are libcurl (very full-featured) or HTTP-GET (very minimalist).
enum Bool { True, False, FileNotFound };
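
A minimal sketch, assuming libcurl as suggested above, of the get-if-newer behaviour: the body is only transferred when the server's copy is newer than the local file. The URL and paths are illustrative.

// Conditional HTTP GET: send If-Modified-Since based on the local file's
// timestamp; on 304 Not Modified the existing copy is kept as-is.
#include <cstdio>
#include <string>
#include <sys/stat.h>
#include <curl/curl.h>

bool HttpGetIfNewer(const char* url, const char* localPath) {
    // Timestamp of what we already have (0 if the file doesn't exist yet).
    struct stat st;
    long haveTime = (stat(localPath, &st) == 0) ? (long)st.st_mtime : 0;

    // Write into a temp file so a 304 response doesn't clobber the old copy.
    std::string tmpPath = std::string(localPath) + ".part";
    FILE* out = std::fopen(tmpPath.c_str(), "wb");
    if (!out) return false;

    CURL* curl = curl_easy_init();
    if (!curl) { std::fclose(out); return false; }

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
    curl_easy_setopt(curl, CURLOPT_TIMECONDITION, (long)CURL_TIMECOND_IFMODSINCE);
    curl_easy_setopt(curl, CURLOPT_TIMEVALUE, haveTime);

    CURLcode res = curl_easy_perform(curl);

    long unmet = 0;  // set to 1 when the server said 304 Not Modified
    curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);

    curl_easy_cleanup(curl);
    std::fclose(out);

    if (res == CURLE_OK && unmet == 0) {
        std::remove(localPath);  // rename() won't overwrite on all platforms
        return std::rename(tmpPath.c_str(), localPath) == 0;
    }
    std::remove(tmpPath.c_str());    // keep the existing copy
    return res == CURLE_OK;          // OK + unmet means "already up to date"
}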
UDP for file transfer? Isn't that exactly what UDP is no good for?

Why not just use a TCP stream? It's a stream protocol, after all.

Or dish it off to an HTTP server. Apache is a simple install and configuration. It's not overly resource or CPU hungry, either. At least, I've never noticed it to be. I run it on a lot of my workstations, and I usually forget it's there.
"Creativity requires you to murder your children." - Chris Crawford
Hmm, I think I will look into the HTTP way, simply because port 80 is pretty open everywhere in firewalls and any website can host the files, whereas FTP might be restricted and you need to find a host for them.

The files are not secret, and they will be small - 50K to 500K each.

Any thoughts of why NOT to use HTTP? Is there any additional overhead or speed hit vs FTP? I read on various posts that HTTP is better for small files and FTP is better for large files - urban legend or true? I will be requesting dozens of small files many times so I won't have any 20Meg file downloads.

Thank you all for your input.
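
A minimal sketch, assuming libcurl and HTTP, of the "dozens of small files" case above: reusing one handle lets keep-alive reuse the TCP connection instead of opening a fresh one per 50K file. The base URL and file names are illustrative.

// Fetch many small files through one reused libcurl handle.
#include <cstdio>
#include <string>
#include <vector>
#include <curl/curl.h>

void FetchSmallFiles(const std::string& baseUrl,
                     const std::vector<std::string>& files) {
    CURL* curl = curl_easy_init();   // one handle reused => connection reuse
    if (!curl) return;

    for (size_t i = 0; i < files.size(); ++i) {
        std::string url = baseUrl + "/" + files[i];
        FILE* out = std::fopen(files[i].c_str(), "wb");
        if (!out) continue;

        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  // default callback fwrite()s
        curl_easy_perform(curl);                         // keep-alive reuses the socket

        std::fclose(out);
    }
    curl_easy_cleanup(curl);
}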
Quote:Any thoughts of why NOT to use HTTP? Is there any additional overhead or speed hit vs FTP? I read on various posts that HTTP is better for small files and FTP is better for large files - urban legend or true? I will be requesting dozens of small files many times so I won't have any 20Meg file downloads.


The per-session overhead of HTTP is bigger than the same for FTP.

But the year is 2007, when people watch TV shows online, torrents serve for real-time Hollywood blockbuster distribution, and YouTube has more viewers than any national cable network.

FTP and telnet should be considered obsolete these days. SSH and SCP are their modern counterparts.

For mass distribution, HTTP will do just fine.
Quote:UDP for file transfer? Isn't that exactly what UDP is no good for?


Those BitTorrent users out there might beg to differ?

Regarding FTP vs HTTP, HTTP has the benefit of being able to serve partial files, so you can resume a download. FTP... not so much.

There's no reason to build anything on FTP these days, and there's generally no reason not to use HTTP for bulk data transfer.
enum Bool { True, False, FileNotFound };
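
A minimal sketch, assuming libcurl, of the partial-file point above: request only the bytes you don't already have (equivalent to sending a "Range: bytes=N-" header with the GET). The function name and parameters are illustrative.

// Resume an HTTP download by asking for a byte range starting at 'alreadyHave'.
#include <cstdio>
#include <string>
#include <curl/curl.h>

bool ResumeHttpDownload(const char* url, FILE* appendTo, long alreadyHave) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;

    std::string range = std::to_string(alreadyHave) + "-";  // "12345-" = from 12345 to end
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_RANGE, range.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, appendTo);    // caller opened with "ab"

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return res == CURLE_OK;
}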
Thank you for your suggestions. After speaking with a few IT friends, I believe I will stick with HTTP as my primary method. Because I want short-session, on-demand downloads of small files (50K - 500K), FTP (connect, request, download, disconnect) doesn't make sense. The whole two-ports-per-user thing with FTP is rubbing me the wrong way too.

Plus, load balancing and hosting an HTTP "datastore" is a bit easier than having the same setup for an FTP server.

This topic is closed to new replies.
