savail

Loading big files to send them through network


savail    333
Hey,
I have to send some files whose size can exceed 1 MB, and I'm wondering how big the packets I send should be. Of course I can't create one big array to store over 1,000,000 chars (or maybe I can dynamically, I'm not sure), but is that even a good solution? The network library I'm using (Enet) handles all the fragmentation for me, so I don't have to worry about packet size, but I guess I still have to divide the file into smaller parts myself? So how big should those parts be, and does their size matter for performance?

Khatharr    8812
This depends a lot on the situation of the machine running the code: how much memory is available, how fast the connection is, etc. Another factor is whether or not you want something like a progress bar or some other way of updating the user on the progress of the upload. If you've got plenty of memory available then a 1 MB buffer is certainly not too large. If your chunks are too small then you'll end up spending too much CPU time doing excessive loads and sends rather than actually letting the system send the data. On the other hand, dealing in enormous chunks of data doesn't really help you much either, since the send will have to wait continually for the NIC's output buffer to clear, so you'd basically just be using more memory than you need for no benefit.

You can create an array to store over a million bytes if you want. A megabyte is actually 1024 * 1024 bytes (1,048,576). In hex that's 0x100000 (five zeroes) which can be a little easier to remember. Saying something like:

[source lang="cpp"]
const size_t MEGABYTE = 0x100000;           // 1024 * 1024 = 1,048,576 bytes
const size_t BUFFER_LENGTH = 5 * MEGABYTE;
char buffer[BUFFER_LENGTH];                 // at file scope this is static storage; inside a function, prefer heap allocation so a 5 MB local array doesn't overflow the stack
[/source]

You should have no problem on a desktop PC unless something crazy is going on.

That being said, dynamic allocation in this case could make your program more flexible since you could size the buffer at run-time based on some algorithm you create.
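
For instance, here's a minimal sketch of a run-time sized buffer using std::vector; the 4 MB cap and the sizing rule are placeholder choices for illustration, not a recommendation:

[source lang="cpp"]
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical helper: pick the send buffer size at run-time instead of using a fixed array.
std::vector<char> make_send_buffer(std::size_t file_size)
{
    const std::size_t MEGABYTE = 0x100000;
    // Example heuristic only: cap the buffer at 4 MB, or use the whole file size if it's smaller.
    const std::size_t length = std::min<std::size_t>(file_size, 4 * MEGABYTE);
    return std::vector<char>(length); // heap-allocated, freed automatically when it goes out of scope
}
[/source]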

Disk IO can be pretty slow, but you'll get better performance from a long read than from a series of short ones. Network IO also does better with fewer calls (longer buffers) but the advantage falls off a lot more quickly.

I'd recommend creating a test application that will send a file using an easily adjustable chunk size and then just benchmark it with different values. You should find a point at which you no longer gain much advantage in terms of throughput.
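
A rough sketch of that kind of test harness; send_chunk is a stub standing in for whatever your library's real send call is, and the timing is deliberately crude:

[source lang="cpp"]
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

// Stub: swap in your real send call (ENet packet send, socket send, ...).
void send_chunk(const char* /*data*/, std::size_t /*length*/) {}

// Read `path` and "send" it once per chunk size, printing rough throughput for each run.
void benchmark_chunk_sizes(const char* path)
{
    const std::size_t sizes[] = { 1024, 4096, 16384, 65536, 262144 };
    for (std::size_t chunk : sizes)
    {
        std::ifstream file(path, std::ios::binary);
        std::vector<char> buffer(chunk);
        std::size_t total = 0;

        const auto start = std::chrono::steady_clock::now();
        while (file.read(buffer.data(), static_cast<std::streamsize>(buffer.size())) || file.gcount() > 0)
        {
            send_chunk(buffer.data(), static_cast<std::size_t>(file.gcount()));
            total += static_cast<std::size_t>(file.gcount());
        }
        const std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;

        std::printf("chunk %zu bytes: %.2f MB/s\n", chunk, (total / 1048576.0) / elapsed.count());
    }
}
[/source]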

Other than that we'd really need to know more about your program. If it's just a simple file sending program then you may just want to grab a great big buffer and cram the data down the pipe. TCP is pretty good at what it does and will actually adapt the connection for maximum throughput for you. If you're concerned about memory usage or want to trigger some sort of progress notification between sending chunks then finding that 'sweet spot' becomes an advantage.

Share this post


Link to post
Share on other sites
savail    333
Thanks a lot for the answer!
Actually, I want to add the possibility for players to upload their own mods. The files shouldn't exceed 3-5 MB, and I'm sending images, music files and text files, so I guess the method you described will work fine for me ^^

oliii    2196
Usually the network sub-system has a memory limit; once it's reached, attempting to queue more data will fail (or only partially succeed).

For example, int send(const void* data, int datalen);

may return the number of bytes actually sent (usually, a value < 0 means an error).

So trying to send 20 bytes might return only 5 as the send buffer fills up.

So you could implement a simple queueing system to stream your file data through a TCP socket.


[source lang="cpp"]
struct TCPFileSend
{
    TCPFileSend(Connection* connection, File* file)
    : m_connection(connection)
    , m_file(file)
    {}

    ~TCPFileSend()
    {}

    bool update()
    {
        // we've finished transmitting the file data.
        if(m_file->eof() && !m_connection->sending())
            return false;

        // fill up the network buffer with file data.
        while(transmit_packet());

        return true;
    }

    bool transmit_packet()
    {
        char data[1024];
        int fileptr = m_file->seek_read();              // current position in the file.
        int datalen = m_file->read(data, sizeof(data)); // read a bunch of data, up to a maximum limit.

        // nothing left to read; stop queueing for this frame.
        if(datalen <= 0)
            return false;

        int transmitted = m_connection->send(data, datalen); // send the file packet. check how many bytes the network layer actually accepted.
        int failed = (datalen - transmitted);                // some bytes of the packet were not accepted by the network layer, because it ran out of buffer space.

        // sanity check. do proper error handling.
        ASSERT(fileptr >= 0);
        ASSERT(datalen >= 0);
        ASSERT(transmitted >= 0);

        // didn't manage to transmit all the data.
        if(failed > 0)
        {
            m_file->seek_read(fileptr + transmitted); // roll back the file pointer.
            return false; // no point continuing further this frame.
        }
        // we've sent the full packet.
        return true;
    }

    Connection* const m_connection;
    File* const m_file;
};
[/source]
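
For what it's worth, here's a sketch of how the object above could be driven from a frame loop; Connection and File remain the same placeholder interfaces, and send_mod_file is just a hypothetical name:

[source lang="cpp"]
// Hypothetical driver: pump the queueing object once per frame until the file is through.
void send_mod_file(Connection* connection, File* file)
{
    TCPFileSend sender(connection, file);

    while(sender.update())
    {
        // run the rest of the frame here: update a progress bar,
        // service the rest of the networking, etc.
    }
}
[/source]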

