Network Compression

Started by
8 comments, last by Ludi83 17 years, 8 months ago
Hi, is it worth the time to add network compression? Is it useful, or do the lower layers already compress packets? What I basically want to do is compress the data chunk with my information before I send it via WSASendTo(). Greetings
In 'Sacred' we used additional compression. In our case it
was worth it (you have to weigh CPU time against network traffic,
and since clients usually have enough CPU time and bandwidth
is expensive, that shifts the balance towards using compression).

Also, most people's upload is much smaller than their download,
so it's better to send less data. You could even go for some dynamic
solution where, depending on client settings, you send data compressed
or not (or maybe the server has a great connection but is so CPU-busy
it doesn't have time for compression and sends everything uncompressed).
I suggest just playing around with it.

Furthermore, compressed packets are a little more difficult to hack.

Another idea: instead of compressing single packets, you can collect
packets for x ms (not too long, since you don't want to create too much
artificial lag) and then make one 'huge' compressed packet out of them.
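A minimal sketch of that batching idea, using Python's stdlib zlib module (the length-prefixed framing and function names are illustrative, not how Sacred did it):

```python
import struct
import zlib

def batch_and_compress(packets):
    # Length-prefix each packet so the receiver can split the batch
    # apart again, then compress the whole batch in one call.
    blob = b"".join(struct.pack("<H", len(p)) + p for p in packets)
    return zlib.compress(blob)

def decompress_and_split(data):
    blob = zlib.decompress(data)
    packets, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack_from("<H", blob, i)
        packets.append(blob[i + 2 : i + 2 + n])
        i += 2 + n
    return packets
```

Compressing one larger batch usually beats compressing each small packet separately, because the compressor has more data to find repetition in and the per-call overhead is paid once.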

Regards
visit my website at www.kalmiya.com
Thanks for the reply so far.

Are you one of the Sacred Programmers?

Being able to enable or disable the compression is a good idea; I like it. I think I'll try to implement compression. But what algorithms did you use? How did you do the compression? I mean, I could download zlib and compress the data chunks, or are there special network compression schemes available?

And could you tell me some numbers? What was the compression ratio, e.g. 10%, 25% or 50%?

Take a look at 'minilzo' on this page. Being just one source file, it's pretty easy to integrate into your own project.
http://www.oberhumer.com/opensource/lzo/

The compression ratio will differ depending on the size and type of the data. Very small data (< 100 bytes or so) will probably not compress at all, and may even end up larger than it started. Also, some data is inherently more compressible just because of what it contains. For example, if you are sending a vector location made up of three floating-point numbers, the second one of these will compress much better than the first:
(41.2458745, 1.22144574, 0.0124573)
(2.0, 1.0, 0.5)
Looking at their binary representations shows that the second set has more repeated (mostly zero) bytes, which compressors exploit.
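You can see this directly by dumping the raw bytes of the two vectors; a small Python sketch (float_bytes is a made-up helper, using single-precision floats as on the wire):

```python
import struct

def float_bytes(v):
    # Pack a tuple of values as little-endian 32-bit floats.
    return b"".join(struct.pack("<f", x) for x in v)

a = float_bytes((41.2458745, 1.22144574, 0.0124573))
b = float_bytes((2.0, 1.0, 0.5))
# "Round" values like 2.0 and 0.5 have mostly-zero mantissa bits,
# so their 12-byte encoding contains far more zero bytes than a's.
print(a.count(0), b.count(0))
```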
I have not implemented compression yet, since it is something that can be added later without much impact on the existing code, but I plan to use compression on packets that are more than x bytes in size. I have not figured out what x is yet, but from some quick tests I get the feeling it will be somewhere around 600-800 bytes.
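One way to sketch that threshold idea, in Python with zlib (THRESHOLD, encode and decode are illustrative names; a real implementation would pick the cutoff by profiling its own traffic):

```python
import zlib

THRESHOLD = 600  # bytes; assumed value, tune against real packet sizes

def encode(payload):
    # Prefix a 1-byte flag so the receiver knows whether to inflate.
    # Only keep the compressed form if it actually got smaller.
    if len(payload) > THRESHOLD:
        packed = zlib.compress(payload)
        if len(packed) < len(payload):
            return b"\x01" + packed
    return b"\x00" + payload

def decode(data):
    if data[0] == 1:
        return zlib.decompress(data[1:])
    return data[1:]
```

The flag byte costs one byte per packet but makes the scheme safe for incompressible or tiny payloads, which would otherwise grow.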

>Are you one of the Sacred Programmers?
Yes

>But what algorithms did you use? Or how did you do the compression?
>I mean I could download zlib and compress the data-chunks or are there
>special network compressions available?
We used zlib, but you can use any algorithm. Do a few profiling tests
to see what works best in your case (compression ratio vs. speed).

>And could you tell me some numbers? What was the compression-ratio, e.g.?
>10%, 25% or 50%?
I'd have to ask the network programmer for specifics, but I remember
him saying it was worthwhile (depending on the packets/data, of course,
but in general it was worthwhile).

A very important thing is that you don't have to do it now. Unless, of course, you already have bandwidth problems.
You may, however, quite possibly want to add compression before the product is finished, so I highly recommend making sure as you go along that it won't be too hard to do later. For example, you should probably wrap all send and receive calls inside wrapper functions. That way you only have two places to change later, and it should be transparent to the rest of the program.
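A sketch of that wrapper idea over UDP sockets in Python (send_packet and recv_packet are made-up names; the point is that they are the only two places compression would later be added):

```python
def send_packet(sock, addr, payload):
    # Single choke point for outgoing data: compression (or encryption)
    # can be slotted in here later without touching the game code.
    sock.sendto(payload, addr)

def recv_packet(sock, bufsize=4096):
    data, addr = sock.recvfrom(bufsize)
    # ...and the matching inverse transform would go here.
    return data, addr
```

As long as every call site goes through these two functions, turning compression on later is a localized change.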

Another idea is to try out a compression algorithm on typical packets of data from your program and see how well they actually compress. If they only shrink by 5%, it would probably be a waste of time.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Okay, I asked the network programmer. He says we get about 50% compression
on average, at the highest compression level (client).
We turned the compression level of the gameserver down a bit because
of CPU usage (we are running 50+ gameservers per physical server,
so they shouldn't use more than 1-2% CPU max).

I would suggest looking into bit packing along with normal compression. It's very simple and very powerful.
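Bit packing just means giving each field only as many bits as its range needs, rather than a whole int. A minimal Python sketch (field layout and names are illustrative): a 7-bit health value, an 8-bit ammo count and 3 flag bits fit in 3 bytes instead of the 12 that three 32-bit ints would take.

```python
def pack_bits(fields):
    # fields: list of (value, bit_width) pairs, packed LSB-first.
    acc = n = 0
    for value, width in fields:
        acc |= value << n
        n += width
    return acc.to_bytes((n + 7) // 8, "little")

def unpack_bits(data, widths):
    acc = int.from_bytes(data, "little")
    out = []
    for width in widths:
        out.append(acc & ((1 << width) - 1))
        acc >>= width
    return out
```

In C++ you would typically do the same with shifts and masks into a byte buffer, or with bit-field structs.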
One form of compression that's often used (even with lossy protocols) is delta compression, where you only send the data that has changed since the last acknowledged packet from the other end. This isn't compression in the sense of "entropy coding of data" but rather compression in the sense of "send only what's necessary."
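A toy sketch of that delta idea in Python, with dicts standing in for game state (make_delta and apply_delta are hypothetical names; a real protocol would also track which state the peer last acknowledged):

```python
def make_delta(last_acked, current):
    # Send only the fields that changed since the state the other
    # side last acknowledged receiving.
    return {k: v for k, v in current.items() if last_acked.get(k) != v}

def apply_delta(last_acked, delta):
    # Receiver reconstructs the full state from its acked baseline.
    state = dict(last_acked)
    state.update(delta)
    return state
```

Deltas are computed against the last *acknowledged* state rather than the last *sent* one, so a lost packet cannot leave the two sides permanently out of sync.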
enum Bool { True, False, FileNotFound };
Thanks for all your input and ideas, guys, and for the "real world" example from kitten.

This topic is closed to new replies.
