Huffman code for network traffic reduction

Started by
8 comments, last by hplus0603 19 years, 5 months ago
hi, I have built a Huffman tree over some random data in hex. The source was 14*32 hex digits, which is 1752 bits; after Huffman encoding it was reduced to 1480 bits, i.e. around 218 -> 185 bytes. My intention is to reduce traffic so the game is playable on low-bandwidth connections. My worry is the time consumed by generating a Huffman code table for every packet, since CPU resources might be wasted and latency might increase. What do you think about this kind of compression? Are there other compression algorithms available that are as good as Huffman but probably cost less CPU time?
http://www.8ung.at/basiror/theironcross.html
You cannot compress truly random data (statistically; one small example may happen to work).

Huffman encoding is not recommended for game packets. They are small enough not to benefit from packing, and if you do use compression, I'd suggest a better algorithm and implementation, such as zlib.
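
For comparison, a minimal zlib sketch: compress() and compressBound() are the real one-shot zlib calls, but the packet buffer here is just a made-up example.

#include <zlib.h>
#include <vector>
#include <cstdio>

int main()
{
    // Hypothetical packet payload; real game data would go here.
    const unsigned char packet[] = "example packet payload, example packet payload";
    uLong srcLen = sizeof(packet);

    // compressBound() gives the worst-case output size for srcLen input bytes.
    std::vector<unsigned char> dest(compressBound(srcLen));
    uLongf destLen = dest.size();

    if (compress(dest.data(), &destLen, packet, srcLen) == Z_OK)
        std::printf("%lu -> %lu bytes\n", (unsigned long)srcLen, (unsigned long)destLen);
    return 0;
}
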
lol all data is random pf
http://www.8ung.at/basiror/theironcross.html
Does your game really need to send more than, say, 5k bytes per second?

I'm not saying it's wrong if it does.
I like the Walrus best.
Quote:Original post by Basiror
lol all data is random pf


The AP's reference to the term "random data" appears to have gone right over your head. With any background in compression you would have known the AP's point ahead of time. You might also have known that individual Huffman compression (generation of separate trees) for each packet is not only a waste of CPU time, but of network bandwidth as well. Look into adaptive Huffman compression, which is the preferred form of Huffman when transmitting separate but sequenced units over a network. However, as the AP stated, your approach is next to useless unless you intend this for compressing user text messages (in which case adaptive is still the best approach). Unless you are doing an absolutely horrible job of choosing the data you send, the data is just too random (uniformly distributed between 0x00 and 0xFF) for Huffman to do any good, due to the size overhead involved.
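
One quick way to check this for yourself is to estimate the Shannon entropy of a packet before building any tree. A sketch (the function name is made up, not from anyone's code):

#include <cmath>
#include <cstddef>

// Shannon entropy in bits per byte. For data uniformly distributed
// over 0x00..0xFF this approaches 8.0, i.e. Huffman has nothing to gain,
// and the transmitted code table only adds overhead.
double entropyBitsPerByte(const unsigned char* data, std::size_t len)
{
    std::size_t counts[256] = {0};
    for (std::size_t i = 0; i < len; ++i)
        counts[data[i]]++;

    double h = 0.0;
    for (int s = 0; s < 256; ++s) {
        if (counts[s] == 0) continue;
        double p = (double)counts[s] / (double)len;
        h -= p * std::log2(p);
    }
    return h; // expected Huffman output is roughly len * h / 8 bytes, plus table overhead
}
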
I'd check to see if lower-bandwidth connections can play your game before trying to compress stuff - are you sure it's needed (premature optimization being the root of all evil, etc.)? If so, then I have no solution to your problem, but I know Quake3 uses Huffman encoding for its network traffic, so you're at least on the right track as far as I'm concerned.
OK, just save a packet of data into a file and open it with a hex editor; you will see that there are an awful lot of 0s in there.

I have already implemented delta compression, so I do not waste space.

Usually you get 3-4 KB/s, so the average packet size will be between 300 and 450 bytes.

With Huffman I get an average compression rate of 17.8%, which amounts to 500-700 bytes of saved bandwidth.


I'll have a look at adaptive Huffman trees.

My primary intention is to write a library for different uses:

either packet compression, or maybe file compression later on.

P.S.: Strings are already compressed via the delta compression, so I don't use 8 bits per character, don't worry.
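
For reference, "delta compression" with a change mask usually looks something like this (a sketch with a made-up field layout, not the actual implementation):

#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical entity state; a real game's layout will differ.
struct EntityState {
    std::int16_t x, y, z;
    std::uint8_t health;
};

// Write only the fields that changed since the last acknowledged state,
// preceded by a one-byte change mask (one bit per field).
std::size_t writeDelta(const EntityState& oldS, const EntityState& newS, std::uint8_t* out)
{
    std::uint8_t mask = 0;
    std::size_t pos = 1; // byte 0 is reserved for the change mask

    if (newS.x != oldS.x) { mask |= 1 << 0; std::memcpy(out + pos, &newS.x, 2); pos += 2; }
    if (newS.y != oldS.y) { mask |= 1 << 1; std::memcpy(out + pos, &newS.y, 2); pos += 2; }
    if (newS.z != oldS.z) { mask |= 1 << 2; std::memcpy(out + pos, &newS.z, 2); pos += 2; }
    if (newS.health != oldS.health) { mask |= 1 << 3; out[pos++] = newS.health; }

    out[0] = mask;
    return pos; // 1 byte when nothing changed, instead of the full struct every update
}
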
http://www.8ung.at/basiror/theironcross.html
The presence of a lot of zeros means your packets are very badly constructed. This should be fixed at the packet construction phase, not by a general-purpose compressor. The most likely cause is that you are using huge data types for small pieces of data (like using a 32-bit integer to store a number from 1 to 10). When constructing packets you should really look at taking advantage of all space in the packet (using bit fields to place more than one piece of information in a single byte, i.e. 1 byte to store 8 bools, rather than 8 bytes) before trying general compression.
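
To illustrate, a minimal bit-packing writer might look like this (illustrative only, not from any particular engine):

#include <cstdint>
#include <vector>

// Appends the 'bits' low-order bits of 'value' to a byte buffer, so e.g.
// a number from 1 to 10 costs 4 bits and 8 bools cost 1 byte in total.
class BitWriter {
public:
    void write(std::uint32_t value, int bits)
    {
        for (int i = 0; i < bits; ++i) {
            if (bitPos_ == 0) buf_.push_back(0);       // start a fresh byte
            buf_.back() |= ((value >> i) & 1u) << bitPos_;
            bitPos_ = (bitPos_ + 1) & 7;               // wrap after 8 bits
        }
    }
    const std::vector<std::uint8_t>& bytes() const { return buf_; }
private:
    std::vector<std::uint8_t> buf_;
    int bitPos_ = 0;
};

// Usage: wr.write(7, 4); wr.write(isAlive, 1); -- 5 bits instead of 5 bytes.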

Also, 3-4 KB/s is way, way too much. That will not be playable on dialup (~1 KB/s upload), and even light broadband connections will sputter (there are many "light" broadband services which offer "broadband" for about the price of dialup; usually it is 128 kbps down, 32 kbps up). What kind of game is it that you need to send 300-450 bytes per packet? My guess is that all your problems are in the design of your game's network structure. You are doing something we call "premature optimization". Because of this you are optimizing the wrong part of your code, while leaving the real performance bottleneck alone. Real network games do not send that much data. That's why packet compression does little to nothing to help them: the packets are too small (going with plain Huffman, the Huffman header would make the packet bigger than the original one).
Wrong.

Just have a look at HL or Call of Duty; both use around 2-3.5 KB/s.

I only reach 3-4 KB/s with 64 players, and no, I don't waste space, since I am already using delta compression.

Just let me try a few things; I'll tell you how it works out :)

[Edited by - Basiror on November 14, 2004 11:24:27 AM]
http://www.8ung.at/basiror/theironcross.html
I believe Basiror is talking about rates "per second" and Michalson is talking about rates "per packet".

2-3 kB packets really won't work all that well for many people. 2-3 kB/second, split into 10-20 packets, will work just fine.
enum Bool { True, False, FileNotFound };

