Package File Format


Back in the day, they were called WAD files or resource files. I used to do them myself for GAMMA Wing (circa 1995). I did an in-house implementation that supported all the basic file formats used by the company's titles and included things like on-the-fly decompression from the resource file into RAM. One unusual feature was separate resource and index files: the index file was opened and used to read all resources from the resource file at program start, then both files were closed.
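Roughly, the separate index idea looks like this (just a sketch in C++; the struct layout, file names, and fixed-size name field here are illustrative guesses, not the actual format used back then):

// Hypothetical layout: pack.idx holds fixed-size entries, pack.dat holds the raw bytes.
#include <cstdint>
#include <cstdio>
#include <vector>

struct IndexEntry {
    char     name[32];   // resource name, zero-padded
    uint32_t offset;     // byte offset into pack.dat
    uint32_t size;       // size of the resource in bytes
};

// Read the whole index, then pull every resource out of the data file,
// then close both files -- mirroring the "open, load everything, close" approach.
static std::vector<std::vector<unsigned char>> loadAll(const char* idxPath, const char* datPath)
{
    std::vector<std::vector<unsigned char>> resources;

    FILE* idx = std::fopen(idxPath, "rb");
    FILE* dat = std::fopen(datPath, "rb");
    if (!idx || !dat) {
        if (idx) std::fclose(idx);
        if (dat) std::fclose(dat);
        return resources;
    }

    IndexEntry e;
    while (std::fread(&e, sizeof(e), 1, idx) == 1) {
        std::vector<unsigned char> buf(e.size);
        std::fseek(dat, static_cast<long>(e.offset), SEEK_SET);
        std::fread(buf.data(), 1, e.size, dat);
        resources.push_back(std::move(buf));
    }

    std::fclose(idx);
    std::fclose(dat);
    return resources;
}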

Nowadays, I keep everything out in the open for easy modding by fans.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php


Using zlib or LZMA SDK is pretty common. Use it to compress each individual file, [...] Unless the user has an SSD or RAM-disk, this should actually be a lot faster than loading uncompressed files!

This is very true, but the fact that it is commonly done does not mean it is necessarily the right choice. My guess is that a lot of people simply use ZLib "because it works", and because it has proven itself in some well-known virtual filesystem libraries, back when the goal of compression was slightly different: reducing storage space rather than loading faster.
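For reference, the common per-file approach the quote describes boils down to something like this on the loading side (a zlib sketch; it assumes the archive stores each resource's uncompressed size next to the compressed blob):

#include <zlib.h>
#include <cstdint>
#include <vector>

// Decompress one resource that was stored with zlib's compress2().
// originalSize is assumed to be recorded in the archive next to the blob.
static std::vector<unsigned char> inflateResource(const unsigned char* packed,
                                                  uint32_t packedSize,
                                                  uint32_t originalSize)
{
    std::vector<unsigned char> out(originalSize);
    uLongf outLen = originalSize;
    if (uncompress(out.data(), &outLen, packed, packedSize) != Z_OK)
        out.clear();   // corrupt or truncated data
    else
        out.resize(outLen);
    return out;
}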

I would dearly recommend LZ4/HC over ZLib.

You compress to gain speed, not to save space. This is rather obvious, but it should still be reiterated so there is no misunderstanding. Disk space is abundant and cheap, and hardly anyone will notice half a gigabyte more or less, but long load times suck.

You gain speed if, and only if, loading the compressed data and decompressing is faster than just loading raw data. Again, this is actually pretty obvious.

ZLib decompression speed is around 60 MB/s, bzip2 around 15 MB/s, and LZMA around 20 MB/s. Your timings may vary by 1-2 MB/s depending on the data and on your CPU, but more or less, it's about that.
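Those figures are easy to check on your own hardware. A rough sketch for measuring decompression throughput (zlib shown, swap in whatever codec you are testing; "packed" is assumed to be a buffer of representative game data you compressed beforehand):

#include <zlib.h>
#include <chrono>
#include <vector>

// Rough throughput measurement: decompress the same blob repeatedly and
// divide the uncompressed bytes produced by the wall-clock time taken.
static double measureZlibDecompressMBs(const std::vector<unsigned char>& packed,
                                       size_t originalSize, int iterations = 50)
{
    std::vector<unsigned char> out(originalSize);
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        uLongf outLen = static_cast<uLongf>(originalSize);
        uncompress(out.data(), &outLen, packed.data(), static_cast<uLong>(packed.size()));
    }
    auto t1 = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();
    return (static_cast<double>(originalSize) * iterations) / (1024.0 * 1024.0) / seconds;
}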

My not-very-special 7200 RPM Seagate disk which I use for backups delivers 100 MB/s (+/- 2 MB/s) for non-small, non-random reads (about half that for many small files). Both my OCZ and Samsung SSDs deliver 280 (+/- 0) MB/s no matter what you want to read; they'd probably perform even a bit better if I had them plugged into SATA-600 rather than SATA-300 (which the disks support just fine, only the motherboard doesn't). The unknown SSD in my Windows 8 tablet delivers 120-140 MB/s on sequential reads, and 8-10 MB/s on small files.

My elderly close-to-fail (according to SMART) Samsung 7200RPM disk in my old computer is much worse than the Seagate, but it still delivers 85-90 MB/s on large reads ("large" means requesting anything upwards of 200-300 kB, as opposed to reading 4 kB at a time).

Which means that none of ZLib, bzip2, or LZMA is able to gain anything on any disk that I have (and likely on any disk that any user will have). Most likely, they are in fact serious anti-optimizations: on top of making loading slower overall, they also burn CPU time.

Speed-optimized compressors such as LZF, FastLZ, or LZ4 (which also has a slow, high-quality compressor module) can decompress at upwards of 1.5 GB/s. Note that we're talking giga, not mega.

Of course a faster compressor/decompressor will generally compress somewhat worse than the optimum (though not so much, really). LZ4HC is about 10-15% worse in compressed size, compared to ZLib (on large data). However, instead of running slower than the disk, it outruns the disk by a factor of 15, which is very significant.

If decompression is 15 times faster than reading from disk, you gain as soon as compression saves you more than about 1/16 (roughly 6%): reading the compressed data costs r times the raw read time (where r is the ratio of compressed to original size), decompressing adds another r/15 on top, so the total is r * 16/15, which drops below 1 as soon as r < 15/16. If you overlap reading and decompression, any saving at all is a win.

Also, given that you compress individual resources or small sub-blocks of the archive (again, for speed; otherwise you would have to sequentially decompress the whole archive from the beginning), your compression ratios are sub-optimal anyway, and you may find that the "best" compressors do not beat the others by much.

For example, a compressor that does some complicated near-perfect modelling over a 5-10 MB context is simply worth nothing if you compress chunks of 200-400 kB. It doesn't perform significantly better than a compressor that only looks at a 64 kB context (a little better maybe, but not by much).
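To make the chunking concrete, the offline packing side could look roughly like this with LZ4HC on fixed 256 kB chunks (the chunk size, the little struct, and compression level 9 are arbitrary choices of mine, not anything the libraries mandate):

#include <lz4.h>
#include <lz4hc.h>
#include <algorithm>
#include <cstdint>
#include <vector>

static const int kChunkSize = 256 * 1024;   // 256 kB: decent ratio, random access stays cheap

struct PackedChunk {
    uint32_t originalSize;        // needed by the decompressor
    std::vector<char> data;       // LZ4-compressed bytes
};

// Offline step: split a resource into fixed-size chunks, compress each with LZ4HC.
static std::vector<PackedChunk> packResource(const std::vector<char>& raw)
{
    std::vector<PackedChunk> chunks;
    for (size_t pos = 0; pos < raw.size(); pos += kChunkSize) {
        int srcSize = static_cast<int>(std::min(raw.size() - pos, (size_t)kChunkSize));
        PackedChunk c;
        c.originalSize = static_cast<uint32_t>(srcSize);
        c.data.resize(LZ4_compressBound(srcSize));
        int written = LZ4_compress_HC(raw.data() + pos, c.data.data(), srcSize,
                                      static_cast<int>(c.data.size()), 9); // slower, better ratio
        c.data.resize(written > 0 ? written : 0);
        chunks.push_back(std::move(c));
    }
    return chunks;
}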

LZ4 compresses pretty poorly in comparison with the stronger compressors (though given the speed at which it does so, that's actually not bad at all), but LZ4/HC is very competitive in compressed size. Of course, HC compression is slow, but who cares: you do that offline, once, and never again. Decompression, which is what matters, runs at the same speed as "normal LZ4" because the output is in fact normal LZ4.
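And the runtime side, the part that actually has to be fast, is plain LZ4 decompression regardless of whether the data came from LZ4 or LZ4HC (a sketch matching the hypothetical chunk format above):

#include <lz4.h>
#include <cstdint>
#include <vector>

// Runtime step: LZ4HC output decodes with the ordinary LZ4 decoder at full speed.
static std::vector<char> unpackChunk(const std::vector<char>& packed, uint32_t originalSize)
{
    std::vector<char> out(originalSize);
    int n = LZ4_decompress_safe(packed.data(), out.data(),
                                static_cast<int>(packed.size()),
                                static_cast<int>(out.size()));
    if (n < 0)
        out.clear();   // malformed input
    else
        out.resize(static_cast<size_t>(n));
    return out;
}

The split is the whole point: all the expensive work (the HC match search) happens once in the build pipeline, while the shipped game only ever runs the cheap decoder.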

