Quote: The compression algorithm would be applied per record
Unless your records are of a substantial size, you'll inflate your file by trying to compress each record separately. Compression works by removing repetition: if there's almost no repetition within a given sample of data, almost nothing can be gained by compressing it, and small records simply don't give the compressor enough data to work with. You'll also be working with a simple compression scheme which, unlike composite formats such as zip, won't fall back to storing data uncompressed when there's no gain to be had. Instead, it will inflate your record size. A naive LZ77 implementation, for example, which spends one flag bit per literal byte, has a worst case that inflates your data by 12.5%.
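You can see the per-record inflation effect with any general-purpose compressor. A quick sketch using Python's zlib on a hypothetical ~40-byte record (the record contents are just an assumption for illustration):

```python
import zlib

# A hypothetical small record, roughly 40 bytes of mixed text/numbers.
record = b"id=1042;name=Smith;balance=1295;flags=3"

# Even at maximum compression level, the header, checksum, and coding
# overhead outweigh any savings on data this small.
compressed = zlib.compress(record, 9)

print(len(record), len(compressed))  # the "compressed" form is larger
```

The exact overhead varies by scheme, but the pattern holds for any format that adds a fixed header or per-block framing.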
To get substantial benefits from compression, you would need to compress sets of records together. That requires more processing power to compress and decompress the larger data block, and more memory to hold the decompressed data. I suspect you'll find compression proves more of a hindrance than a help, but it depends on a lot of details we don't have:
-How many records do you expect to have?
-What is the nature of the data within them (e.g. mostly strings, or lots of small integers)?
-How big do you expect them to be, in bytes?
-How large are the tables?
-What do you expect the overall filesize of the database to be?
-What are the specs of these units in terms of storage space, RAM, and processing power?
-Will users need to modify records (I'm guessing yes) or simply view them?
At any rate, if your records are going to be less than 80 bytes, I'd say you'll need to compress sets of records, perhaps entire tables, together in one block. That will significantly increase the processing power required to access and modify those records. If storage space and CPU usage are both equally at a premium, I simply don't think your units are powerful enough.
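To illustrate the difference, here's a sketch (again using Python's zlib, with made-up but structurally similar records) comparing per-record compression against compressing the whole set as one block:

```python
import zlib

# 1000 hypothetical ~40-byte records sharing a common structure.
records = [
    f"id={i};name=user{i};balance={i * 7 % 5000};flags=0".encode()
    for i in range(1000)
]

raw = sum(len(r) for r in records)

# Compressing each record on its own: per-record overhead dominates.
per_record = sum(len(zlib.compress(r, 9)) for r in records)

# Compressing all records as one block: repetition across records
# is now visible to the compressor.
whole_block = len(zlib.compress(b"".join(records), 9))

print(raw, per_record, whole_block)
```

On data like this, the per-record total ends up larger than the raw data, while the single block compresses well below it; the cost is that reading or modifying one record means decompressing (and recompressing) the whole block.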