I got interested, so I got around to throwing together a simplistic compressor for DXT5... (compressed textures).
this performs an additional compression pass over the normal DXTn compression, mostly for sake of making the images smaller for storage on disk or similar (where they would be decompressed again before being handed off to the GPU).
this compressor doesn't use any entropy coding, so it is basically a plain byte-for-byte encoder.
otherwise, it uses an LZ77 variant vaguely analogous to the one in Deflate, albeit block-based rather than byte-based, and supporting a potentially much bigger sliding window (currently the same as the image size).
/* DXTn packed images. Each block will be encoded in the form (byte):
     0        <block:QWORD>  Literal Block (single raw block), Value=0.
     1-127                   Single byte block index (Value=1-127).
     128-191  X              Two byte block index (16384 blocks, Value=128-16383).
     192-223  XX             Three byte block index (2097152 blocks).
     224-238  I              LZ/RLE Run (2-16 blocks, Index)
     239      LI             LZ/RLE Run (Length, Index)
     240      XXX            24-Bit Index
     241      XXXX           32-Bit Index
     242-246                 Literal Blocks (2-6 raw blocks)
     247      L              Literal Blocks (L raw blocks)
     248-255                 Reserved

   The block index will indicate how many blocks backwards to look for a
   matching block (1 will repeat the prior block). Length/Index values will
   use the same organization as above, only limited to encoding numeric values.

   Note that DXT5 images will be split into 2 block-planes, with the first
   encoding the alpha component, followed by the plane encoding the RGB
   components. */
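to make the tag dispatch concrete, here is a rough decoder sketch in Python. this is my own reading of the spec above, not the actual code; in particular, the exact biases on the multi-byte index forms (e.g. whether the 14-bit form is used as a direct value) are assumptions:

```python
BLOCK = 8  # bytes per block in one DXT5 block-plane (alpha or RGB)

def read_num(data, pos):
    """Read a Length/Index value; same variable-length scheme as block
    indices (biases here are assumptions, not confirmed by the spec)."""
    tag = data[pos]; pos += 1
    if tag <= 127:
        return tag, pos
    if tag <= 191:  # two-byte form, 14-bit value
        return ((tag - 128) << 8) | data[pos], pos + 1
    if tag <= 223:  # three-byte form, 21-bit value
        return ((tag - 192) << 16) | (data[pos] << 8) | data[pos + 1], pos + 2
    if tag == 240:  # 24-bit index
        return int.from_bytes(data[pos:pos + 3], 'big'), pos + 3
    if tag == 241:  # 32-bit index
        return int.from_bytes(data[pos:pos + 4], 'big'), pos + 4
    raise ValueError("tag %d not valid as a numeric value" % tag)

def decode_plane(data, num_blocks):
    """Decode one block-plane of the format sketched above."""
    out, pos = [], 0
    while len(out) < num_blocks:
        tag = data[pos]; pos += 1
        if tag == 0:                                # single literal block
            out.append(data[pos:pos + BLOCK]); pos += BLOCK
        elif tag <= 223 or tag in (240, 241):       # block index: copy 1 block
            dist, pos = read_num(data, pos - 1)
            out.append(out[-dist])
        elif tag <= 238:                            # short LZ/RLE run, 2-16 blocks
            count = tag - 224 + 2
            dist, pos = read_num(data, pos)
            for _ in range(count):
                out.append(out[-dist])              # dist=1 degenerates to RLE
        elif tag == 239:                            # long run: length, then index
            count, pos = read_num(data, pos)
            dist, pos = read_num(data, pos)
            for _ in range(count):
                out.append(out[-dist])
        elif tag <= 246:                            # 2-6 literal blocks
            for _ in range(tag - 242 + 2):
                out.append(data[pos:pos + BLOCK]); pos += BLOCK
        elif tag == 247:                            # L literal blocks
            count, pos = read_num(data, pos)
            for _ in range(count):
                out.append(data[pos:pos + BLOCK]); pos += BLOCK
        else:
            raise ValueError("reserved tag %d" % tag)
    return b''.join(out)
```

for a DXT5 image the decoder would run this twice (alpha plane, then RGB plane) and interleave the results back into 16-byte DXT5 blocks.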
it is generally spitting out smaller output files than the JPEG analogues.
this much was unexpected...
for example, a 512x512 image is compressing down to roughly 30kB, and a 64x64 image to slightly under 2kB. for comparison:
    JPEG:     85kB and 3.5kB
    PNG:      75kB and 7kB
    raw DXT5: 256kB and 4kB
so, this much is seeming "interesting" at least.
compression is a little worse with UVAY textures (YUVA colorspace with Y in the alpha channel, UV in RG, and a mixed alpha and UV scale in B), but this is to be expected (results were ~ 39kB and ~ 3.8kB).
currently, the compression is a little slow (due mostly to the logic for searching for runs, which does a raw linear search). speeding this up is possible via the use of hash chains and similar, though I am not currently doing so.
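the hash-chain idea (as used in zlib's deflate, but over whole blocks rather than bytes) could look roughly like this. purely a sketch with names and parameters of my own choosing, not the actual compressor:

```python
def find_match(blocks, i, head, prev, max_chain=64):
    """Find an earlier block matching blocks[i] via a hash chain.
    head maps block-hash -> most recent position; prev chains positions
    with the same hash. Returns (distance, run_length) or (0, 0)."""
    best_dist, best_len = 0, 0
    j = head.get(hash(blocks[i]), -1)
    chain = 0
    while j >= 0 and chain < max_chain:
        if blocks[j] == blocks[i]:          # hashes can collide, so verify
            k = 0                           # extend the match forward
            while i + k < len(blocks) and blocks[j + k] == blocks[i + k]:
                k += 1
            if k > best_len:
                best_dist, best_len = i - j, k
        j = prev.get(j, -1)
        chain += 1
    return best_dist, best_len

def compress_plan(blocks):
    """Greedy pass producing literal / back-reference decisions."""
    head, prev = {}, {}
    plan, i = [], 0
    while i < len(blocks):
        dist, length = find_match(blocks, i, head, prev)
        if length >= 2:
            plan.append(('run', dist, length)); step = length
        elif length == 1:
            plan.append(('ref', dist)); step = 1
        else:
            plan.append(('lit', blocks[i])); step = 1
        for k in range(i, i + step):        # index the blocks we consumed
            h = hash(blocks[k])
            prev[k] = head.get(h, -1)
            head[h] = k
        i += step
    return plan
```

instead of scanning the whole window per block, only positions with the same hash are visited, capped by max_chain; this is the usual speed/ratio knob.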
decode speeds seem to measure in at approximately 1500 Mp/s (~1.5 GB/s, at 1 byte/pixel for DXT5). this is *much* faster than my JPEG decoder.
it is yet to be seen if I will use this for much, but it could be considered as a possible alternative to JPEG for certain tasks (probably with a header thrown on and put into a TLV container).
Edited by cr88192, 20 February 2013 - 01:55 AM.