Resources / code for manually decompressing DXTn file formats?

2 comments, last by OrangyTang 18 years, 1 month ago
I'm looking into using DXTn compression, and ideally I'd distribute all textures as DXTn files and use them directly. If GL_EXT_texture_compression_s3tc is supported then everything is great: I can load the files almost directly and get nice fast loading. But if it isn't available then I need a fallback path, so I'll have to manually decompress the textures into a regular RGBA texture and use that. Unfortunately I can't find any resources on doing the decompression manually - has anyone got any? (Working code would be nice, but a detailed description of the steps involved would be good too.) Also, am I going to have any endian issues with using these files with/without ext_texture_compression available? Cheers

Practically any modern hardware supports compressed textures; if it doesn't, it isn't made for gaming purposes :) That said, I'm not saying you don't need a fallback method. The DirectX SDK has information about the layout of the DXT formats, and with that information it is possible to figure out the decompression routine.
I agree with Demus79: nearly all modern hardware will support DXTn compression, so I'd suggest that this really is a pointless exercise.

With that being said, DXT compression isn't all that complicated, so if you really insist on having a backup method it isn't hard to write one.

The basic premise of DXTn encoding is to break up an image into 4x4 pixel tiles which are then individually quantised into either 3 or 4 colours. In this way each texel is limited to a choice of up to four colours and so can be represented as a meagre 2 bit index into a 3 or 4 colour palette.

One of the cleverer aspects of the algorithm is that DXT encoding bases its quantisation on linear gradients and so these 3 or 4 colour palettes can be represented with just two values (the gradient end-points if you like).

Because of this, decompressing colour information from any of the DXT formats turns out to be pretty straightforward:

For each 4x4 block:
* Using the two explicit colours given, linearly interpolate a 3 or 4 colour palette (see the sketch just below this list).
* Use a series of 2 bit values to index this palette, determining each texel’s colour.
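
To make the palette step concrete, here's a minimal sketch in C++ (completely untested, names are my own invention, and it only shows the four colour / no-alpha case) of expanding the two RGB565 end-points and interpolating the in-between entries:

#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Expand a 16 bit RGB565 value to 8 bits per channel.
static Rgb expand565(uint16_t c)
{
    return { uint8_t(((c >> 11) & 0x1F) * 255 / 31),
             uint8_t(((c >>  5) & 0x3F) * 255 / 63),
             uint8_t(( c        & 0x1F) * 255 / 31) };
}

// Build the four colour palette: the two end-points plus two colours
// interpolated at 1/3 and 2/3 along the gradient between them.
static void buildPalette4(uint16_t c0, uint16_t c1, Rgb palette[4])
{
    palette[0] = expand565(c0);
    palette[1] = expand565(c1);
    palette[2] = { uint8_t((2 * palette[0].r + palette[1].r) / 3),
                   uint8_t((2 * palette[0].g + palette[1].g) / 3),
                   uint8_t((2 * palette[0].b + palette[1].b) / 3) };
    palette[3] = { uint8_t((palette[0].r + 2 * palette[1].r) / 3),
                   uint8_t((palette[0].g + 2 * palette[1].g) / 3),
                   uint8_t((palette[0].b + 2 * palette[1].b) / 3) };
}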

However (and as often is the case) the devil is in the details.

You'll probably be aware that there are multiple DXT format variants: DXT1, 2, 3, 4 & 5. These variants describe the way in which alpha is handled, and each has its own subtle quirks.


DXT1: Optional 1 bit alpha.

DXT1 is an awkward little so-and-so as it has that pesky "optional" in there.
Under DXT1 colour and alpha information are stored together in the one 64 bit block:

[64 bit colour block]

colour0 [16 bit RGB565]
colour1 [16 bit RGB565]
t00 [2 bit]
t01 [2 bit]
...
t33 [2 bit]

The tricky bit is the way in which DXT1 WITH alpha and DXT1 WITHOUT alpha are distinguished. Essentially, the ordering of the two 16 bit colour values (colour0 and colour1) determines whether alpha is to be used or not. If colour0 > colour1, no alpha is assumed and a 4 colour palette is used in decompression. If, on the other hand, colour0 <= colour1, 1 bit alpha is assumed and only a three colour palette is used (with the 4th palette index being reserved for transparency). This is great fun when you deal with the encoding, but it shouldn't pose much hassle for you, as all you have to do is test for, and deal with, each case separately.
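
To put the whole DXT1 colour block together, here's a rough sketch of decoding one 64 bit block into 16 RGBA texels (again completely untested, the names are mine rather than from any SDK header, and it assumes the little-endian byte layout DDS files use - which is also why the 16 bit colours are assembled byte-by-byte rather than just cast):

#include <cstdint>

struct Rgba { uint8_t r, g, b, a; };

// Expand a 16 bit RGB565 value to an opaque 8-bits-per-channel colour.
static Rgba expandOpaque565(uint16_t c)
{
    Rgba out;
    out.r = uint8_t(((c >> 11) & 0x1F) * 255 / 31);
    out.g = uint8_t(((c >>  5) & 0x3F) * 255 / 63);
    out.b = uint8_t(( c        & 0x1F) * 255 / 31);
    out.a = 255;
    return out;
}

// block points at the 8 bytes of one DXT1 colour block:
// colour0, colour1 (16 bits each), then sixteen 2 bit indices.
static void decodeDxt1Block(const uint8_t* block, Rgba out[16])
{
    const uint16_t c0 = uint16_t(block[0] | (block[1] << 8));
    const uint16_t c1 = uint16_t(block[2] | (block[3] << 8));

    Rgba palette[4];
    palette[0] = expandOpaque565(c0);
    palette[1] = expandOpaque565(c1);

    if (c0 > c1)
    {
        // Four colour mode (no alpha): two extra colours at 1/3 and 2/3.
        palette[2] = { uint8_t((2 * palette[0].r + palette[1].r) / 3),
                       uint8_t((2 * palette[0].g + palette[1].g) / 3),
                       uint8_t((2 * palette[0].b + palette[1].b) / 3), 255 };
        palette[3] = { uint8_t((palette[0].r + 2 * palette[1].r) / 3),
                       uint8_t((palette[0].g + 2 * palette[1].g) / 3),
                       uint8_t((palette[0].b + 2 * palette[1].b) / 3), 255 };
    }
    else
    {
        // Three colour mode: one mid-point colour, index 3 = transparent black.
        palette[2] = { uint8_t((palette[0].r + palette[1].r) / 2),
                       uint8_t((palette[0].g + palette[1].g) / 2),
                       uint8_t((palette[0].b + palette[1].b) / 2), 255 };
        palette[3] = { 0, 0, 0, 0 };
    }

    // Each of the last four bytes holds one row of the 4x4 tile,
    // two bits per texel, first texel in the lowest bits.
    for (int row = 0; row < 4; ++row)
    {
        const uint8_t bits = block[4 + row];
        for (int col = 0; col < 4; ++col)
            out[row * 4 + col] = palette[(bits >> (col * 2)) & 0x3];
    }
}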


DXT2/3: Explicit 4 bit alpha. (DXT2 is the pre-multiplied alpha variant of DXT3)

With DXT2 and upwards you essentially have two chunks of data for every 16 pixel tile - an 8 byte block of alpha data, followed by an 8 byte block of colour data.

In the case of DXT2/3 the alpha block is pretty trivial - each texel has a 4 bit alpha component that is used "as-is". The final encoding for a 16 pixel block under DXT2/3 is thus:

[64 bit alpha block]

t00 [4 bits]
t01 [4 bits]
...
t33 [4 bits]

[64 bit colour block]

Same as DXT1

The colour block is decoded exactly the same as a DXT1 colour block – WITHOUT alpha. This is pretty obvious, as you have the alpha data separately.
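
For what it's worth, the alpha half can be as simple as this (a quick untested sketch, my own names, little-endian nibble order assumed - the first texel of each pair sits in the low nibble):

#include <cstdint>

// Decode the 64 bit DXT2/3 alpha block: 16 four-bit alpha values, two per byte.
static void decodeDxt3Alpha(const uint8_t* alphaBlock, uint8_t out[16])
{
    for (int i = 0; i < 16; i += 2)
    {
        const uint8_t byte = alphaBlock[i / 2];
        // Replicate each 4 bit value into 8 bits (0xF -> 0xFF, 0x7 -> 0x77, ...).
        out[i]     = uint8_t((byte & 0x0F) * 17);
        out[i + 1] = uint8_t((byte >> 4)   * 17);
    }
}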


DXT4/5: Interpolated alpha

DXT4/5 encodes the alpha in a similar manner to how all of the formats handle colour. Like DXT2/3, the alpha block is again 64 bits in size, but it is laid out differently, this time as two 8 bit alpha values and a series of sixteen 3 bit weights:

[64 bit alpha block]

alpha0 [8 bit]
alpha1 [8 bit]
t00 [3 bit]
t01 [3 bit]
...
t33 [3 bit]

[64 bit colour block]

Same as DXT1

To add even more fun to the scheme, this block is again interpreted differently depending on which of the two 8 bit alpha values is numerically greater.

If alpha0 > alpha1, linearly interpolate an 8 value "alpha palette" using the two supplied 8 bit values. If, on the other hand, alpha0 <= alpha1, interpolate only 4 intermediate values, and pad the resulting palette (which also contains alpha0 and alpha1 themselves) with 0 and 255 to get your final 8 values. In either case, each 3 bit value is simply an index into the resulting palette, and becomes the alpha for that texel.
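
To tie those two cases together, here's a rough sketch of the DXT4/5 alpha decode (untested, names are mine, little-endian bit packing assumed for the 48 bits of indices):

#include <cstdint>

// Decode the 64 bit DXT4/5 alpha block: two end-points, then 16 three-bit indices.
static void decodeDxt5Alpha(const uint8_t* alphaBlock, uint8_t out[16])
{
    const uint8_t a0 = alphaBlock[0];
    const uint8_t a1 = alphaBlock[1];

    uint8_t palette[8];
    palette[0] = a0;
    palette[1] = a1;

    if (a0 > a1)
    {
        // Eight value mode: six values interpolated between the end-points.
        for (int i = 1; i <= 6; ++i)
            palette[1 + i] = uint8_t(((7 - i) * a0 + i * a1) / 7);
    }
    else
    {
        // Six value mode: four interpolated values, padded with 0 and 255.
        for (int i = 1; i <= 4; ++i)
            palette[1 + i] = uint8_t(((5 - i) * a0 + i * a1) / 5);
        palette[6] = 0;
        palette[7] = 255;
    }

    // The remaining 6 bytes hold 16 three-bit indices; gather them into one
    // 64 bit word so indices can be shifted out without worrying about byte
    // boundaries. The first texel sits in the lowest bits.
    uint64_t bits = 0;
    for (int i = 0; i < 6; ++i)
        bits |= uint64_t(alphaBlock[2 + i]) << (8 * i);

    for (int i = 0; i < 16; ++i)
        out[i] = palette[(bits >> (3 * i)) & 0x7];
}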

Again, colour information is decoded in exactly the same manner as DXT1 WITHOUT alpha.



Wow, that was a pretty long post. I can't guarantee that I haven't made some stupid error in the above, and if so please someone point it out. Either way, you shouldn't really need much more to go on than that, and as Demus79 mentioned, there is nothing above that you can't get out of the DX SDK (or after a google for DXT/S3TC texture compression specifications). Just follow the spec, and it'll all pan out soon enough.

Cheers,
Ben
Thanks everyone, it all makes more sense now. :) I've also managed to find the DDS loading part in the devIL source code which looks like a good starting point for finding out some of the finer details.

