[Release] DDSExport tool (clusterized DXTn texture compression demo app)

I've released a new DXTn texture compression tool (and also a demo of crnlib) that some people here may find useful:

https://sites.google...gel99/ddsexport

Here's the intro:

ddsexport is a 32-bit Windows app that demonstrates some of the capabilities of the open-source DXTn texture compression library crnlib. The .DDS files created by ddsexport are much more compressible by lossless compression algorithms such as LZMA, Deflate (zlib), LZO, etc. than files created by other libraries/tools. Assuming your engine already supports some form of lossless asset compression (most do), adding ddsexport to your product's texture export pipeline can reduce the amount of compressed texture data distributed to customers (or placed on disc, etc.) by 10-40% or more.

The basic idea:

This tool allows you to load an image and interactively play around with crnlib's quality factor (like JPEG: 1-100, higher is better quality), compression parameters, and mipmap settings in real time (or near real time, depending on the image dimensions and how many CPU cores you've got). The dialog displays the LZMA compressed size of the .DDS texture, along with a simple measure of quality (luma, rgb, or alpha PSNR). Once you're happy with the settings you can save a .DDS file to disk containing the much more compressible DXTn texture data.
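
For the curious, the "LZMA compressed size" readout is easy to reproduce outside the tool. Here's a minimal sketch (not ddsexport's actual code) that measures a DXTn payload's compressibility with liblzma's one-shot buffer API:

```cpp
#include <lzma.h>      // liblzma (xz-utils); link with -llzma
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the LZMA-compressed size of a buffer (0 on failure), the same
// kind of compressibility metric the ddsexport dialog reports.
size_t lzma_compressed_size(const uint8_t* data, size_t size) {
    std::vector<uint8_t> out(lzma_stream_buffer_bound(size));
    size_t out_pos = 0;
    lzma_ret rc = lzma_easy_buffer_encode(
        9 | LZMA_PRESET_EXTREME,   // max preset, like an offline pipeline
        LZMA_CHECK_CRC32, nullptr,
        data, size, out.data(), &out_pos, out.size());
    return rc == LZMA_OK ? out_pos : 0;
}
```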

-Rich
I accidentally posted the "https" version of the link. This link seems to work fine, but just in case here's the non-https link:

http://sites.google....gel99/ddsexport

-Rich
What is the intent behind the DXT quality slider, and where's the beef with variable bitrate?

DXT quality is purely visual and depends solely on the work done at compression time (finding the best pair of endpoint colors to interpolate); it does not influence the output size in any way, nor the decompression time. The bitrate is exactly defined by the format, and decompression is exactly the same every time (only with different parameters); it's probably hardwired in silicon on graphics cards.
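
Just to put numbers on that fixed bitrate: the block sizes below come straight from the DXT spec, though the snippet itself is only an illustration.

```cpp
#include <cstdint>
#include <cstdio>

// DXTn output size is fixed by the format: DXT1 packs each 4x4 texel block
// into 8 bytes (4 bpp), DXT5 into 16 bytes (8 bpp); pixel content never matters.
uint64_t dxt_size_bytes(uint32_t w, uint32_t h, uint32_t bytes_per_block) {
    return uint64_t((w + 3) / 4) * ((h + 3) / 4) * bytes_per_block;
}

int main() {
    // 1024x1024 is always 512 KB as DXT1 and 1 MB as DXT5, at any "quality".
    std::printf("DXT1: %llu bytes\n",
                (unsigned long long)dxt_size_bytes(1024, 1024, 8));
    std::printf("DXT5: %llu bytes\n",
                (unsigned long long)dxt_size_bytes(1024, 1024, 16));
}
```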

The only ways of reducing the bitrate that I see would be to either use a higher level of compression when post-compressing, e.g. with LZMA, or to try to "quantize" the binary output before feeding it to a standard compressor (though I'm not sure how successful that would be).
But either of these is misleading, as it only reduces the on-disk size, not the actual texture's size as seen by your texture manager or the graphics driver.

If your compressor runs so fast that it lets you interactively tweak quality in "realtime" or "nearly realtime", why would anyone ever want to use anything different from maximum quality?

Yes, this tool assumes the raw (4bpp or 8bpp) DXT texture bits will be eventually losslessly post-compressed at some point prior to distribution. DXT texture data is typically very redundant (and available bandwidth/optical disc space/etc. are finite resources), so it makes sense to further compress it. The majority of the game titles I've worked on (or wrote the lossless compression codecs for) do exactly this.

It's effectively a form of rate-distortion optimization, except for DXT texture data instead of video:
http://en.wikipedia....on_optimization

On many real-life game textures, a small (probably visually imperceptible) sacrifice in quality can yield relatively large gains in the compressibility (i.e. entropy reduction) of the DXT data. Put another way: offline DXT compressors typically evaluate many different endpoints for a given block, and nothing in the DXT spec says you must choose the endpoints that result in the absolute lowest error (however you measure it). In many cases a number of endpoints (and sometimes selectors) will produce very similar errors, but some of them add less overall entropy to the output file than others. If you're really careful, the savings compared to non-RDO DXT can be pretty high (I've seen 30-50% on many game textures).
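
To make the selection rule concrete, here's a toy sketch (my illustration, not crnlib's actual code): among the candidate encodings of a block, minimize distortion + lambda * rate. The rate term here is a crude order-0 entropy stand-in; a real encoder would model the bits the payload adds to the actual LZMA/Deflate stream.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <limits>

// One candidate DXT encoding of a 4x4 block: the raw 8- or 16-byte payload
// plus its distortion (e.g. summed squared error against the source pixels).
struct BlockCandidate {
    uint8_t bytes[16];
    size_t  byte_count;
    double  distortion;
};

// Crude stand-in for a real rate model: order-0 entropy of the payload bytes.
double estimate_bits(const BlockCandidate& c) {
    double counts[256] = {};
    for (size_t i = 0; i < c.byte_count; ++i) counts[c.bytes[i]] += 1.0;
    double bits = 0.0;
    for (int s = 0; s < 256; ++s)
        if (counts[s] > 0.0)
            bits -= counts[s] * std::log2(counts[s] / double(c.byte_count));
    return bits;
}

// Rate-distortion selection: rather than always taking the lowest-error
// candidate, trade a little distortion for a cheaper-to-compress payload.
// lambda = 0 degenerates to plain lowest-error DXT compression.
const BlockCandidate* pick_rdo(const BlockCandidate* cands, size_t n,
                               double lambda) {
    const BlockCandidate* best = nullptr;
    double best_cost = std::numeric_limits<double>::infinity();
    for (size_t i = 0; i < n; ++i) {
        double cost = cands[i].distortion + lambda * estimate_bits(cands[i]);
        if (cost < best_cost) { best_cost = cost; best = &cands[i]; }
    }
    return best;
}
```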

crnlib uses endpoint clusterization+VQ to indirectly reduce the entropy of the output texture bits, because I already had most of this stuff figured out while creating the .CRN format. There are probably lots of (perhaps better) ways of doing this, though.
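
Very roughly, the clusterization idea looks like this (a sketch under my own assumptions, not the actual crnlib implementation): treat each block's endpoint pair as a point, vector-quantize those points into a small codebook with k-means, then re-encode every block with its cluster's representative endpoints (recomputing selectors), so the same endpoint bytes repeat all over the file and LZMA finds long matches.

```cpp
#include <cstddef>
#include <vector>

// A DXT1 endpoint pair viewed as a 6-D point (two RGB565 colors, expanded).
struct Endpoints { float v[6]; };

static float dist2(const Endpoints& a, const Endpoints& b) {
    float d = 0;
    for (int i = 0; i < 6; ++i) { float t = a.v[i] - b.v[i]; d += t * t; }
    return d;
}

// Naive k-means VQ over all blocks' endpoints (assumes pts.size() >= k).
// Returns each block's cluster index; codebook holds the shared endpoints.
std::vector<int> cluster_endpoints(const std::vector<Endpoints>& pts,
                                   std::vector<Endpoints>& codebook,
                                   int k, int iters) {
    codebook.assign(pts.begin(), pts.begin() + k);  // seed from first k points
    std::vector<int> assign(pts.size(), 0);
    for (int it = 0; it < iters; ++it) {
        // Assignment step: nearest codebook entry for each block.
        for (size_t i = 0; i < pts.size(); ++i) {
            int best = 0;
            for (int c = 1; c < k; ++c)
                if (dist2(pts[i], codebook[c]) < dist2(pts[i], codebook[best]))
                    best = c;
            assign[i] = best;
        }
        // Update step: move each centroid to the mean of its members.
        std::vector<Endpoints> sum(k, Endpoints{});
        std::vector<int> cnt(k, 0);
        for (size_t i = 0; i < pts.size(); ++i) {
            for (int d = 0; d < 6; ++d) sum[assign[i]].v[d] += pts[i].v[d];
            ++cnt[assign[i]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c])
                for (int d = 0; d < 6; ++d)
                    codebook[c].v[d] = sum[c].v[d] / cnt[c];
    }
    return assign;
}
```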

This paper is a different take on the same problem:
"Lossless Compression of Already Compressed Textures"
http://www.jacobstro...stenHPG2011.pdf

But this approach requires writing a custom (non-standard) lossless compressor and integrating it into your engine. No engine changes are needed when using a DXT compressor with some form of RDO.

Also, your texture manager can keep all textures in memory in a compressed form (say RDO DXT+LZMA). If it only unpacks the mipmap levels needed to render a given frame (the current working set), and places them into a GPU texture cache, then you've just cut down on your application's overall memory footprint. This method is particularly useful on console games that have free CPU cycles for decompression on idle cores.
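
A rough sketch of that residency scheme (all names here are hypothetical, not from any particular engine): textures stay resident as per-mip LZMA blobs, and each frame only the mip levels the renderer actually needs are inflated and uploaded to the GPU cache.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical engine hooks; real code would call liblzma / the LZMA SDK
// and the graphics API. Stubbed here so the sketch stays self-contained.
std::vector<uint8_t> lzma_decompress(const std::vector<uint8_t>& /*blob*/,
                                     uint32_t unpacked_size) {
    return std::vector<uint8_t>(unpacked_size);  // placeholder inflate
}
void upload_mip_to_gpu_cache(uint32_t /*tex*/, uint32_t /*level*/,
                             const uint8_t* /*dxt_bits*/, uint32_t /*size*/) {}

struct CompressedMip {
    std::vector<uint8_t> lzma_blob;  // RDO DXT + LZMA, kept resident
    uint32_t dxt_size;               // unpacked DXTn size of this level
};

struct ManagedTexture {
    uint32_t tex_id;
    std::vector<CompressedMip> mips;
    uint32_t gpu_resident_from;      // finest mip currently in the GPU cache
};

// Called per frame with the finest mip the renderer needs right now; only
// the missing levels get decompressed (ideally on an otherwise idle core).
void ensure_resident(ManagedTexture& t, uint32_t needed_mip) {
    for (uint32_t m = needed_mip; m < t.gpu_resident_from; ++m) {
        std::vector<uint8_t> dxt =
            lzma_decompress(t.mips[m].lzma_blob, t.mips[m].dxt_size);
        upload_mip_to_gpu_cache(t.tex_id, m, dxt.data(), t.mips[m].dxt_size);
    }
    t.gpu_resident_from = std::min(t.gpu_resident_from, needed_mip);
}
```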

Hope that helps,
-Rich
Thank you for the explanation.

I've never bothered to further compress DXT textures, under the premise that adding significant overhead for maybe another 5-10% gain isn't worth it. Compressing already-compressed data doesn't gain much, as everybody knows. Loading a few extra kilobytes when you already load 20MB is surely faster and easier than having to decompress the data. Not worth the hassle. Full stop.

Well, what can I say. After your explanation, I actually tried it, and it turns out it's more like 50-70% saved. Now of course, reading 20MB vs. 35MB from disk does make a difference... :)
In my opinion the quality of DXT-compressed textures is far more important than disk space. Could you provide some comparison shots of a compressed texture (preferably with error visualisation) against some other compression tools (AMD's, etc.)?

