dpadam450

DXT compressors, good vs bad


I never really compressed my textures in my engine because I didn't have gigantic levels or a full game's worth of texture content. If I let my GPU or GIMP create DXT textures, the results come out about the same, and they look terrible.

 

I was digging up really old threads about DXT and realized that certain tools are actually way better than others. GIMP vs. Nvidia Texture Tools is a drastic difference. I noticed that the Nvidia tools also have a -fast compression option, which is pretty bad, but still not as bad as GIMP.

 

So what gives? We have 4×4 blocks being compressed: of those 16 pixels you select a high and a low endpoint color and interpolate everything in between. My guess is that the better tools must be analyzing actual contrast/averages to throw out outliers and pick better high and low values for the 16-pixel block. Maybe this contrast check happens per R, G, B channel?
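That guess can be made concrete. Below is a minimal sketch of the naive strategy (per-channel min/max over the 16 pixels as endpoints) for one 4×4 block in BC1's 4-color mode. This is a toy for illustration, with function names of my own invention, not what NVTT or GIMP actually implement; better encoders search and refine endpoints instead of taking the raw extremes:

```python
# Toy BC1-style encode of one 4x4 block using naive min/max endpoints.
# Real encoders refine endpoints (PCA along the block's color axis,
# iterative least squares, searching nearby 565 values); this is the baseline.

def quantize_565(c):
    # Snap an 8-bit RGB color to what survives 5:6:5 storage.
    r, g, b = c
    return (round(r * 31 / 255) * 255 // 31,
            round(g * 63 / 255) * 255 // 63,
            round(b * 31 / 255) * 255 // 31)

def palette_4(lo, hi):
    # BC1 4-color mode: two endpoints plus 1/3 and 2/3 interpolants.
    lerp = lambda t: tuple(round(lo[i] + (hi[i] - lo[i]) * t) for i in range(3))
    return [lo, hi, lerp(1 / 3), lerp(2 / 3)]

def encode_block(pixels):
    # pixels: 16 (r, g, b) tuples in raster order.
    lo = quantize_565(tuple(min(p[i] for p in pixels) for i in range(3)))
    hi = quantize_565(tuple(max(p[i] for p in pixels) for i in range(3)))
    pal = palette_4(lo, hi)
    dist = lambda a, b: sum((a[i] - b[i]) ** 2 for i in range(3))
    indices = [min(range(4), key=lambda k: dist(p, pal[k])) for p in pixels]
    mse = sum(dist(p, pal[i]) for p, i in zip(pixels, indices)) / 16
    return lo, hi, indices, mse
```

A single outlier pixel stretches lo/hi and wastes the two interpolated palette entries on colors nothing in the block actually uses, which is exactly the kind of case a contrast/outlier-aware encoder handles better.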

 

*Edit: I just tried the "use dithering" option in GIMP, which produced results much closer to the Nvidia tool's, but still not as good.
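For context on why dithering helps: error diffusion spreads each pixel's quantization error into its neighbors, so the local average stays close to the source even though every individual pixel still snaps to one of the four palette entries. Here's a 1-D toy version, my own simplification, assuming GIMP's option is some form of error diffusion:

```python
# Toy 1-D error diffusion onto a 4-entry palette (single channel).
def dither_row(values, palette):
    out, err = [], 0.0
    for v in values:
        target = v + err                              # add carried-over error
        q = min(palette, key=lambda p: abs(p - target))
        out.append(q)
        err = target - q                              # carry new error forward
    return out
```

Quantizing a flat row of 100s against the palette [0, 85, 170, 255] without dithering gives all 85s (average 85); with diffusion an occasional 170 gets emitted and the average lands much nearer 100.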

 

Nvidia tool being used:

https://code.google.com/archive/p/nvidia-texture-tools/downloads



Here's a great article about texture samplers that includes information about DXT too.

 

It says:

 

 

So, small L1 cache, long pipeline. What about the “additional smarts”? Well, there’s compressed texture formats. The ones you see on PC – S3TC aka DXTC aka BC1-3, then BC4 and 5 which were introduced with D3D10 and are just variations on DXT, and finally BC6H and 7 which were introduced with D3D11 – are all block-based methods that encode blocks of 4×4 pixels individually. If you decode them during texture sampling, that means you need to be able to decode up to 4 such blocks (if your 4 bilinear sample points happen to land in the worst-case configuration of straddling 4 blocks) per cycle and get a single pixel from each. That, frankly, just sucks. So instead, the 4×4 blocks are decoded when it’s brought into the L1 cache: in the case of BC3 (aka DXT5), you fetch one 128-bit block from texture L2, and then decode that into 16 pixels in the texture cache. And suddenly, instead of having to partially decode up to 4 blocks per sample, you now only need to decode 1.25/(4*4) = about 0.08 blocks per sample, at least if your texture access patterns are coherent enough to hit the other 15 pixels you decoded alongside the one you actually asked for :). Even if you only end up using part of it before it goes out of L1 again, that’s still a massive improvement. Nor is this technique limited to DXT blocks; you can handle most of the differences between the >50 different texture formats required by D3D11 in your cache fill path, which is hit about a third as often as the actual pixel read path – nice. For example, things like UNORM sRGB textures can be handled by converting the sRGB pixels into a 16-bit integer/channel (or 16-bit float/channel, or even 32-bit float if you want) in the texture cache. Filtering then operates on that, properly, in linear space. Mind that this does end up increasing the footprint of texels in the L1 cache, so you might want to increase L1 texture size; not because you need to cache more texels, but because the texels you cache are fatter. 
As usual, it’s a trade-off.
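For reference, the decode step the quote is talking about is cheap and fixed-function: one 64-bit BC1 block (two 5:6:5 endpoint colors plus 16 two-bit indices) expands into 16 texels. A sketch of that expansion; the mode selection via c0 > c1 follows the DXT1 spec, but the code itself is just illustrative:

```python
import struct

# Decode one 64-bit BC1/DXT1 block into 16 RGB texels.
def decode_bc1(block8):
    c0, c1, bits = struct.unpack("<HHI", block8)   # 2 endpoints + index bits

    def expand565(c):
        return ((c >> 11 & 31) * 255 // 31,
                (c >> 5 & 63) * 255 // 63,
                (c & 31) * 255 // 31)

    e0, e1 = expand565(c0), expand565(c1)
    if c0 > c1:      # 4-color mode: endpoints plus 1/3 and 2/3 interpolants
        pal = [e0, e1,
               tuple((2 * a + b) // 3 for a, b in zip(e0, e1)),
               tuple((a + 2 * b) // 3 for a, b in zip(e0, e1))]
    else:            # 3-color mode plus transparent black
        pal = [e0, e1,
               tuple((a + b) // 2 for a, b in zip(e0, e1)),
               (0, 0, 0)]
    return [pal[bits >> (2 * i) & 3] for i in range(16)]
```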


Our discussion is about the compression stage of generating a DXT texture offline, not the decompression stage. The same input image yields DXT output of varying quality depending on the tool.


@MJP

I dug up an old thread of yours from years ago, I think talking about Compressonator. It brought me to the whole question of the algorithm behind what these tools are actually doing.

DXT compression is not at all trivial, both in how it works and in what logic people attach to it. Finding the best possible coefficients is a hard problem, but that is usually not necessary. On the other hand, a couple of years ago someone named Rich Geldreich (I remember the name because it was so funny; "Geldreich" means "rich in money") posted a link to a compressor that deliberately compressed sub-optimally. My first thought was "WTF?!", but on second thought it turned out to be a really ingenious idea: the bad compressor's output was shaped so that it compressed very well when fed into an LZ compressor afterwards. So you got only slightly worse quality but a lot less data to send over the wire.


On the other hand, a couple of years ago someone named Rich Geldreich (I remember the name because it was so funny; "Geldreich" means "rich in money") posted a link to a compressor that deliberately compressed sub-optimally. My first thought was "WTF?!", but on second thought it turned out to be a really ingenious idea: the bad compressor's output was shaped so that it compressed very well when fed into an LZ compressor afterwards. So you got only slightly worse quality but a lot less data to send over the wire.

Rich (and Stephanie Hurlburt) now have a startup called Binomial that's continuing this work, trying to make a universal lossy/lossless hybrid texture format: a lossless LZ-type algorithm over a lossy block-based algorithm that can quickly transcode to native GPU formats such as DXT/ETC.
They've also got the kind of DXT compressor you're talking about, which optimizes the DXT coefficients so that LZMA will compress the results better.
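The trick is easy to demonstrate with any generic LZ coder: steering the encoder toward repeated endpoint/selector patterns costs a little quality but makes the output far more compressible. A toy illustration, with random bytes standing in for block data and zlib standing in for the LZ stage:

```python
import random
import zlib

random.seed(0)

# "Optimal" block data: each block picks its own endpoints/selectors,
# so the byte stream looks nearly random and LZ finds little to reuse.
optimal = bytes(random.randrange(256) for _ in range(4096))

# "Deliberately sub-optimal": blocks snapped to a handful of shared
# patterns -- same size on the GPU, but highly repetitive on disk.
patterns = [bytes([random.randrange(256)] * 8) for _ in range(4)]
biased = b"".join(random.choice(patterns) for _ in range(512))

size_opt = len(zlib.compress(optimal))
size_biased = len(zlib.compress(biased))
# size_biased comes out far smaller than size_opt, at equal GPU footprint
```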
