DXT compression vs downscaled textures

18 comments, last by Digitalfragment 12 years, 6 months ago
What about loading png/whatever files using D3DX11CreateShaderResourceViewFromFile, while setting the format to one of the DXGI_FORMAT_BCx formats? Can this be sped up by using an external lib? I'm talking about loading textures on the fly, not in a pre-process.

Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
Squish, at the moment. Got any recommendations?
Completely agree, but I won't get green-lit by the execs on providing samples.

Could you demonstrate the problem with suitably licensed images from the net rather than your company's IP?


The other option, which we're currently exploring, is reordering and dissociating channels into different texture fetches in order to get better compression results across the board.

Thanks for the points, from both of you. If nothing else, just having some theorywork confirmed gives us a few things to try out first.
Have you looked at alternate encodings like YCoCg compression to see if they are effective with your images/fit in with your software?

[quote]
What about loading png/whatever files using D3DX11CreateShaderResourceViewFromFile, while setting format to some of the DXGI_FORMAT_BCx? Can this be speeded up by using an external lib? I'm talking loading textures on the fly, not in pre-process.
[/quote]


This will cause the D3DX library to first decode the PNG file and then re-encode it as BC*, which won't be very quick. There are libraries/middleware designed specifically for doing runtime DXT compression on the fly... I know Allegorithmic has one that they advertise as being super-fast.

[quote name='MJP' timestamp='1316586639' post='4864122']
Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
Squish, at the moment. Got any recommendations?
[/quote]

A while ago we did some comprehensive comparisons at work, and decided that ATI's Compress library produced the best quality (and was also the fastest, since it's multithreaded).

You may find this explanation very interesting for your artists, so they know how to arrange the colours: what to do and what to avoid. Note how the first image is close to the original, while in the second one no compressed colour except one pixel actually matches the original (DXT works in 4x4 blocks).

Awesome link, hadn't read that before. The red-green-blue-grey example is a perfect one to give to the artists.


One final note: what is the memory budget and how much of it is used? There's little point in using DXT if you have plenty of unused VRAM and the memory bandwidth isn't saturated. Often, though, DXT is used preemptively to max out the number of assets that can be included.
Tools like NVPerfHUD (NVIDIA), GPU PerfStudio 2 (ATI) and Intel GPA (Intel; works on other cards too, with less info) will tell you the GPU's memory and bandwidth usage.

As far as "little point" goes though, there's this:
when you're limited to 256MB of RAM and a single uncompressed 1024 diffuse map costs 4MB (~5.3MB including mipmaps), then you triple that to incorporate other surface data. That's over 15MB, and we're only talking about a single character's head here, not even counting its body textures or the vertex data. (And yes, that completely ignores the concept of streaming in high-res data etc.; it was purely pointing out the cost of the data assuming it's all in memory.)
I'm used to using the console equivalents of those tools. PerfHUD and PerfStudio can't even hold a candle to PIX for 360 :)

[quote name='Hodgman' timestamp='1316593927' post='4864147']
[quote name='MJP' timestamp='1316586639' post='4864122']
Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
Squish, at the moment. Got any recommendations?
[/quote]

A while ago we did some comprehensive comparisons at work, and decided that ATI's Compress library produced the best quality (and was also the fastest, since it's multithreaded).
[/quote]
I just downloaded it to hand off to our tech artist to try out :)

[quote]
You may find this explanation very interesting for your artists so they know how to arrange the colours. What to do and what to avoid. Note how the first image is close to the original, while in the second one, no compressed colour except 1 pixel actually matches the original (DXT works in 4x4 blocks)
[/quote]


On that point again: DXT1 is an 8:1 compression ratio (vs uncompressed 32-bit RGBA), so it definitely seems worthwhile to split all of our alpha channels out into their own DXT1 texture, and then not downscale the textures that we need the extra clarity on.
Assuming my math is right:
uncompressed source data: 1024x1024 RGBA: 4MB x 3 = 12MB
current compression: 1024x1024 DXT5: 1MB x 3 = 3MB
with alphas separated: 1024x1024 DXT1: 512KB x 4 = 2MB

Then if we could take the problem texture, upscale it to 2048, and DXT1 compress that = 2MB + 512KB x 3 = 3.5MB
Only a fraction more expensive memory-wise, at the cost of an extra texture fetch + register swizzling.

Bilinear upscaling the problem texture should halve the frequency, and get a slightly better compression result.
I'm not convinced the 3 alpha values will be correctly preserved in the DXT1 texture. It's a matter of trying and seeing the resulting quality.

Out of curiosity... what's in the other 2 textures? 1 is for diffuse, and the other 2?

For example, DXTn is a terrible choice for normal maps (read this paper for more info about compressing normal maps; note it's old, and DX10 now supports new compression formats specifically designed for normal maps, though IIRC they're not available on the X360).

My point is, if your choice is bad for a key texture, it will make the whole model look bad. Notice in the paper how choosing DXT1 for bump mapping introduces awful artifacts that can easily be mistaken for artifacts in the diffuse texture.

Note about the paper: it recommends using the CxV8U8 format because it's "hardware accelerated", but current generation hardware dropped support for it; the driver emulates it by converting the texture into Q8W8U8V8, which defeats the whole point. If you like that alternative (personal opinion: a very good one), use the V8U8 format and calculate the Z component in the shader.


[quote]
I'm not convinced the 3 alpha values will be correctly preserved in the DXT1 texture. It's a matter of trying and seeing the resulting quality.

Out of curiousity... what's in the other 2 textures? 1 is for diffuse, and the other 2?
[/quote]



The content changes per material, but is typically any of the following: albedo colour, specular colour, surface roughness, specular exponent, fresnel term, ambient occlusion, lightmaps, pseudo-directional self-occlusion, microdetail masks, opacity. Normal maps we don't DXT unless they're on background objects where the artifacts aren't noticeable. We have some shaders that take 2 normal maps packed into the 1 texture, too.

This topic is closed to new replies.
