Digitalfragment

DXT compression vs downscaled textures

19 posts in this topic

Has anyone here done performance analysis measuring the difference between using DXT3/5 textures and uncompressed textures at half the resolution (half the width and height)? Both yield a 4:1 compression ratio, yet the quality loss of a DXT5 texture under most compressors is consistently worse than that of a simple downscale.

My understanding of hardware DXT decompression is that the cost of decompression is next to free, but not necessarily cheaper than a half-sized but uncompressed texture fetch, given that both textures end up being swizzled/tiled (terminology depending on your platform) and of the same size in the cache. Am I completely wrong in my assumption at the hardware level, or are there fringe cases where this is either true or false (such as anisotropic filtering, textures of a certain size, and so on)?

Hopefully some of the hardware gurus on here can shed some light.

I would generally disagree with your statement that the quality loss is worse using DXT than a simple downscale. Granted, this image uses DXT1 (which forgoes the alpha channel for even higher compression), but the results with DXT3 and 5 are similar if you use them in the correct context (5 for smooth alpha gradients, 3 for sharp alpha transitions).

[img]http://cdn.wolfire.com/blog/catcompare2.jpg[/img]

[url="http://blog.wolfire.com/2009/01/dxtc-texture-compression/"]Source[/url] (with more pictures of DT3/5 and the original uncompressed 768K source image)

As for performance difference, there generally isn't any if the textures are the same size in bytes. Since the decompression is done in hardware on fetch, I believe you are correct that they are stored equivalently in the cache. However, using compression gets you much better performance for the perceived quality. A 2048x2048 texture with compression artifacts that you will only really see if you are looking for them is usually much nicer than a downscaled, blurrier 1024x1024 texture where the finer details have been lost completely. Both take the same amount of RAM and memory bandwidth to read, but the perceived quality increase from having four times as many texels is significant. The average user probably won't be able to tell whether you are compressing your textures, but most will be able to spot a halving of texel resolution quite easily.
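To put concrete numbers on the "same memory, four times the texels" point, here is a quick back-of-the-envelope check (a standalone sketch, not from any particular engine; it just uses the standard 8 bytes per 4x4 block for DXT1 and 16 bytes for DXT3/5, power-of-two sizes assumed):

[code]
#include <cstdio>

// Top mip level only; a full mip chain adds roughly another third.
constexpr long long rgba8Bytes(long long w, long long h) { return w * h * 4; }
constexpr long long dxt1Bytes (long long w, long long h) { return (w / 4) * (h / 4) * 8;  } // 8 bytes per 4x4 block
constexpr long long dxt5Bytes (long long w, long long h) { return (w / 4) * (h / 4) * 16; } // 16 bytes per 4x4 block

int main()
{
    std::printf("1024x1024 RGBA8: %lld KB\n", rgba8Bytes(1024, 1024) / 1024); // 4096 KB
    std::printf("2048x2048 DXT5 : %lld KB\n", dxt5Bytes (2048, 2048) / 1024); // 4096 KB - same memory, 4x the texels
    std::printf("2048x2048 DXT1 : %lld KB\n", dxt1Bytes (2048, 2048) / 1024); // 2048 KB - half of that again
    return 0;
}
[/code]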

Indeed, a DXT1 2048x2048 equals a 512x512 uncompressed. And no, DXT5 isn't superior to DXT1; it's essentially DXT1 colour data plus an interpolated alpha channel.
There is some colour loss, but it generally isn't much worse than regular JPEG artifacts (tip: don't ever, ever, EVER compress to JPEG and then to DXT1; always work from originals in lossless formats before going to DXTn).

Especially on large objects (e.g. terrain textures), any loss in colour is more than made up for by the extra sharpness, the reduced aliasing (since the texture is stretched across large geometry), and the detail that is preserved.

DXTn isn't suited to every kind of application, so you have to be well aware of how it works and have some practical experience.
For instance, if your original image is 128x128, its size in VRAM will be 64 KB (nothing compared to the 256/512/1024 MB current GPUs have), and all you have to lose there is colour (at such a resolution, detail and sharpness won't be noticeable even in the original). So it will look merely "just ok" in DXT1, when you could easily use the uncompressed image as is, without downsampling.

[b]Edit:[/b] Another example: if you're doing particles, it's usually better to use a greyscale texture and tint it by manipulating the emissive component. That way you can use L8 textures and get a 4:1 lossless "compression" ratio (with A8L8 for alpha, you get 2:1). There's little need for DXTn there; it may yield higher compression ratios, but the quality loss starts to become too noticeable.
A few particle FXs may need coloured textures though. You'll see; you have to be smart, know your texture formats, and decide on a per-case basis.
[quote name='Matias Goldberg' timestamp='1316572147' post='4864058']
Indeed, a DXT1 2048x2048 equals a 512x512 uncompressed.
[/quote]

I'm fairly certain that's *not* the case, as that would make it 16:1 compression, not the 4:1 compression that is documented everywhere.

I probably should have mentioned that our source art before compression is already at a higher resolution, and we are accustomed to creating individual mip levels by hand where appropriate to retain clarity. The problem with the quality reduction is that a lot of our details sit entirely inside or straddle DXT pixel blocks - for all intents and purposes, similar to stippling patterns.
[quote name='Digitalfragment' timestamp='1316575821' post='4864078']
I'm fairly certain thats *not* the case, as that would make it 16:1 compression, not 4:1 compression as is documented everywhere.
[/quote]

DXT1 is actually 6:1 (for a 24-bit source) or 8:1 (for a 32-bit source), not 16:1 or 4:1 (DXT3 and 5 are 4:1). So no, it's not as good as 2048 -> 512, but it's still pretty awesome.
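In case the derivation helps anyone reading later, the ratios fall straight out of the block sizes (standard DXT layout, nothing tool-specific); a tiny compile-time check:

[code]
// DXT1 packs each 4x4 block of texels into 8 bytes -> 0.5 bytes per texel.
constexpr double kDxt1BytesPerTexel = 8.0 / (4 * 4);

static_assert(3.0 / kDxt1BytesPerTexel == 6.0, "6:1 against a 24-bit RGB8 source");
static_assert(4.0 / kDxt1BytesPerTexel == 8.0, "8:1 against a 32-bit RGBA8 source");

// DXT3/DXT5 add a second 8-byte block for alpha: 16 bytes per 4x4 block,
// i.e. 1 byte per texel, which is where the familiar 4:1 figure comes from.
[/code]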

[quote name='Digitalfragment' timestamp='1316575821' post='4864078']
I probably should have mentioned that our source art before compression is already at a higher resolution, and we are accustomed to appropriately creating individual mip levels to retain clarity. The problem with the quality reduction is that a lot of our details entirely inside or straddling dxt pixel blocks - for all intents and purposes, similar to stippling patterns.
[/quote]

Out of curiosity, does your art use a lot of solid color fills or gradients (such as 2D cell shading)? This is an area where texture compression artifacts can actually be noticed fairly easily, and it's common to just bite the bullet on memory and use full res uncompressed textures if these textures are relatively static and viewport aligned.

A screenshot could be worth 1000 words here, so if you have some images of the problems you're encountering with texture compression we might be able to suggest solutions that don't require you to use 4-8 times more VRAM with uncompressed assets.
[quote name='Matias Goldberg' timestamp='1316577441' post='4864085']
My apologies though, you're right: 512x512 vs 2048x2048 is indeed 16:1; I did my math wrong. There's no power-of-two downscale that matches DXT1's 8:1 ratio while keeping the same aspect ratio, so let's say 512x1024 for explanatory purposes; but it's probably not a good idea to downscale that way in a production environment.
[/quote]
Np, I just wanted to make sure I wasn't missing something.

[quote name='Matias Goldberg' timestamp='1316577441' post='4864085']
[quote name='Digitalfragment' timestamp='1316575821' post='4864078']I probably should have mentioned that our source art before compression is already at a higher resolution[/quote]
I'm a bit confused.
"Source material -> downscale -> DXTn" vs "Source -> downscale -> store uncompressed"? right?
[/quote]
Our automated process currently goes
source -> downscale -> DXTn

the debate here is whether to change that to either
source -> DXTn
or
source -> downscale


[quote name='Matias Goldberg' timestamp='1316577441' post='4864085']
[quote name='Digitalfragment' timestamp='1316575821' post='4864078']we are accustomed to appropriately creating individual mip levels to retain clarity.[/quote]
The DDS format allows custom mips; however, if you use DXTn they'll be subject to compression too. I'm not sure if your problem is that you think you're forced into automatic mipmap generation by using DXTn, or that you don't like the artifacts introduced in your custom mips (which is totally understandable; like I said, DXTn becomes weaker at lower resolutions).
-snip-
[/quote]
The latter. We do use custom mips within DDS files (assembled from multiple uncompressed files, etc.), which after compression lose the high-frequency grain that the artists are expecting to keep.

[quote name='Matias Goldberg' timestamp='1316577441' post='4864085']
You may have valid concerns that the mipmaps may not look the way you want in your artwork, but you should be asking yourself whether those artifacts in the mipmaps are actually going to be visible/noticeable once displayed in the 3D render (again, depends on the case).
[/quote]
The resulting quality was already flagged as needing attention by our artists looking at the final product on the hardware. IMHO, the current situation is an acceptable loss of quality and isn't worth the extra memory of either solution, but in the end it's also not my call.

[quote name='kuroioranda' timestamp='1316577890' post='4864088']
Out of curiosity, does your art use a lot of solid color fills or gradients (such as 2D cell shading)? This is an area where texture compression artifacts can actually be noticed fairly easily, and it's common to just bite the bullet on memory and use full res uncompressed textures if these textures are relatively static and viewport aligned.
[/quote]
Far from it. As I said, the issue is closer to that of a stippling pattern (adjacent pixels have vastly different contrast). The various DXT compressors we have tried have in all cases either 'smudged' the contrasting pixels out or messed up the other areas. Worst comes to worst, we'll just programmatically pick out DXT blocks from multiple compressors based on their correctness and stitch them together. Though that's gonna be a painfully slow automated process! ;)

FWIW: gradients are an easy enough case to support under DXT compression - normalise your channels and upload the white/black values to your shader to denormalise them for maximum precision.
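In case it's useful to anyone else, here is a minimal sketch of that remapping trick on the tool side (the function and the two-value constant layout are purely illustrative, not from our pipeline): stretch each channel to the full 0-255 range before compression, and have the shader undo it with the stored min/max.

[code]
#include <algorithm>
#include <cstdint>
#include <vector>

struct ChannelRange { float lo, hi; };   // uploaded as shader constants per texture

// Remap one 8-bit channel to the full 0..255 range before DXT compression.
// The pixel shader reconstructs the original as: value = lo + sample * (hi - lo).
ChannelRange normalizeChannel(std::vector<std::uint8_t>& channel)
{
    const auto [minIt, maxIt] = std::minmax_element(channel.begin(), channel.end());
    const std::uint8_t lo8 = *minIt, hi8 = *maxIt;
    const float scale = (hi8 > lo8) ? 255.0f / (hi8 - lo8) : 0.0f;

    for (auto& v : channel)
        v = static_cast<std::uint8_t>((v - lo8) * scale + 0.5f);

    return { lo8 / 255.0f, hi8 / 255.0f };
}
[/code]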

[quote name='kuroioranda' timestamp='1316577890' post='4864088']
A screenshot could be worth 1000 words here, so if you have some images of the problems you're encountering with texture compression we might be able to suggest solutions that don't require you to use 4-8 times more VRAM with uncompressed assets.
[/quote]
Completely agree, but I won't get the green light from the execs to provide samples.

The other option, which we're currently exploring, is reordering and dissociating channels into different texture fetches in order to get better compression results across the board.

Thanks for the pointers, both of you. If nothing else, having some of the theory confirmed gives us a few things to try out first.
[quote name='kuroioranda' timestamp='1316577890' post='4864088']
It's actually 6:1 (for a 24-bit source) or 8:1 (for a 32-bit source), not 16:1 or 4:1 (DXT3 and 5 are 4:1). So no, it's not as good as 2048->512, but it's still pretty awesome.
[/quote]
For a 24-bit source it's still 8:1 in VRAM, since no GPU on the market stores a 24-bit image in 3 bytes per pixel; they waste one more byte for alignment purposes. I don't know whether this holds true for handheld GPUs. However, the image stored in regular RAM (either for caching or in managed DX) is probably 6:1, when it is kept at all.

When stored on the HDD it is a 6:1 ratio, but that's hard to compare, as an uncompressed file would probably be stored as PNG (or TGA with RLE compression, or lossy JPEG), where the compressed size can vary drastically.
[quote name='Digitalfragment' timestamp='1316582114' post='4864107']
[quote name='kuroioranda' timestamp='1316577890' post='4864088']
Out of curiosity, does your art use a lot of solid color fills or gradients (such as 2D cell shading)? This is an area where texture compression artifacts can actually be noticed fairly easily, and it's common to just bite the bullet on memory and use full res uncompressed textures if these textures are relatively static and viewport aligned.
[/quote]
Far from. I already stated that the issue is closer to that of a stippling pattern (adjacent pixels have vastly different contrasts). The results of the various dxt compressors we have tried have in all cases 'smudged' the contrasted pixels out, or messing up the other areas. Worse comes to worst, we'll just programattical pick out dxt-blocks based on their correctness of multiple compressors and stitch them together. Though, thats gonna be a painfully slow automated process! ;)

FWIW: Gradients are an easy enough case to support DXT compression - normalise your channels and upload the white/black values in your shader to denormalize them for max precision.[/quote]
Sounds like the source is high-frequency data, meaning adjacent pixels are very different from one another. DXT isn't very good at that.
It's just like PNG vs JPEG: JPEG does poorly when compressing computer-made graphics (e.g. MS Excel charts), text, etc.

You may find this explanation very interesting [url="http://www.fsdeveloper.com/wiki/index.php?title=DXT_compression_explained"]for your artists[/url], so they know how to arrange their colours - what to do and what to avoid. Note how the first image is close to the original, while in the second one no compressed colour except one pixel actually matches the original (DXT works in 4x4 blocks).

[u]One final note:[/u] what is your memory budget and how much of it is used? There's little point in using DXT if you have plenty of unused VRAM and the memory bandwidth isn't saturated. Often, though, DXT is used pre-emptively to maximise the number of assets that can be included.
Tools like NVPerfHUD (NVIDIA), GPU PerfStudio 2 (ATI) and Intel GPA (Intel; works on other cards too, with less info) will tell you the GPU's memory and bandwidth usage.

[b]Edit:[/b] GIMP's DDS plugin comes with 3 different methods for selecting the colours used in DXT interpolation, plus a "dithering" option - 6 combinations in total. You may want to give them a try.

Good luck on your problem
Dark Sylinc

Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.

What about loading PNG/whatever files using D3DX11CreateShaderResourceViewFromFile, while setting the format to one of the DXGI_FORMAT_BCx formats? Can this be sped up by using an external lib? I'm talking about loading textures on the fly, not in a pre-process.
[quote name='MJP' timestamp='1316586639' post='4864122']
Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
[/quote][url="http://www.sjbrown.co.uk/squish/"]Squish[/url], at the moment. Got any recommendations?
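For what it's worth, squish itself exposes several speed/quality trade-offs through its flags; before ruling it out, it may be worth confirming the slow iterative cluster fit and the perceptual metric are actually in use. A minimal sketch against squish's public API (the wrapper function is mine; the flag and function names are squish's):

[code]
#include <squish.h>
#include <vector>

// rgba: width * height * 4 bytes, tightly packed RGBA8.
std::vector<unsigned char> compressDxt5HighQuality(const unsigned char* rgba, int width, int height)
{
    const int flags = squish::kDxt5
                    | squish::kColourIterativeClusterFit   // slowest, highest-quality colour fit
                    | squish::kColourMetricPerceptual;     // weight error by perceived luminance

    std::vector<unsigned char> blocks(squish::GetStorageRequirements(width, height, flags));
    squish::CompressImage(rgba, width, height, blocks.data(), flags);
    return blocks;
}
[/code]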
[quote name='Digitalfragment' timestamp='1316582114' post='4864107']Completely agree, but I wont be green light by the execs on providing samples.[/quote]
Could you demonstrate the problem with suitably licensed images from the net rather than your company's IP?


[quote]The other option, which we're currently exploring, is reordering and dissociating channels into different texture fetches in order to get better compression results across the board.

Thanks for the points, from the both of you. If nothing else, just having some theorywork confirmed gives us a few things to try out first.[/quote]
Have you looked at alternate encodings like [url=http://developer.download.nvidia.com/whitepapers/2007/Real-Time-YCoCg-DXT-Compression/Real-Time%20YCoCg-DXT%20Compression.pdf]YCoCg compression[/url] to see if they are effective with your images/fit in with your software?
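If it helps to evaluate that option: the colour transform itself is tiny. A sketch of the forward RGB -> YCoCg step is below (my own helper, biased so the chroma terms fit in unsigned bytes); the linked paper's layout then stores Y in the DXT5 alpha block and Co/Cg in the colour block, which is what preserves the luma detail.

[code]
#include <cstdint>

struct YCoCg { std::uint8_t co, cg, y; };

// Forward RGB -> YCoCg. The shader inverts it (after removing the 128 bias) with:
//   R = Y + Co - Cg,  G = Y + Cg,  B = Y - Co - Cg
YCoCg rgbToYCoCg(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
    const int y  = ( r + 2 * g + b) / 4;
    const int co = ( r - b) / 2 + 128;
    const int cg = (-r + 2 * g - b) / 4 + 128;
    return { static_cast<std::uint8_t>(co),
             static_cast<std::uint8_t>(cg),
             static_cast<std::uint8_t>(y) };
}
[/code]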
[quote name='pcmaster' timestamp='1316589870' post='4864138']
What about loading png/whatever files using D3DX11CreateShaderResourceViewFromFile, while setting format to some of the DXGI_FORMAT_BCx? Can this be speeded up by using an external lib? I'm talking loading textures on the fly, not in pre-process.
[/quote]

This will cause the D3DX library to first decode the PNG file and then re-encode it as BC*, which won't be very quick. There are libraries/middleware designed specifically for doing DXT compression on the fly at runtime... I know Allegorithmic has one that they advertise as being super-fast.
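For completeness, this is roughly what the "let D3DX convert it at load time" path that pcmaster describes looks like (a sketch against the legacy D3DX11 API from the DirectX SDK; fine for tools, but as noted the image decode plus CPU re-encode makes it a poor fit for streaming):

[code]
#include <d3dx11.h>   // legacy D3DX11 (DirectX SDK, June 2010)

// Loads an image file and asks D3DX to re-encode it as BC3 (DXT5) during the load.
HRESULT loadAsBC3(ID3D11Device* device, const wchar_t* path, ID3D11ShaderResourceView** outSrv)
{
    D3DX11_IMAGE_LOAD_INFO info;                 // constructor defaults everything to D3DX11_DEFAULT
    info.Format    = DXGI_FORMAT_BC3_UNORM;      // request block compression
    info.MipLevels = D3DX11_DEFAULT;             // full mip chain

    // The source is decoded and then re-encoded to BC3 on the CPU here - the slow part.
    return D3DX11CreateShaderResourceViewFromFileW(device, path, &info, nullptr, outSrv, nullptr);
}
[/code]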
[quote name='Hodgman' timestamp='1316593927' post='4864147']
[quote name='MJP' timestamp='1316586639' post='4864122']
Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
[/quote][url="http://www.sjbrown.co.uk/squish/"]Squish[/url], at the moment. Got any recommendations?
[/quote]

A while ago we did some comprehensive comparisons at work, and decided that ATI's Compress library produced the best quality (and was also the fastest, since it's multithreaded).
[quote name='Matias Goldberg' timestamp='1316583108' post='4864111']
You may find this explanation very interesting [url="http://www.fsdeveloper.com/wiki/index.php?title=DXT_compression_explained"]for your artists[/url] so they know how to arrange the colours. What to do and what to avoid. Note how the first image is close to the original, while in the second one, no compressed colour except 1 pixel actually matches the original (DXT works in 4x4 blocks)[/quote]
Awesome link, hadn't read that before. The red-green-blue-grey example is a perfect one to give to the artists.

[quote name='Matias Goldberg' timestamp='1316583108' post='4864111']
[u]One final note:[/u] What is the memory budget and how much is used? there's little point in using DXT if you have plenty of unused VRAM and the memory bandwidth isn't saturated. Often though, DXT is used preemptively to max out the number of assets that can be included.
Tools like NVPerfHUD (NVIDIA); GPU PerfStudio 2 (ATI) & Intel GPA (Intel, works on other cards too, with less info) will tell you the GPU's memory & bandwidth usage.
[/quote]
As far as "little point" goes, though, there's this: when you're limited to 256 MB of RAM, a single uncompressed 1024 diffuse map costs 4 MB (~5.3 MB including mipmaps), and you triple that to incorporate the other surface data. That's over 15 MB, and we are only talking about a single character's head here, not even counting its body textures or the vertex data. (And yes, that completely ignores the concept of streaming in high-res data etc.; it was purely pointing out the cost of the data assuming it's all in memory.)
I'm used to using the console equivalents of those tools. PerfHUD and PerfStudio can't hold a candle to PIX for 360 :)
[quote name='MJP' timestamp='1316630302' post='4864293']
[quote name='Hodgman' timestamp='1316593927' post='4864147']
[quote name='MJP' timestamp='1316586639' post='4864122']
Which library are you using for DXT compression? They are most definitely not all equal in terms of quality or performance.
[/quote][url="http://www.sjbrown.co.uk/squish/"]Squish[/url], at the moment. Got any recommendations?
[/quote]

A while ago we did some comprehensive comparisons at work, and decided that ATI's Compress library produced the best quality (and was also the fastest, since it's multithreaded).
[/quote]
I just downloaded it to hand off to our tech artist to try out :)
[quote name='Matias Goldberg' timestamp='1316583108' post='4864111']
You may find this explanation very interesting [url="http://www.fsdeveloper.com/wiki/index.php?title=DXT_compression_explained"]for your artists[/url] so they know how to arrange the colours. What to do and what to avoid. Note how the first image is close to the original, while in the second one, no compressed colour except 1 pixel actually matches the original (DXT works in 4x4 blocks)
[/quote]

On that point again: if DXT1 is an 8:1 compression ratio, it definitely seems worthwhile to split all of our alpha channels out into their own DXT1 texture, and then not downscale the textures that we need the extra clarity on.
Assuming my math is right:
uncompressed source data: 1024x1024 RGBA: 4MB x 3 = 12MB
current compression: 1024x1024 DXT5: 1MB x 3 = 3MB
with alphas separated: 1024x1024 DXT1: 512KB x 4 = 2MB

Then, if we take the problem texture, upscale it to 2048, and DXT1-compress that, we get 2MB + 512KB x 3 = 3.5MB -
only a fraction more expensive memory-wise, plus an extra texture fetch and some register swizzling.

Bilinear upscaling the problem texture should halve its frequency content and get a slightly better compression result.
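For reference, the same budget as a compile-time sanity check (just the standard 4 / 1 / 0.5 bytes-per-texel figures for RGBA8 / DXT5 / DXT1; the numbers above work out):

[code]
constexpr double MB = 1024.0 * 1024.0;

constexpr double rgba8_1k = 1024 * 1024 * 4.0 / MB;   // 4 MB per uncompressed 1024^2 map
constexpr double dxt5_1k  = 1024 * 1024 * 1.0 / MB;   // 1 MB per 1024^2 DXT5 map
constexpr double dxt1_1k  = 1024 * 1024 * 0.5 / MB;   // 0.5 MB per 1024^2 DXT1 map
constexpr double dxt1_2k  = 2048 * 2048 * 0.5 / MB;   // 2 MB per 2048^2 DXT1 map

static_assert(rgba8_1k * 3 == 12.0, "uncompressed source set");
static_assert(dxt5_1k  * 3 == 3.0,  "current DXT5 set");
static_assert(dxt1_1k  * 4 == 2.0,  "3 colour maps + 1 packed-alpha map");
static_assert(dxt1_2k + dxt1_1k * 3 == 3.5, "problem texture upscaled to 2048");
[/code]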

I'm not convinced the 3 alpha values will be correctly preserved in the DXT1 texture. It's a matter of trying it and seeing the resulting quality.

Out of curiosity, what's in the other 2 textures? One is diffuse; and the other two?

For example, DXTn is a [b][i]terrible[/i][/b] choice for normal maps (read [url="ftp://download.nvidia.com/developer/Papers/2004/Bump_Map_Compression/Bump_Map_Compression.pdf"]this paper[/url] for more info about compressing normal maps; note it's old, and DX10 now supports new compression formats specifically designed for normal maps, though IIRC they're not available on the X360).

My point is that a bad choice for a key texture will make the whole model look bad. Notice in the paper how choosing DXT1 for bump mapping introduces awful artifacts that can easily be mistaken for artifacts in the diffuse texture.

Note about the paper: it recommends the CxV8U8 format because it's "hardware accelerated", but current-generation hardware dropped support for it; the driver emulates it by converting the texture to Q8W8V8U8, which defeats the whole purpose. If you like that alternative (personal opinion: a very good one), use the V8U8 format and calculate the Z component in the shader.
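On the V8U8 route: the Z reconstruction is just the unit-length constraint, so it costs one line in the pixel shader. The math in plain C-style code (an HLSL version looks essentially the same; the helper name is mine):

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

// Tangent-space normal stored in two signed channels (V8U8); z is rebuilt from
// the unit-length constraint x^2 + y^2 + z^2 = 1.
Vec3 reconstructNormal(float x, float y)     // x, y already decoded to [-1, 1]
{
    const float zSq = 1.0f - x * x - y * y;
    const float z   = std::sqrt(zSq > 0.0f ? zSq : 0.0f);   // clamp against filtering error
    return { x, y, z };
}
[/code]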

[quote name='Matias Goldberg' timestamp='1316713475' post='4864773']
I'm not convinced the 3 alpha values will be correctly preserved in the DXT1 texture. It's a matter of trying and seeing the resulting quality.

Out of curiousity... what's in the other 2 textures? 1 is for diffuse, and the other 2?

For example DXTn is a [b][i]terrible[/i][/b] choice for normal maps (read [url="ftp://download.nvidia.com/developer/Papers/2004/Bump_Map_Compression/Bump_Map_Compression.pdf"]this paper[/url] for more info about compressing normal maps; note it's old and DX10 now supports new compression formats specifically designed for normal maps, IIRC they're not avaialable in the X360 though).

My point is, if your choice is bad at a key texture; it will make the whole model bad. Notice in the paper how choosing DXT1 for bump mapping introduces awful artifacts that can be easily mistaken as artifacts in the diffuse texture.

Note about the paper: it recommends using the CxV8U8 format because it's "hardware accelerated", but current generation hardware dropped support for it, the driver emulates it by converting the texture into Q8W8U8V8 which beats the whole point. If you like that alternative (personal opinion: a very good one), use V8U8 format and calculate the Z component in the shader.


[/quote]
The content changes per material, but it's typically some of the following: albedo colour, specular colour, surface roughness, specular exponent, Fresnel term, ambient occlusion, lightmaps, pseudo-directional self-occlusion, microdetail masks, opacity. Normal maps we don't DXT unless they're on background objects where the artifacts aren't noticeable. We also have some shaders that take two normal maps packed into one texture.