> block compress the texture first, offline, then compress that (e.g. with Zlib). This will give around another 50% saving, and the only work to do on loading is decompression, direct to the final format.

Indeed, this is standard practice these days.
Instead of using standard compression on them, though, another option is the crunch library, which offers two modes --
* a rate-distortion optimised DXT compressor, which reduces quality slightly, but produces files that can be compressed much further by standard compression algorithms.
* its own compressed format, "CRN" -- also a lossy block-based format, but one that can be directly (and efficiently) transcoded from CRN to DXT, for small on-disk sizes and fast loading.
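To make the first route concrete, here's a minimal load-time sketch: inflate a zlib-compressed, pre-built DXT1 texture and hand it straight to the GPU. The `TexHeader` layout is hypothetical (whatever your build tools write out), and it assumes zlib plus an OpenGL context where glCompressedTexImage2D and the S3TC formats are available:

```cpp
// Load-time sketch: inflate zlib-compressed, pre-built DXT1 data and upload it
// directly. TexHeader is a hypothetical layout written by the build tools:
// width, height, and the size of the zlib stream that follows.
#include <cstdio>
#include <cstdint>
#include <vector>
#include <zlib.h>
#include <GL/gl.h>

#ifndef GL_COMPRESSED_RGB_S3TC_DXT1_EXT
#define GL_COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
#endif

struct TexHeader { uint32_t width, height, compressedSize; }; // hypothetical

GLuint LoadDxt1Texture(const char* path)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return 0;

    TexHeader hdr;
    if (std::fread(&hdr, sizeof(hdr), 1, f) != 1) { std::fclose(f); return 0; }
    std::vector<unsigned char> packed(hdr.compressedSize);
    std::fread(packed.data(), 1, packed.size(), f);
    std::fclose(f);

    // DXT1 is 8 bytes per 4x4 block, so the inflated size is known up front
    // and we can decompress directly into the final buffer.
    uLongf dxtSize = ((hdr.width + 3) / 4) * ((hdr.height + 3) / 4) * 8;
    std::vector<unsigned char> dxt(dxtSize);
    if (uncompress(dxt.data(), &dxtSize, packed.data(), (uLong)packed.size()) != Z_OK)
        return 0;

    // No decoding, filtering, or recompression -- the bytes are already in the
    // GPU's block-compressed format.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           hdr.width, hdr.height, 0, (GLsizei)dxtSize, dxt.data());
    return tex;
}
```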
> My other question would be: I'm sure the programmers know about the slow loading times, but why not fix it?

Fixing things takes time, and time is money... which makes that a question for the business managers, not the engineers.
> For example, in my case (on a PC), loading textures is a good part of the startup time, and a lot of this is due to resampling the (generally small) number of non-power-of-2 textures to power-of-2 sizes. This is then followed by the inner loops for doing the inverse-filtering for PNG files, parsing text files, ...

Just say no! Don't perform any image filtering/re-sampling/transcoding or parsing at load-time; move that work to build-time!

As phantom mentioned, DXT compression is very slow, so if you want fast texture fetching and low VRAM usage, you'll also be wasting a lot of load-time recompressing the image data that you just decompressed from PNG!
On the past 3 engines I've used, we've used ZIP-like archives for final builds, and just loose files in the OS's file-system for development builds, because building/editing the huge archive files is slow.
A disadvantage of ZIP during development, though, is that the archive can't be readily accessed by the OS or by "normal" apps.
However, the above issue (that your content tools can't write to your archive directly) isn't actually an issue: even when we're using the OS's file-system, the content tools can't write to those files either, because they've been compiled into runtime-efficient formats!
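To illustrate, here's a minimal sketch of that dual-backend idea -- the game reads everything through one interface, and whether it's backed by loose files or one big archive is decided at startup. All the names here are made up for illustration:

```cpp
// Sketch of the dual-backend file system: the engine reads through one
// interface; development uses loose OS files, retail uses a single archive.
#include <cstdio>
#include <cstdint>
#include <string>
#include <vector>
#include <unordered_map>

struct IFileSource {
    virtual ~IFileSource() {}
    virtual bool Read(const std::string& name, std::vector<uint8_t>& out) = 0;
};

// Development builds: plain files, so "normal" apps can still see them.
struct LooseFileSource : IFileSource {
    std::string root;
    explicit LooseFileSource(std::string r) : root(std::move(r)) {}
    bool Read(const std::string& name, std::vector<uint8_t>& out) override {
        FILE* f = std::fopen((root + "/" + name).c_str(), "rb");
        if (!f) return false;
        std::fseek(f, 0, SEEK_END);
        out.resize((size_t)std::ftell(f));
        std::fseek(f, 0, SEEK_SET);
        bool ok = std::fread(out.data(), 1, out.size(), f) == out.size();
        std::fclose(f);
        return ok;
    }
};

// Retail builds: one big archive with a table of contents loaded at boot
// (TOC parsing omitted -- any ZIP-like container works the same way).
struct ArchiveFileSource : IFileSource {
    struct Entry { uint64_t offset, size; };
    FILE* archive = nullptr;
    std::unordered_map<std::string, Entry> toc; // name -> location in archive
    bool Read(const std::string& name, std::vector<uint8_t>& out) override {
        auto it = toc.find(name);
        if (it == toc.end()) return false;
        std::fseek(archive, (long)it->second.offset, SEEK_SET);
        out.resize((size_t)it->second.size);
        return std::fread(out.data(), 1, out.size(), archive) == out.size();
    }
};
```

Swapping backends is then a one-line decision at startup; nothing above this interface has to care where the bytes came from.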
The data flow looks something like:
```
[Content Tools] --> Content Source Repository --> [Build Tools] --> Data directory --> [Build Tools] --> Archive file
                                                                          |                                   |
                                                                         \|/                                 \|/
                                                                 In-Development game                     Retail game
```

Just as we don't manually compile our code any more -- everyone uses an IDE or at least a makefile -- you should also be using an automated system for building the data that goes into your game. The 3 engines that I mentioned above all used a workflow similar to the diagram, where, when an artist saves a new/edited "source" art file, the build system automatically compiles that file and updates the data directory and/or the "ZIP archive".
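The "automatically compiles" part doesn't need anything fancier than a makefile-style dependency check; a minimal sketch, assuming a POSIX stat():

```cpp
// Makefile-style dependency check: a source asset is recompiled only when it
// is newer than its built output.
#include <sys/stat.h>
#include <string>

bool NeedsRebuild(const std::string& source, const std::string& built)
{
    struct stat src {}, dst {};
    if (stat(source.c_str(), &src) != 0) return false; // no source: nothing to build
    if (stat(built.c_str(), &dst) != 0)  return true;  // never built yet
    return src.st_mtime > dst.st_mtime;                // source saved more recently
}
```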
For example, if someone saves out an NPOT PNG file, the build tools will automatically load, decode, filter, and resample that data, then compress it using an expensive DXT compression algorithm, then save it in the platform-specific format (e.g. DDS) in the data directory for the game to use. Then at load-time, the game has no work to do, besides streaming in the data.
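For illustration, here's roughly what that build-time texture compiler could look like, sketched with the public-domain stb libraries (stb_image, stb_image_resize v1, stb_dxt); the WriteDds() helper is hypothetical:

```cpp
// Build-time texture compiler sketch: NPOT PNG in, GPU-ready DXT1 out.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define STB_IMAGE_RESIZE_IMPLEMENTATION
#include "stb_image_resize.h"
#define STB_DXT_IMPLEMENTATION
#include "stb_dxt.h"
#include <cstdint>
#include <cstring>
#include <vector>

static int NextPow2(int v) { int p = 1; while (p < v) p <<= 1; return p; }

bool WriteDds(const char* path, int w, int h,
              const std::vector<uint8_t>& blocks); // hypothetical helper

bool BuildTexture(const char* srcPng, const char* dstDds)
{
    int w, h, comp;
    unsigned char* pixels = stbi_load(srcPng, &w, &h, &comp, 4); // force RGBA
    if (!pixels) return false;

    // Resample NPOT sources to power-of-2 once, here, instead of at every
    // load (clamped to 4 so there's at least one full DXT block).
    int pw = NextPow2(w < 4 ? 4 : w), ph = NextPow2(h < 4 ? 4 : h);
    std::vector<uint8_t> pot((size_t)pw * ph * 4);
    stbir_resize_uint8(pixels, w, h, 0, pot.data(), pw, ph, 0, 4);
    stbi_image_free(pixels);

    // Compress each 4x4 pixel block to DXT1 (8 bytes/block). This is the
    // slow, high-quality pass that should never run at load-time.
    std::vector<uint8_t> blocks((size_t)(pw / 4) * (ph / 4) * 8);
    uint8_t* out = blocks.data();
    uint8_t block[4 * 4 * 4];
    for (int by = 0; by < ph; by += 4)
        for (int bx = 0; bx < pw; bx += 4) {
            for (int y = 0; y < 4; ++y)
                std::memcpy(&block[y * 16], &pot[(((size_t)by + y) * pw + bx) * 4], 16);
            stb_compress_dxt_block(out, block, 0, STB_DXT_HIGHQUAL);
            out += 8;
        }
    return WriteDds(dstDds, pw, ph, blocks);
}
```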