Quote: Original post by Yann L
Quote:
I would argue a different way: DXT lets you use higher resolution textures with the same file size as a smaller, uncompressed texture. That is, if I use DXT5, I can use a 1024x1024 image where a 512x512 image was before, without any additional cost in terms of performance or VRAM.
Not entirely correct. Texture fetch latency and caching behaviour will be (slightly) affected, and due to more mipmap levels you'll inevitably increase memory usage (assuming DXT5).
How so? The compressed but higher resolution texture is the same size in memory as the uncompressed, smaller version. And I don't think it increases memory usage even counting the mipmap levels (compared with an uncompressed but smaller texture plus its mipmaps). Or if it does, it's a few bytes for the one extra level :)
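To put rough numbers on that (my own back-of-the-envelope math, not from the thread): DXT5 costs 1 byte per texel, stored in 16-byte 4x4 blocks, while uncompressed RGBA8 costs 4 bytes per texel, so the full mip chains of a 1024x1024 DXT5 texture and a 512x512 RGBA8 texture come out nearly identical:

```python
def dxt5_level_bytes(size):
    # DXT5: 16 bytes per 4x4 block; tiny mips still occupy one whole block
    blocks = max(size // 4, 1)
    return blocks * blocks * 16

def rgba8_level_bytes(size):
    # Uncompressed RGBA8: 4 bytes per texel
    return size * size * 4

def chain(size, level_bytes):
    # Sum the full mip chain down to 1x1
    total = 0
    while size >= 1:
        total += level_bytes(size)
        size //= 2
    return total

print(chain(1024, dxt5_level_bytes))   # 1398128 bytes
print(chain(512, rgba8_level_bytes))   # 1398100 bytes
```

The difference is 28 bytes, all of it from the extra 1x1, 2x2, and 4x4 DXT levels padding out to full blocks.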
Quote:
However, this was not my point. When you design your game and all artwork assets, you define a common lowest denominator for it to work on. Say you target 512MB VRAM class hardware. You will choose your resolutions and compression settings so to avoid (too much) swapping on such a card. Now, imagine some user buying a new 1GB VRAM card. Essentially, since your game was designed with 512MB in mind, half of his memory will be unused and the textures will look the same as on a 512MB card. Mr.NewCardOwner will not be happy.
In your scenario you're doing exactly what I'm saying. You're essentially designing your game with imaginary future tech cards in mind, with large texture sizes, and allowing current lowest denominator cards to run by compressing the textures.
Quote:
Quote:
Probably the optimal version would let the artists specify an optional compression format in an options file for every texture.
I wouldn't trust an average artist to handle that [wink]
Really? Are you afraid the artists will never choose compression because they don't want their art to have artifacts?
Quote:
If you think like that, then why not consider JPEG instead? It's going to be smaller, after all. Of course there's the JPEG->DXTC transcoding, which would suck. But seriously, while texture loading is HDD limited, it only makes up a small percentage of the overall loading time of a scene.
What else are you doing in your scene? The time it takes to stream assets (textures, models, etc.) from disk to the GPU should be the bottleneck. If other things are slowing your loading times down, I can't help but feel you're doing something wrong. I can't imagine anything CPU intensive that should be going on during a level load (except perhaps decompressing data from disk to make up for poor hard drive speeds). And maybe randomized properties, I guess: if you have 1000 monsters that start at random positions, that might take a while. But most games aren't like that.
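As a sanity check (using an assumed ~80 MB/s sustained read rate for a typical HDD of the era, not a figure from the thread): even a large texture set is only a few seconds of pure disk time, so a load screen much slower than this is spending its time somewhere else:

```python
# Rough level-load estimate; 80 MB/s is an assumed HDD sustained read rate
texture_bytes = 512 * 1024 * 1024        # 512 MB of texture data
hdd_bytes_per_s = 80 * 1024 * 1024
load_seconds = texture_bytes / hdd_bytes_per_s
print(load_seconds)                      # 6.4
```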
Quote:
I just can't understand how people can store their assets in a format that was never meant to be a lossy image storage format. It was designed as a compression format with a GPU-friendly decoding process. It is good at doing this for now, but why use it for something where other, better alternatives exist?
If you're storing textures on disk, you probably want to avoid JPG because it's the worst of both worlds: artifacts, and uncompressed in VRAM (or worse, you're DXT compressing the JPG, stacking artifacts on artifacts). That leaves lossless, where PNG is king; or hardware accelerated lossy, where DDS makes the most sense, since the GPU natively supports it; or roll your own, which could support either.
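To make the "uncompressed in VRAM" point concrete (my own numbers, not from the thread): a JPEG has to be decoded to uncompressed RGBA8 before upload, while a DXT5 texture stays block compressed on the GPU:

```python
# VRAM cost of one 1024x1024 texture, base level only
texels = 1024 * 1024
rgba8_from_jpeg = texels * 4   # JPEG decodes to RGBA8: 4 bytes/texel -> 4 MB
dxt5_native = texels * 1       # DXT5 stays compressed on the GPU: 1 byte/texel -> 1 MB
print(rgba8_from_jpeg // (1024 * 1024), dxt5_native // (1024 * 1024))  # 4 1
```

So the smaller on-disk JPEG ends up four times larger in VRAM than the same-resolution DXT5 texture.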
If/when they invent hardware accelerated JPG and PNG decoding, I will totally jump on that bandwagon. But I would probably still be using the DDS format: Microsoft would likely add a new DDS mode that stores JPG or PNG compressed data to keep DDS current. Compression aside, DDS is the best general purpose texture format, because it's the only one designed specifically with game hardware in mind.