IvanK

Texture compression / DDS files


Hi. I would like to show you my tool https://www.Photopea.com . You can use it as a viewer of .DDS files (it works even on your phone). It supports the BC1, BC2, BC3 (DXT1, DXT3, DXT5) and BC7 (DX10) compression formats. I would be glad if you could test it a little, if you have a minute.

Next, I have a philosophical question regarding texture distribution. I am new to this area.

As I understand it, we want textures to be small "on the wire" (on a DVD / HDD / delivered over the internet), and we also want them to be small in GPU memory. I think it is clear that any non-GPU lossy compression (such as JPG or WebP) can achieve a much better quality/size ratio than any DXTx format (even zipped DXTx). So JPG or WebP is more suitable for use "on the wire".
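To make the size argument concrete: BCx formats are fixed-rate, so the compressed size follows directly from the resolution (BC1 uses 8 bytes per 4x4 block, i.e. 4 bits per pixel; BC2/BC3/BC7 use 16 bytes per block). A small sketch; the JPG figure in the comment is only a typical ballpark, not a fixed property of the format:

```python
def bc1_size(width, height):
    """Size in bytes of a BC1/DXT1 image: 8 bytes per 4x4 block (4 bits/pixel).
    Dimensions are rounded up to whole blocks."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * 8

def bc3_size(width, height):
    """BC2/BC3 (DXT3/DXT5) and BC7 use 16 bytes per 4x4 block (8 bits/pixel)."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * 16

# A 1024x1024 texture: BC1 -> 512 KiB, BC3/BC7 -> 1 MiB (regardless of content),
# while a JPG of comparable visual quality is often well under 200 KiB.
```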

I often see developers directly distributing textures in DXTx format (DDS files) "on the wire". The usual justification is that decoding a JPG and re-encoding it into DXTx (at the moment the texture is used) would be too time-consuming, while DXTx can be copied to the GPU without any modification.

I implemented a very naive DXT1 compressor in Photopea (File - Export - DDS) and it is surprisingly fast (a 1 MPx texture takes 80 ms to encode). So I feel like compressing textures to DXTx right before sending them to the GPU makes sense. So what is the purpose of the DDS format? Why do developers distribute textures as DDS "on the wire" when better compression methods exist?
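A naive DXT1 block encoder of the kind described here can be sketched as follows (illustrative Python, not Photopea's actual code; picking the brightest and darkest pixels as endpoints is the simplest possible heuristic, real encoders search much harder):

```python
def to_565(r, g, b):
    """Pack 8-bit RGB into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_565(c):
    """Expand RGB565 back to 8-bit RGB."""
    r = ((c >> 11) & 31) * 255 // 31
    g = ((c >> 5) & 63) * 255 // 63
    b = (c & 31) * 255 // 31
    return (r, g, b)

def encode_dxt1_block(pixels):
    """pixels: list of 16 (r, g, b) tuples for one 4x4 block.
    Returns 8 bytes: two RGB565 endpoints + 16 two-bit palette indices."""
    # Naive endpoint selection: darkest and brightest pixel in the block.
    lo = min(pixels, key=sum)
    hi = max(pixels, key=sum)
    c0, c1 = to_565(*hi), to_565(*lo)
    if c0 == c1:  # uniform block: all indices point at endpoint 0
        return c0.to_bytes(2, "little") + c1.to_bytes(2, "little") + b"\x00\x00\x00\x00"
    if c0 < c1:   # c0 > c1 selects the opaque 4-colour mode
        c0, c1 = c1, c0
    p0, p1 = from_565(c0), from_565(c1)
    # 4-entry palette: the endpoints plus two interpolated colours.
    palette = [p0, p1,
               tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
               tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    bits = 0
    for i, px in enumerate(pixels):  # nearest palette entry per pixel
        best = min(range(4), key=lambda j: sum((a - b) ** 2 for a, b in zip(px, palette[j])))
        bits |= best << (2 * i)
    return c0.to_bytes(2, "little") + c1.to_bytes(2, "little") + bits.to_bytes(4, "little")
```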

Edited by IvanK

2 hours ago, IvanK said:

Why do developers distribute textures in the DDS "on the wire", when there are better compression methods?

Simplicity, I guess.

IIRC, Rage's MegaTexture implementation used a highly compressed on-disk format that was transcoded to a GPU-readable block-compressed format.

2 hours ago, IvanK said:

So what is the purpose of the DDS format?

You compress only a block of 4x4 pixels: a local compression. You could of course do a much better job by treating the full texture as a single block and compressing it all at once, which for common textures can never result in worse compression (though a single badly-fitting local block can reduce the overall quality of a global compression algorithm, and there are noise textures as well).

But the benefit of local compression is local decompression, which is beneficial for your caches. And since memory bandwidth and latency are real bottlenecks, you get the picture ;)
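The random-access property described above can be made concrete: because BC1 is fixed-rate, the block containing any texel sits at a directly computable offset, so the hardware can fetch and decode just that one 8-byte block instead of decompressing anything else. A sketch of the address calculation (assuming a simple row-major block layout, no mip chain):

```python
def bc1_block_offset(x, y, width):
    """Byte offset of the 8-byte BC1 block containing texel (x, y)
    in an image 'width' pixels wide, with blocks stored row-major."""
    blocks_per_row = (width + 3) // 4
    return ((y // 4) * blocks_per_row + (x // 4)) * 8
```

A variable-rate format like JPG has no such property: to reach an arbitrary pixel you must decode from the start of its entropy-coded stream, which is exactly why it cannot be sampled directly by a texture unit.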

Edited by matt77hias


Thanks. I understand the BC1 - BC7 compression formats; I implemented them all in Photopea (my own code, everything runs on the CPU).

It is true that computing "the best" BCx compression (the one with the smallest error) can be very expensive (especially for BC7). But in practice a nearly-optimal compression is enough (if it were not, you would not use compression at all), and it can be computed in real time just before sending the texture to the GPU.

So I think that artists should store textures as JPG, and games should load the JPG, decode the raw data, compress it into BCx and send that to the GPU. A BCx compressor can be a tiny library of about 10 kB; there is no need for any huge binaries from nVidia or ATI. Storing textures as JPG is better because they are always smaller than DDS at the same quality. That is why I don't understand why DDS files are created and distributed.
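The load path proposed here (decode JPG, take the raw pixels, block-compress, upload) needs the decoded image cut into 4x4 blocks first. A minimal sketch of that tiling step, with edge texels clamped so non-multiple-of-4 images still produce full blocks (function name hypothetical; the per-block encoder is whatever BCx compressor the engine ships):

```python
def image_to_blocks(pixels, width, height):
    """Split a row-major list of (r, g, b) pixels into 4x4 blocks,
    clamping at the edges, ready to feed a per-block BCx encoder."""
    blocks = []
    for by in range(0, height, 4):
        for bx in range(0, width, 4):
            block = []
            for y in range(4):
                for x in range(4):
                    sx = min(bx + x, width - 1)   # clamp edge texels
                    sy = min(by + y, height - 1)
                    block.append(pixels[sy * width + sx])
            blocks.append(block)
    return blocks
```

Each resulting 16-pixel block would then be encoded independently and the outputs concatenated, which is also what makes the compression step easy to parallelise across threads.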


Do the higher BC modes like BC7 render faster on the GPU than the lower equivalent modes like BC3?

The documentation seems to imply it, but I haven't timed it.

24 minutes ago, IvanK said:

So I think that artists should store textures in JPG

Did you test how much the double compression (JPG, then BCx) reduces the quality of the final texture? I assume it might cause issues for normals and roughness, because JPG preserves features visible to the human eye but not other kinds of data.

Also, streaming directly from the HDD without CPU cost is nice to have, but the conversion could be done once, during installation.

9 minutes ago, JoeJ said:

I assume it might have issues on normals and roughness because JPG cares for features visible to the human eye, but not for other stuff.

I remember thinking about this a few years ago. I think the solution would be to modify a compression algorithm like JPG in two ways:

1. Adapt it to the use case of normals, roughness, etc.

2. To hide blocky artefacts, interpolate the compression encoding with that of neighbouring blocks (similar to how bilinear filtering works). This might reduce quality a bit, but the blocky artefacts would disappear, at the cost of roughly 4 times slower decompression.

Not sure if it's worth it.
