_under7_

Question about encoding data to DXT compressed DDS


Hi everyone! I'm working on a little tool that's supposed to convert DDS files between formats, e.g. 32-bit ARGB to DXT1 or vice versa. DDS files are pretty simple things in terms of format, until you get to encoding the DXT compression; that's where I'm stuck. Any ideas on how to do that? It seems that DXT compression is some kind of palette-building algorithm, where in the end you get 2 colors per 16-texel block. I've got no idea how to pick those colors; can anybody help?...

P.S. OK, let's take the simplest case: DXT1, a plain opaque texture. I suspect the outline of the encoding algorithm goes like this:
1) Build a palette for each 4x4-texel block (colors must be in 5-6-5 format).
2) Using some kind of analysis, fill each texel in the block with one of 4 values: 00, 01, 10, or 11, where 00 is the first color for the block and 11 is the second one; 01 and 10 are derived from 00 and 11 using interpolation... ahem.
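For reference, here's a minimal decoder sketch of the opaque DXT1 block layout being described. One detail worth noting: in standard DXT1, index 0 selects the first endpoint and index 1 the second, with indices 2 and 3 as the interpolated colors. The struct and function names here are illustrative, not from any library:

```cpp
#include <array>
#include <cstdint>

// Illustrative layout of one opaque DXT1 block (8 bytes per 4x4 texels).
struct Dxt1Block {
    uint16_t color0;   // endpoint 0, RGB 5:6:5
    uint16_t color1;   // endpoint 1, RGB 5:6:5
    uint32_t indices;  // 16 texels * 2 bits, texel 0 in the low bits
};

struct Rgb { int r, g, b; };

// Expand a 5:6:5 color to 8 bits per channel (replicating high bits).
static Rgb expand565(uint16_t c) {
    int r = (c >> 11) & 0x1F, g = (c >> 5) & 0x3F, b = c & 0x1F;
    return { (r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2) };
}

// Decode one opaque block (the color0 > color1 mode) into 16 texels.
static std::array<Rgb, 16> decodeBlock(const Dxt1Block& blk) {
    Rgb c0 = expand565(blk.color0), c1 = expand565(blk.color1);
    Rgb palette[4] = {
        c0,                                                                      // index 0
        c1,                                                                      // index 1
        { (2 * c0.r + c1.r) / 3, (2 * c0.g + c1.g) / 3, (2 * c0.b + c1.b) / 3 }, // 2/3 c0 + 1/3 c1
        { (c0.r + 2 * c1.r) / 3, (c0.g + 2 * c1.g) / 3, (c0.b + 2 * c1.b) / 3 }, // 1/3 c0 + 2/3 c1
    };
    std::array<Rgb, 16> out{};
    for (int i = 0; i < 16; ++i)
        out[i] = palette[(blk.indices >> (2 * i)) & 0x3];
    return out;
}
```

Encoding is the inverse problem: pick the two endpoints and then assign each texel the index of the nearest palette entry.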

This is an error-minimization problem. You need to come up with some metric for error given the 16 original colors and your two endpoint colors. Sum of squared differences is probably not a bad choice. Then you need to figure out a way of picking the two endpoint colors as to minimize that error. This is not as easy as it looks, because the error is not continuous as you move the two endpoints. It jumps as texels move from being closer to one discrete interpolated value to the next.
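The error metric described here can be sketched as follows: for a candidate pair of endpoints, each texel snaps to the nearest of the four palette entries, and the squared distances are summed. All names are illustrative, and the palette math assumes 8-bit-per-channel colors:

```cpp
struct Color { int r, g, b; };  // 8 bits per channel

// Sum-of-squared-differences error for one 4x4 block given two candidate
// endpoints c0 and c1. Each texel is matched to its nearest palette entry.
static long blockError(const Color texels[16], Color c0, Color c1) {
    Color palette[4] = {
        c0, c1,
        { (2 * c0.r + c1.r) / 3, (2 * c0.g + c1.g) / 3, (2 * c0.b + c1.b) / 3 },
        { (c0.r + 2 * c1.r) / 3, (c0.g + 2 * c1.g) / 3, (c0.b + 2 * c1.b) / 3 },
    };
    long total = 0;
    for (int i = 0; i < 16; ++i) {
        long best = -1;
        for (const Color& p : palette) {
            long dr = texels[i].r - p.r, dg = texels[i].g - p.g, db = texels[i].b - p.b;
            long d = dr * dr + dg * dg + db * db;
            if (best < 0 || d < best) best = d;
        }
        total += best;  // the error jumps as a texel flips to a different entry
    }
    return total;
}
```

An encoder would evaluate this for many candidate endpoint pairs and keep the pair with the lowest total; the discontinuity mentioned above shows up because moving an endpoint slightly can flip which palette entry a texel snaps to.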

It's a fun problem to solve... enjoy!

xyzzy

Quote:
Original post by DBX
The DXT formats are fully described in the DX SDK.


You call 4 tiny pages on the DDS format and 4 more on compressed textures "fully described"? Have you ever read them? I really doubt it.

Quote:
Original post by xyzzy00
This is an error-minimization problem. You need to come up with some metric for error given the 16 original colors and your two endpoint colors. Sum of squared differences is probably not a bad choice. Then you need to figure out a way of picking the two endpoint colors as to minimize that error. This is not as easy as it looks, because the error is not continuous as you move the two endpoints. It jumps as texels move from being closer to one discrete interpolated value to the next.

It's a fun problem to solve... enjoy!

xyzzy


Having fun already! :)
Thanks for getting me started; I really had no idea where to begin. Anyway, the problem I described is just part of a bigger thing. I've got a texture manager here that's supposed to load JPEG, PNG, or BMP textures, convert them to raw data, resize them depending on the currently chosen screen resolution, and then convert the result to a compressed DDS. Seems crazy, huh? The reason it's done this way is that the textures we use are huge, 2048x1536 for example. I need to minimize disk usage (that's why the textures are originally JPEGs) as well as system and video card memory (DXT compression comes in handy there). So, basically, if the user chooses 640x480, I'd resize and recompress all the textures, sparing memory and disk space (why keep several different versions of the textures on the HDD?). All of this must be done at runtime during the texture preload routine, which makes the problem even more fun.
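The resize step in a pipeline like this could be as simple as a box filter on the raw 32-bit data before recompressing. A minimal sketch, assuming even dimensions and 4 bytes per texel (a real resizer would also handle odd sizes and arbitrary ratios):

```cpp
#include <cstdint>
#include <vector>

// Illustrative 2x box-filter downscale on raw 32-bit (e.g. ARGB) texels.
// src holds w*h packed texels; the result is (w/2)*(h/2) texels where each
// output texel is the rounded per-channel average of a 2x2 source block.
static std::vector<uint32_t> halve(const std::vector<uint32_t>& src, int w, int h) {
    std::vector<uint32_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            uint32_t acc[4] = {0, 0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    uint32_t p = src[(2 * y + dy) * w + (2 * x + dx)];
                    for (int c = 0; c < 4; ++c)
                        acc[c] += (p >> (8 * c)) & 0xFF;  // accumulate each byte channel
                }
            uint32_t out = 0;
            for (int c = 0; c < 4; ++c)
                out |= ((acc[c] + 2) / 4) << (8 * c);     // rounded average of 4 texels
            dst[y * (w / 2) + x] = out;
        }
    }
    return dst;
}
```

Repeating this gives the power-of-two mip chain for free, and each output level can be fed straight into the block encoder.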

aaanyway, thanks a lot for your help! :) gotta get back to work

PS
If any interesting thoughts about the above come to mind, please tell me! %)

[Edited by - _under7_ on March 30, 2005 1:09:13 AM]

Quote:
Original post by _under7_
Quote:
Original post by DBX
The DXT formats are fully described in the DX SDK.


You call 4 tiny pages on the DDS format and 4 more on compressed textures "fully described"? Have you ever read them? I really doubt it.


Yes, and yes. I found it extremely helpful, although it was a few years ago (DX8) when I was working with DXT tools. They may have changed the documentation since (I haven't looked at it recently), but I'm pretty certain the DX8 docs had everything necessary.

Quote:
Original post by _under7_
Having fun already! :)
Thanks for getting me started; I really had no idea where to begin. Anyway, the problem I described is just part of a bigger thing. I've got a texture manager here that's supposed to load JPEG, PNG, or BMP textures, convert them to raw data, resize them depending on the currently chosen screen resolution, and then convert the result to a compressed DDS. Seems crazy, huh? The reason it's done this way is that the textures we use are huge, 2048x1536 for example. I need to minimize disk usage (that's why the textures are originally JPEGs) as well as system and video card memory (DXT compression comes in handy there). So, basically, if the user chooses 640x480, I'd resize and recompress all the textures, sparing memory and disk space (why keep several different versions of the textures on the HDD?). All of this must be done at runtime during the texture preload routine, which makes the problem even more fun.


You do realize that D3DXCreateTextureFromFileEx can do ALL of this for you already, right? It can scale and DXT-compress your textures as you load them. Just specify the dimensions and format you want.

xyzzy

Quote:
You do realize that D3DXCreateTextureFromFileEx can do ALL of this for you already, right? It can scale and DXT-compress your textures as you load them. Just specify the dimensions and format you want.
xyzzy


Here's the funny part: AFAIK, D3D scales textures to a power of 2 (in case your hardware doesn't support non-power-of-2 textures), and I don't need that...

Quote:
Original post by _under7_
Quote:
You do realize that D3DXCreateTextureFromFileEx can do ALL of this for you already, right? It can scale and DXT-compress your textures as you load them. Just specify the dimensions and format you want.
xyzzy


Here's the funny part: AFAIK, D3D scales textures to a power of 2 (in case your hardware doesn't support non-power-of-2 textures), and I don't need that...

There's a new flag you can pass that defeats that. I can't remember it off the top of my head, but it's in the docs.

Or you could manually create your texture at whatever size you want and then call D3DXLoadSurfaceFromFile. That would guarantee that your texture is exactly the size you want.

xyzzy

