xerzi

OpenGL DXT decompression



I'm following what this spec says (http://www.opengl.org/registry/specs/EXT/texture_compression_s3tc.txt), but it doesn't seem to be working.



#include <GL/gl.h>              // assumed include for GLenum / glTexImage2D

typedef unsigned int   uint;    // assumed typedefs so the snippet compiles
typedef unsigned short ushort;

struct rgb565
{
    short r : 5;
    short g : 6;
    short b : 5;

    bool operator>( const rgb565 & rhs )
    {
        // Compare the two endpoint colors as raw 16-bit values.
        return *(ushort*)this > *(ushort*)&rhs;
    }
};

struct rgba8
{
    unsigned char r, g, b, a;
};

struct dxt1_block
{
    rgb565 color0, color1;  // the two endpoint colors
    uint   code;            // 2 selector bits per texel, 16 texels
};

void foo( GLenum target, int mipmap, uint w, uint h, void* buffer )
{
    rgba8 *      pixels = new rgba8[ w * h ];
    dxt1_block * blocks = (dxt1_block*)buffer;

    int x_blocks = w / 4;
    int y_blocks = h / 4;

    for ( int y = 0; y < y_blocks; ++y )
    {
        for ( int x = 0; x < x_blocks; ++x )
        {
            dxt1_block & b = blocks[ y * x_blocks + x ];

            for ( int i = 0; i < 4; ++i )
            {
                for ( int j = 0; j < 4; ++j )
                {
                    int  p_index = (w * (y + i)) + (x + j);
                    uint type    = b.code & 3;

                    rgba8 & p = pixels[ p_index ];

                    switch ( type )
                    {
                    case 0:
                        p.r = b.color0.r;
                        p.g = b.color0.g;
                        p.b = b.color0.b;
                        p.a = 255;
                        break;
                    case 1:
                        p.r = b.color1.r;
                        p.g = b.color1.g;
                        p.b = b.color1.b;
                        p.a = 255;
                        break;
                    case 2:
                        if ( b.color0 > b.color1 )
                        {
                            p.r = (2 * b.color0.r + b.color1.r) / 3;
                            p.g = (2 * b.color0.g + b.color1.g) / 3;
                            p.b = (2 * b.color0.b + b.color1.b) / 3;
                        }
                        else
                        {
                            p.r = (b.color0.r + b.color1.r) / 2;
                            p.g = (b.color0.g + b.color1.g) / 2;
                            p.b = (b.color0.b + b.color1.b) / 2;
                        }
                        p.a = 255;
                        break;
                    case 3:
                        if ( b.color0 > b.color1 )
                        {
                            p.r = (b.color0.r + 2 * b.color1.r) / 3;
                            p.g = (b.color0.g + 2 * b.color1.g) / 3;
                            p.b = (b.color0.b + 2 * b.color1.b) / 3;
                            p.a = 255;
                        }
                        else
                        {
                            p.r = 0;
                            p.g = 0;
                            p.b = 0;
                            p.a = 0;
                        }
                        break;
                    }
                    b.code >>= 2;
                }
            }
        }
    }

    glTexImage2D( target, mipmap, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );
    delete[] pixels;
}


It seems to be a problem with how I'm calculating where the pixels go; if the image is 2x2 or 1x1, nothing is written to the pixels buffer at all.

I can't seem to find much on DXT decompression. I know there are some libraries (libsquish), but the way they decompress is nothing like how the article describes it.

Thanks for any help.

DXT works on 4x4 pixel blocks. When an image has other dimensions, you have to ignore the texels outside the valid area. The problem with the code is that it computes the workload by dividing the width and height by 4, which cuts off the pixels in the partially filled 4x4 blocks. In the 2x2 and 1x1 cases there are no blocks left to process at all.
Most people only use textures with "sensible" resolutions, and so most end up not implementing the case where the texture resolution isn't a multiple of 4.
The details are mentioned in the "Implementation Note" section of the spec.
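
For example, here is a minimal sketch of the adjusted bookkeeping (variable names assumed from your code above); decoding the two code bits per texel stays exactly the same:

// Sketch: round block counts up and skip texels that fall outside the
// image, so 2x2 and 1x1 mipmaps still get decoded.
typedef unsigned int uint;

void visit_texels( uint w, uint h )
{
    uint x_blocks = (w + 3) / 4;    // a 2x2 image is 1 block, not 0
    uint y_blocks = (h + 3) / 4;

    for ( uint y = 0; y < y_blocks; ++y )
        for ( uint x = 0; x < x_blocks; ++x )
            for ( uint i = 0; i < 4; ++i )      // row within the block
                for ( uint j = 0; j < 4; ++j )  // column within the block
                {
                    uint px = x * 4 + j;        // note the * 4: block coords -> pixel coords
                    uint py = y * 4 + i;
                    if ( px >= w || py >= h )
                        continue;               // texel exists in the block but not in the image;
                                                // remember to still consume its 2 code bits
                    uint p_index = w * py + px; // destination in the RGBA8 buffer
                    // ... decode the two code bits for this texel as before ...
                }
}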

I have a very thorough description of a routine that compresses DXT images with lots of source code here:
http://lspiroengine.com/?p=312

Seeing how they are put together should help you understand how to take them apart.

A few notes:

  1. Blocks are never smaller than 4×4 in memory (or on disk). If the image is 6×5, then you have 2 blocks by 2 blocks, with the upper-left using 4×4 pixels, the upper-right using 2×4 pixels (of the full 4×4 chunk that is in memory), the lower-left being 4×1, and the lower-right being 2×1.
  2. With the proper extensions for loading DXT images in OpenGL, you are allowed to pass any size for the image. In DirectX, however, textures must be power-of-2 in both dimensions (not just a multiple of 4, but a full power of 2). Armed with this knowledge, you can decide whether or not you want to add artificial limitations such as these to your tool chain for the sake of higher compatibility.
  3. You don’t get hardware-accelerated decompression if you decompress the texture yourself. If this is only for learning, or only to support machines that do not have the DXT extensions (I am looking at you, OpenGL ES), carry on. If you have mistakenly understood that you need to decompress them and send them as a regular texture, stop; you can upload the blocks directly (see the sketch after this list).
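
Regarding point 3, a minimal sketch of handing the blocks to the driver untouched, assuming GL_EXT_texture_compression_s3tc is exposed and glCompressedTexImage2D is available (core since OpenGL 1.3, though on Windows it has to be fetched as an extension function pointer):

// Hedged sketch: upload DXT1 blocks directly instead of decompressing.
// DXT1 stores 8 bytes per 4x4 block, and block counts round up for
// non-multiple-of-4 sizes.
#include <GL/gl.h>
#include <GL/glext.h>   // for GL_COMPRESSED_RGBA_S3TC_DXT1_EXT

void upload_dxt1( GLenum target, GLint mipmap, GLsizei w, GLsizei h, const void* blocks )
{
    GLsizei size = ((w + 3) / 4) * ((h + 3) / 4) * 8;   // total payload in bytes
    glCompressedTexImage2D( target, mipmap,
                            GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                            w, h, 0, size, blocks );
}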



L. Spiro

The original patent can be read here:
http://www.google.com/patents?id=QEEZAAAAEBAJ
It does not explain how the color bits are interpolated, but there are other patents that do.
nVidia holds a patent on a more complex method, and there may be restrictions on using 3/8 and 5/8 instead of the 1/3 and 2/3 weights proposed in the original patent.

But for using 0/3, 1/3, 2/3, and 3/3, I know of no patent issues (not to imply I know all of the patents out there on this subject).
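
For reference, the two weightings being discussed look like this on a single 8-bit channel (a sketch; c0 and c1 are assumed to be endpoint channels already expanded to 8 bits):

// The 1/3-2/3 weights from the original patent versus the 3/8-5/8 variant.
unsigned char lerp_thirds( unsigned char c0, unsigned char c1 )
{
    return (unsigned char)((2 * c0 + c1) / 3);       // 2/3 c0 + 1/3 c1
}

unsigned char lerp_eighths( unsigned char c0, unsigned char c1 )
{
    return (unsigned char)((5 * c0 + 3 * c1) / 8);   // 5/8 c0 + 3/8 c1
}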


L. Spiro
