AGPX

HDR Texture Compression



Hi to all,

I need to introduce HDR textures into my DX9 engine. IMHO, the best texture format for HDR images is A16R16G16B16. The problem is that every texel occupies 8 bytes — an extremely huge quantity of memory — so compression techniques are necessary.

Looking at the DX9 demos, I found a nice compression method called RGBE8. Basically, it rewrites every channel (r, g, b) in the form m * 2^e, where m is the mantissa of the channel and e is the exponent (stored in the alpha channel). The exponent is common to all three channels. This method requires 4 bytes per texel, which is an improvement over the naive format.

I can extend this compression down to 1.5 bytes per texel in the following way: I compress the RGB channels with DXT1 (0.5 bytes per texel), then I use another texture to store the exponent uncompressed (this improves precision, and exponent precision is crucial). Total: 1.5 bytes per texel — quite good compression. However, there is a problem: this method only works with point filtering. Other filtering modes, like bilinear, break it. I have tried applying point filtering to the exponent texture (to fetch it exactly) and bilinear to the RGB channels... but this still shows too many artifacts.

I have another idea: split the 16-bit-per-channel texture into two 8-bit-per-channel textures (a low-byte and a high-byte texture) and apply DXT1 to both. Total: 1 byte per texel! Problem: an error in the low-byte texture is barely noticeable, but an error in the high-byte texture (DXT1 is a lossy compression, as you know) is quite devastating... HELL! A workaround is to store the low-byte texture compressed with DXT1 and the high-byte texture uncompressed. Total: 3.5 bytes per texel. Here bilinear filtering causes no problems. Good quality, but it's not great compression... I'd be happy if I could compress it a bit more (down to 2 bytes per texel or less).
I'd be very happy if you cool guys could suggest other methods. Thanks in advance, - AGPX
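The low-byte/high-byte split described above can be sketched in a few lines. This is a minimal illustration of the idea (the helper names are mine, not from any engine API): each 16-bit channel value is split into two bytes that go to separate textures, and reassembled at decode time.

```cpp
#include <cstdint>

// Split one 16-bit channel value into the two 8-bit planes.
void splitChannel(uint16_t value, uint8_t &low, uint8_t &high)
{
    low  = static_cast<uint8_t>(value & 0xFF);        // goes to the low-byte (DXT1) texture
    high = static_cast<uint8_t>((value >> 8) & 0xFF); // goes to the high-byte texture
}

// Reassemble at decode time (what the pixel shader would have to do).
uint16_t mergeChannel(uint8_t low, uint8_t high)
{
    return static_cast<uint16_t>(low) | (static_cast<uint16_t>(high) << 8);
}
```

The asymmetry described in the post follows directly from this layout: a lossy error in `low` perturbs the result by at most 255, while the same error in `high` is scaled by 256.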

Currently, I do the RGBE encoding with the following code:

#include <algorithm> // std::max
#include <cmath>     // ceilf, log2f, powf

// Computes a shared exponent for the three channels and returns it;
// the normalized mantissas are written to encodedR/G/B.
int encode(float r, float g, float b,
           float &encodedR,
           float &encodedG,
           float &encodedB)
{
    const float maxComponent = std::max(std::max(r, g), b);
    const int exp = (int)ceilf(log2f(maxComponent)); // assumes maxComponent > 0
    const float divisor = powf(2.0f, (float)exp);
    encodedR = r / divisor;
    encodedG = g / divisor;
    encodedB = b / divisor;
    return exp;
}

So: r = encodedR * 2^exp, g = encodedG * 2^exp and b = encodedB * 2^exp;
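For completeness, a decode counterpart to the `encode()` above could look like this (the function name is mine, not from the original post); `ldexpf(m, e)` computes `m * 2^e` directly:

```cpp
#include <cmath> // ldexpf

// Rebuild each channel as mantissa * 2^exp, inverting encode() above.
void decode(int exp, float encodedR, float encodedG, float encodedB,
            float &r, float &g, float &b)
{
    r = ldexpf(encodedR, exp); // encodedR * 2^exp
    g = ldexpf(encodedG, exp);
    b = ldexpf(encodedB, exp);
}
```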

I found that it works better if I return exp * 16 (instead of exp), but some halo effects appear on high-contrast edges.

Anyway, I don't know whether it's worthwhile to do bilinear filtering manually in the pixel shader...
It sounds too nasty and slow...

The following image is a shot from my editor. I use RGBE normally (without the multiply by 16):



The following is with the exponent multiplication:



The last one shows halos and artifacts (again with multiplication by 16):



The exponent is clamped to the range [-8, 7] when multiplied by 16, so the product is in the range [-128, 112]. I also add 128 to remove the sign ([0, 240]), so it fits in a single byte. Yes, the dynamic range is smaller than before, but the images look much better.
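The exponent packing just described can be written out explicitly. A minimal sketch, assuming the clamp/scale/bias scheme above (helper names are mine):

```cpp
#include <algorithm> // std::max, std::min
#include <cstdint>

// Clamp exp to [-8, 7], scale by 16 and bias by 128 so the
// result fits in one unsigned byte, covering [0, 240].
uint8_t packExponent(int exp)
{
    exp = std::max(-8, std::min(7, exp));
    return static_cast<uint8_t>(exp * 16 + 128);
}

// Invert the bias and scale to recover the integer exponent.
int unpackExponent(uint8_t stored)
{
    return (static_cast<int>(stored) - 128) / 16;
}
```

The factor of 16 leaves four fractional bits of exponent resolution in the stored byte, which is presumably why the filtered images look smoother at the cost of dynamic range.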

I don't understand the nature of the artifacts at all. The halo should be due to bilinear filtering; the problem is that filtering should happen AFTER decoding, not before.

Thanks,

- AGPX

Hi again,

I have fixed the artifact problem: it was due to DXT1 compression of the RGB channels (not the exponent). However, the halo still occurs...
Currently I have two textures: one for RGB and the other for the exponent.
I have tried applying bilinear filtering only to the RGB texture, disabling it for the exponent. Here is the result:



The halo is even more evident.

The problem is that mantissas belonging to two quite different exponents cannot be interpolated linearly. ;(
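A small worked example makes this concrete. Suppose two neighbouring texels hold A = 0.9 * 2^0 = 0.9 and B = 0.6 * 2^4 = 9.6; the correct value halfway between them is (0.9 + 9.6) / 2 = 5.25. Filtering mantissa and exponent separately and decoding afterwards gives a very different answer (the numbers here are mine, chosen to illustrate the failure):

```cpp
#include <cmath>

float lerpf(float a, float b, float t) { return a + (b - a) * t; }

// What the hardware effectively does when it filters the encoded
// channels before the shader decodes them.
float interpolateEncoded()
{
    float mantissa = lerpf(0.9f, 0.6f, 0.5f);   // 0.75
    float exponent = lerpf(0.0f, 4.0f, 0.5f);   // 2.0
    return mantissa * std::pow(2.0f, exponent); // 0.75 * 4 = 3.0, not 5.25
}
```

The filtered result undershoots the true value by more than 40%, which matches the halos seen around high-contrast edges, where neighbouring exponents differ the most.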

My research continues...

Any help is greatly appreciated, thanks.

- AGPX

Hi again,

I'm going to go crazy... without a doubt. :wacko:

I have changed my method. I split my 16-bit integer texture (A16R16G16B16) into two 8-bit-per-channel textures (one for the low byte, the other for the high byte).
The low texture is compressed with DXT3; the other one is stored uncompressed (A8R8G8B8). According to my calculations, this method should be immune to the bilinear filtering issue. And it is... but only with the Reference Rasterizer! With the HAL device, it doesn't work. :blink:
WHY??????

Take a quick look at the pixel shader I use to reassemble the two 8-bit textures into a 16-bit one:

sampler2D lightMapLow;
sampler2D lightMapHigh;

float4 Expose(in float4 rgba, float exposure)
{
    return 1 - exp(-rgba * exposure);
}

float4 texHDR2D(in sampler2D texLow, in sampler2D texHigh, in float2 tex)
{
    return tex2D(texLow, tex) + tex2D(texHigh, tex) * 256.0;
}

float4 ps_main(float2 inTex: TEXCOORD0) : COLOR0
{
    return Expose(texHDR2D(lightMapLow, lightMapHigh, inTex), 1.2);
}

And now take a look at the results using Reference Rasterizer:



And the following is with HAL device on Sapphire Radeon 9800 Pro (128 Mb, 256 bit):



HOW THE HELL IS THIS POSSIBLE?

Please help me!

P.S.: I have tried using A8R8G8B8 for the low texture instead of DXT3, but nothing changed!

P.S.2: It works on the HAL only if I switch from bilinear filtering to point filtering!

P.S.3: Here you can see the low & high textures generated by my lightmapper:

LOW:



HIGH:



The high texture is not all zeros; some pixels are set to 1, 2 and 3. The textures appear to be ok.

[Edited by - AGPX on September 10, 2005 5:43:30 PM]

Hi,

all the problems are now solved.

I'm posting my solution here to help other people with a similar problem.
Basically, the problem is due to the low precision of the interpolators:
the reference rasterizer is more precise than the HAL device.
So there is nothing else to do: you have to use POINT filtering and do the bilinear filtering yourself in the pixel shader. Here is the code to perform the filtering (kindly given to me by a guy on #flipcode. Thanks!)

Here is the pixel shader:

float4 bilinear(in sampler2D s, in float2 t)
{
    const float2 wh = {256.0, 256.0};  // texture dimensions
    const float2 dwh = 1 / wh;         // size of one texel in UV space
    float2 dxy = t * wh - floor(t * wh);
    float4 a = lerp(tex2D(s, t + float2(0, 0)),     tex2D(s, t + float2(dwh.x, 0)),     dxy.x);
    // Note: the diagonal sample must offset by (dwh.x, dwh.y), not (dwh.y, dwh.y).
    float4 b = lerp(tex2D(s, t + float2(0, dwh.y)), tex2D(s, t + float2(dwh.x, dwh.y)), dxy.x);
    return lerp(a, b, dxy.y);
}
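For reference, the same filtering written on the CPU, where the four point fetches are explicit. This is a sketch under my own assumptions (a row-major single-channel grid with clamped addressing; the `Tex` type is hypothetical, standing in for point-filtered `tex2D`):

```cpp
#include <algorithm> // std::max, std::min
#include <cmath>     // std::floor

struct Tex {
    int w, h;
    const float *data; // row-major, w * h texels
    // Point fetch with clamp-to-edge addressing.
    float texel(int x, int y) const {
        x = std::min(std::max(x, 0), w - 1);
        y = std::min(std::max(y, 0), h - 1);
        return data[y * w + x];
    }
};

float lerp1(float a, float b, float t) { return a + (b - a) * t; }

// Manual bilinear filter: four point fetches blended by the
// fractional texel position, as in the shader above.
float bilinear(const Tex &s, float u, float v)
{
    float x = u * s.w, y = v * s.h;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    float a = lerp1(s.texel(x0, y0),     s.texel(x0 + 1, y0),     fx);
    float b = lerp1(s.texel(x0, y0 + 1), s.texel(x0 + 1, y0 + 1), fx);
    return lerp1(a, b, fy);
}
```

The key point is that the blend happens on values the shader has already fetched, so it can be applied after decoding the HDR value instead of before, which is exactly what fixes the halo.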

This way, the RGBE encoding also works well, so I've switched to it, since it requires less storage memory. Well, at least... only if I can compress it with DXT1-5 without introducing artifacts... I will investigate that tomorrow and, finally, I think I'll write a tutorial on HDR compression... it could be helpful, until hardware manufacturers introduce compression for 16-bit textures! (To tell the whole truth, some 16-bit FourCC formats exist, but they are largely unimplemented.)

Thanks to all for help. :D

Quote:
Original post by AGPX
And now take a look at the results using Reference Rasterizer:



And the following is with HAL device on Sapphire Radeon 9800 Pro (128 Mb, 256 bit):




Did you try this out on a GeForce FX/6? I'm really interested to see how it would look...

Hi,

I don't have any nVidia video cards, but I think they would work better than ATI, because it seems nVidia uses a full 32 bits for the internal representation, whereas ATI uses only 24 bits.

Bye.
