Bit packing for HDR textures


I'm currently doing some work in UDK and I want to be able to use HDR textures. The problem is that UDK doesn't support anything beyond RGBA8888, so 16-bit or float textures are out of the question. I've tried formats like RGBE, RGBM and LogLuv. LogLuv has probably given me the best results, but they all suffer from the same problem: they can't be linearly sampled without producing banding artifacts. It's easy to see why. In the case of LogLuv, the channel that holds the LSB portion of the luminance is full of harsh transitions, since it wraps back to zero every time the higher-order channel increments, and bilinear filtering blends right across those discontinuities.

Example (the LSB channel): http://s20.postimage.org/5789pshwt/lsb.png
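To make the source of those transitions concrete, here is a minimal sketch (my own illustration, not code from the thread or from UDK; splitLuminance16 and the [0,1) remapping are assumptions) of the straight MSB/LSB split that LogLuv-style encoders perform, written in C++ rather than shader code:

#include <cstdint>

// Split a [0,1) log-luminance value into two 8-bit channels.
// The high byte varies smoothly across the image, but the low byte is
// effectively frac(value * 256) scaled to 0..255, so it snaps from 255
// back to 0 every time the high byte increments; these are exactly the
// harsh transitions that bilinear filtering blends across.
void splitLuminance16(float logLum01, uint8_t& msb, uint8_t& lsb)
{
    uint16_t q = static_cast<uint16_t>(logLum01 * 65535.0f + 0.5f); // quantize to 16 bits
    msb = static_cast<uint8_t>(q >> 8);   // smooth, low-frequency channel
    lsb = static_cast<uint8_t>(q & 0xFF); // sawtooth channel: 0..255, 0..255, ...
}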

My question is whether there is some way of encoding luminance with greater than 8-bit precision into two 8-bit channels such that neither channel contains these kinds of banding patterns with harsh transitions.


The common solution is to use two DXT textures, at least if you're going for compression. You should have a look at this, or this.
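In case it helps to see the shape of it, the rough idea behind the two-texture approach is something like the following (a simplified RGBM-style split of my own, with kMaxRange as an assumed HDR ceiling; the schemes in the linked articles differ in the details and in how they fight DXT block artifacts):

#include <algorithm>

struct Float3 { float r, g, b; };

// One texture stores the color with its largest component divided out, so it
// stays in [0,1] and compresses well; the other stores that largest component
// as a per-texel scale. Recombining the two in the shader restores the HDR value.
constexpr float kMaxRange = 64.0f; // assumed upper bound on the HDR data

void encodeTwoTex(const Float3& hdr, Float3& colorTex, float& scaleTex)
{
    float m = std::max(std::max(hdr.r, hdr.g), std::max(hdr.b, 1e-6f));
    colorTex = { hdr.r / m, hdr.g / m, hdr.b / m }; // first texture (e.g. DXT1)
    scaleTex = m / kMaxRange;                        // second texture
}

Float3 decodeTwoTex(const Float3& colorTex, float scaleTex)
{
    float m = scaleTex * kMaxRange;
    return { colorTex.r * m, colorTex.g * m, colorTex.b * m };
}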

Unfortunately, UDK does not allow you to import DDS textures, which makes it impossible to use a custom DXT5 compressor the way they do. However, I could probably store it all in a single RGBA8 texture if I leave out W and reconstruct it in the shader. Hopefully that would yield good results.
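Purely as an illustration of the "leave one channel out and recompute it in the shader" idea (what W actually is depends on the encoding used in the linked articles), the reconstruction is trivial whenever the encoding guarantees the components are normalized, e.g. they sum to one the way CIE chromaticity coordinates do:

// Illustration only: if the encoding guarantees u + v + w == 1, then W never
// needs to be stored; the shader rebuilds it from the two channels that were
// kept, freeing up a channel in the single RGBA8 texture.
float reconstructW(float u, float v)
{
    return 1.0f - u - v;
}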

