Chris_F

Packing Float into RGBA Texture


So I've looked around, and there is a lot of info to be found on this subject, but it is only for encoding floats in the range [0, 1], and I want to encode floats of arbitrary magnitude. Can you tell me if my method is sound?

 

On the CPU:

glm::ivec4 FloatToRGBA(float input)
{
    // frexp splits input into a significand in [0.5, 1) and an exponent,
    // so that input == significand * 2^exponent.
    int exponent;
    float significand = frexp(input, &exponent);
    // Scale the significand up to 24 bits (16777215 == 2^24 - 1).
    int sig_int = (int)glm::round(significand * 16777215.0f);
    glm::ivec4 output;
    output.x = exponent + 128;               // biased exponent; assumes it fits in [-128, 127]
    output.y = sig_int & 0x000000FF;         // low byte
    output.z = (sig_int & 0x0000FF00) >> 8;  // middle byte
    output.w = (sig_int & 0x00FF0000) >> 16; // high byte
    return output;
}

 

On the GPU:

float RGBAtoFloat(float4 input)
{
    // Channels arrive normalized to [0, 1]. The weights must match the byte
    // order used by FloatToRGBA (y = low byte, w = high byte): low * 255,
    // middle * 65280 (255 * 256), high * 16711680 (255 * 65536).
    float significand = dot(input.yzw, float3(255.0f, 65280.0f, 16711680.0f)) / 16777215.0f;
    return ldexp(significand, input.x * 255.0f - 128.0f);
}
Edited by Chris_F


Depends on what you mean by "sound." If you want to pack RGBA into a float like you would an int, there are a couple of problems here. First, floats have 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fractional part, and you're not using all of those bits. You're trying to stuff 32 bits' worth of information into fewer than 32 bits, so you're bound to lose some information. Second, you can potentially create an invalid float value (NaN) which, depending on your system and compiler, may be a signaling NaN that can crash your program.

> Depends on what you mean by "sound." If you want to pack RGBA into a float like you would an int, there are a couple of problems here. First, floats have 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fractional part, and you're not using all of those bits. You're trying to stuff 32 bits' worth of information into fewer than 32 bits, so you're bound to lose some information. Second, you can potentially create an invalid float value (NaN) which, depending on your system and compiler, may be a signaling NaN that can crash your program.

 

I think you are confused. I want to pack a float into a RGBA8, not pack an RGBA8/int into a float.

> I want to pack a float into a RGBA8

I'm going to assume that by "RGBA8" you mean a vec4 with RGBA/xyzw components in the range [0, 255] (if that isn't correct, you'll have to explain what you mean by RGBA8). FloatToRGBA is still only using 31 bits of the input, and if you want 32 bits of information in the end result, you're going to have to use all 32 bits.

 

> not pack an RGBA8/int into a float.

Either way you look at it, RGBAtoFloat and FloatToRGBA only use 31 bits of the 32-bit float (the sign bit is dropped). If you're okay with that, then I don't see any immediate problems (does anyone else?), but I wanted to make you aware of the fact that only 31 bits are being used.
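If exact 32-bit round-tripping (sign included) ever mattered, one alternative is to copy the float's raw IEEE-754 bit pattern into the four bytes rather than splitting it with frexp. A rough C++ sketch (function names are made up for illustration; the shader side would then need integer bit operations, which may not be available here):

```cpp
#include <cstdint>
#include <cstring>

// Pack all 32 bits of the float verbatim: copy its IEEE-754 bit pattern
// and split it into four bytes. Lossless, including the sign bit.
void FloatToBytes(float f, unsigned char out[4]) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined, unlike a pointer cast
    out[0] = bits & 0xFF;
    out[1] = (bits >> 8) & 0xFF;
    out[2] = (bits >> 16) & 0xFF;
    out[3] = (bits >> 24) & 0xFF;
}

// Reassemble the bytes and reinterpret them as the original float.
float BytesToFloat(const unsigned char in[4]) {
    uint32_t bits = (uint32_t)in[0] | ((uint32_t)in[1] << 8) |
                    ((uint32_t)in[2] << 16) | ((uint32_t)in[3] << 24);
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

The catch, as discussed in this thread, is decoding on the GPU: without integer arithmetic in the shader there is no clean way to put the exponent bits back together, which is what makes the frexp/ldexp approach attractive.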

Don't you have floating-point channels available? That would be a better choice than manually packing/unpacking your IEEE float. At least I certainly know there are general-purpose floating-point texture formats in DirectX, OpenGL and OpenCL (one, two, three or four 32-bit channels, as needed) so why not glm? Or are you trying to save memory?

Edited by Bacterius

> I want to pack a float into a RGBA8

> I'm going to assume that by "RGBA8" you mean a vec4 with RGBA/xyzw components in the range [0, 255] (if that isn't correct, you'll have to explain what you mean by RGBA8). FloatToRGBA is still only using 31 bits of the input, and if you want 32 bits of information in the end result, you're going to have to use all 32 bits.

> not pack an RGBA8/int into a float.

> Either way you look at it, RGBAtoFloat and FloatToRGBA only use 31 bits of the 32-bit float (the sign bit is dropped). If you're okay with that, then I don't see any immediate problems (does anyone else?), but I wanted to make you aware of the fact that only 31 bits are being used.

 

 

Actually, I should probably have mentioned that I don't need to support negatives.

 

> Don't you have floating-point channels available? That would be a better choice than manually packing/unpacking your IEEE float. At least I certainly know there are general-purpose floating-point texture formats in DirectX, OpenGL and OpenCL (one, two, three or four 32-bit channels, as needed) so why not glm? Or are you trying to save memory?

 

The software I'm using doesn't support FP16 or FP32 textures.


If I understand you correctly, you want to take a float, encode it somehow in the four RGBA channels (8 bits each), and then read it back on the GPU as one float again?

If that's so, RGBE encoding could do (but you'll lose a lot of precision), or this trick could do. It only works for floats in the range [0, 1], but you can use a multiplier (divide by a large number when converting to RGBA, multiply when getting the float back).

AFAIK there is no "perfect" solution that preserves full precision in this kind of conversion, at least not with how GPUs work (assuming there are no integer arithmetic operations available).
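For what it's worth, here is a rough C++ sketch of the RGBE idea in the style of the Radiance .hdr format: three 8-bit mantissas sharing one exponent byte. The function names are made up for illustration, and the 8 bits per channel are exactly where the precision loss comes from:

```cpp
#include <algorithm>
#include <cmath>

// Encode an RGB color as RGBE: scale all three channels by the same
// power of two, chosen from the largest channel, and store that
// exponent (biased by 128) in the fourth byte.
void RGBToRGBE(float r, float g, float b, unsigned char out[4]) {
    float m = std::max(r, std::max(g, b));
    if (m < 1e-32f) {                      // too small: encode as black
        out[0] = out[1] = out[2] = out[3] = 0;
        return;
    }
    int e;
    float scale = std::frexp(m, &e) * 256.0f / m;  // == 256 / 2^e
    out[0] = (unsigned char)(r * scale);
    out[1] = (unsigned char)(g * scale);
    out[2] = (unsigned char)(b * scale);
    out[3] = (unsigned char)(e + 128);     // assumes exponent fits in a byte
}

// Decode: multiply each mantissa byte by 2^(e-8).
void RGBEToRGB(const unsigned char in[4], float& r, float& g, float& b) {
    if (in[3] == 0) { r = g = b = 0.0f; return; }
    float f = std::ldexp(1.0f, (int)in[3] - (128 + 8));
    r = in[0] * f;
    g = in[1] * f;
    b = in[2] * f;
}
```

For the single-float case in this thread, RGBE has no real advantage over the frexp/ldexp scheme above; it pays off when three related values (an HDR color) can share one exponent.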
