masterbubu

RGBA to Float - Precision


Hi,

 

I have run into a problem with the well-known approach for encoding/decoding float <-> RGBA.

 

The web is full of HLSL/GLSL samples of how to do it.

 

I have implemented the following HLSL (taken from Unity) in C++ and tested it out:

// Encoding/decoding [0..1) floats into 8 bit/channel RGBA. Note that 1.0 will not be encoded properly.
inline float4 EncodeFloatRGBA( float v )
{
	float4 kEncodeMul = float4(1.0, 255.0, 65025.0, 16581375.0);
	float kEncodeBit = 1.0/255.0;
	float4 enc = kEncodeMul * v;
	enc = frac (enc);
	enc -= enc.yzww * kEncodeBit;
	return enc;
}

inline float DecodeFloatRGBA( float4 enc )
{
	float4 kDecodeDot = float4(1.0, 1/255.0, 1/65025.0, 1/16581375.0);
	return dot( enc, kDecodeDot );
}

I'm normalizing all 8-bit values into the [0..1] range by dividing by 255.

 

 

RGBA = 61,0,0,191 --> float values (divided by 255): [0.239215687, 0, 0, 0.749019623]

Encoding worked properly.

 

Then I raised the R component to 66 (float value = 0.247058824).

 

When encoding 66,0,0,191 the result is wrong: the .a component comes back with the wrong value (0.0).

 

Obviously there is precision loss, because when the code was tested with doubles the problem did not occur.

 

My question: since this approach is so common, mostly used in deferred rendering to pack normals and depth into an RGBA texture (32-bit), how is this problem avoided?

 

Am I missing something?

 

 

 


Assuming you have GLSL 3.30 or newer, this might solve your problem:

 

https://www.opengl.org/sdk/docs/man4/html/floatBitsToInt.xhtml

https://www.opengl.org/sdk/docs/man4/html/intBitsToFloat.xhtml

 

EDIT: Although this is just packing int <-> float, it means you can store four 8-bit values in a float and then restore them. Like:

float encode(float4 color)
{
    int rgba = (int(color.x * 255.0) << 24) + (int(color.y * 255.0) << 16) + (int(color.z * 255.0) << 8) + int(color.w * 255.0);
    return intBitsToFloat(rgba);
}

float4 decode(float value)
{
    int rgba = floatBitsToInt(value);
    float r = float((rgba >> 24) & 0xff) / 255.0; // mask after shifting: >> on a negative int sign-extends
    float g = float((rgba & 0x00ff0000) >> 16) / 255.0;
    float b = float((rgba & 0x0000ff00) >> 8) / 255.0;
    float a = float(rgba & 0x000000ff) / 255.0;
    return float4(r, g, b, a);
}

Note, there might be some typos (as I wrote this in a hurry), but you should get the idea from this.

Edited by Vilem Otte


As this approach is so common, mostly used in deferred rendering to pack normals and depth into an RGBA texture (32-bit), how is this problem avoided?

It's not common.
It used to be used in the first deferred renderers because, at the time:
* GPUs sucked at integer math and bitwise operations, forcing you to do all your packing via floating-point math.
* Some GPUs required all currently bound render targets to share the same format -- so if you used an RGBA8 for colour, you couldn't use an R32 or an RG16 for normals/etc. at the same time.

These days, it's perfectly fine to write bitwise/integer packing routines, and you can also use different texture formats for each 'layer' of your gbuffer.

 

FWIW, I've only ever used the 24-bit version of that float->RGB packing routine, not the 32-bit one that you've posted :)

 

[edit] Also you shouldn't write depth into your gbuffer -- you should simply read from the actual zbuffer afterwards.

Edited by Hodgman


I'll just put code here for 32-bit precision. Don't compare a 32-bit float to a 64-bit float.


vec4 pack(float depth)
{
    const vec4 bitSh = vec4(256.0 * 256.0 * 256.0,
                            256.0 * 256.0,
                            256.0,
                            1.0);
    const vec4 bitMsk = vec4(0.0,
                             1.0 / 256.0,
                             1.0 / 256.0,
                             1.0 / 256.0);
    vec4 comp = fract(depth * bitSh);
    comp -= comp.xxyz * bitMsk;
    return comp;
}


uniform sampler2D shadow_map; // the RGBA texture written with pack()

float unpack(vec2 pos)
{
    vec4 packedZValue = texture2D(shadow_map, pos); // each channel is read back as 0..1

    const vec4 bitShifts = vec4(1.0 / (256.0 * 256.0 * 256.0),
                                1.0 / (256.0 * 256.0),
                                1.0 / 256.0,
                                1.0);
    float shadow = dot(packedZValue, bitShifts);

    return shadow;
}

