Hi,
I'm having a problem with the well-known approach for encoding/decoding float <-> RGBA.
The web is full of HLSL/GLSL samples of it.
I have implemented the following HLSL (taken from Unity) in C++ and tested it out:
// Encoding/decoding [0..1) floats into 8 bit/channel RGBA. Note that 1.0 will not be encoded properly.
inline float4 EncodeFloatRGBA( float v )
{
    float4 kEncodeMul = float4(1.0, 255.0, 65025.0, 16581375.0);
    float kEncodeBit = 1.0/255.0;
    float4 enc = kEncodeMul * v;
    enc = frac (enc);
    enc -= enc.yzww * kEncodeBit;
    return enc;
}

inline float DecodeFloatRGBA( float4 enc )
{
    float4 kDecodeDot = float4(1.0, 1/255.0, 1/65025.0, 1/16581375.0);
    return dot( enc, kDecodeDot );
}
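For reference, this is roughly how I ported it to C++ for testing. It is only a minimal sketch: the Vec4 struct, the per-component Frac() helper, and the templating on the scalar type are my own additions (so the same code can be run with float and with double), not part of the Unity original.

#include <cmath>

template <typename T>
struct Vec4 { T x, y, z, w; };

// Per-component frac(), matching HLSL frac() for non-negative inputs
template <typename T>
Vec4<T> Frac(const Vec4<T>& v)
{
    return { v.x - std::floor(v.x),
             v.y - std::floor(v.y),
             v.z - std::floor(v.z),
             v.w - std::floor(v.w) };
}

template <typename T>
Vec4<T> EncodeFloatRGBA(T v)
{
    const T kEncodeBit = T(1) / T(255);
    // kEncodeMul * v, spelled out per component
    Vec4<T> enc = { v * T(1), v * T(255), v * T(65025), v * T(16581375) };
    enc = Frac(enc);
    // enc -= enc.yzww * kEncodeBit; each component reads the not-yet-modified neighbour
    enc.x -= enc.y * kEncodeBit;
    enc.y -= enc.z * kEncodeBit;
    enc.z -= enc.w * kEncodeBit;
    enc.w -= enc.w * kEncodeBit;
    return enc;
}

template <typename T>
T DecodeFloatRGBA(const Vec4<T>& enc)
{
    // dot(enc, float4(1, 1/255, 1/65025, 1/16581375))
    return enc.x
         + enc.y / T(255)
         + enc.z / T(65025)
         + enc.w / T(16581375);
}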
I'm normalizing all numbers into the 8-bit range.
RGBA = 61,0,0,191 --> float values (divided by 255): [0.239215687, 0, 0, 0.749019623]
Encoding worked properly.
Then I started raising the R component to 66 (float val = 0.247058824).
When encoding 66,0,0,191 the result is wrong: the .A component receives a wrong value (0.0).
Obviously there is precision loss, because when I tested the code with doubles the problem did not occur.
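To illustrate, this is roughly the round trip I'm describing, using the port sketched above. My assumption here is that the four bytes are first decoded into one scalar with DecodeFloatRGBA and then re-encoded; the exact channel values you get back can differ slightly depending on compiler and FP settings, but the float/double difference is the point.

#include <cstdio>

template <typename T>
void RoundTrip(int r, int g, int b, int a)
{
    // Bytes -> normalized channels -> one packed scalar -> channels again
    Vec4<T> rgba = { T(r) / T(255), T(g) / T(255), T(b) / T(255), T(a) / T(255) };
    T       v    = DecodeFloatRGBA(rgba);
    Vec4<T> out  = EncodeFloatRGBA(v);
    std::printf("%3d,%3d,%3d,%3d  ->  %.6f  %.6f  %.6f  %.6f\n",
                r, g, b, a,
                (double)out.x, (double)out.y, (double)out.z, (double)out.w);
}

int main()
{
    RoundTrip<float>(61, 0, 0, 191);   // the case that worked
    RoundTrip<float>(66, 0, 0, 191);   // here the alpha component comes back as ~0.0 for me
    RoundTrip<double>(66, 0, 0, 191);  // same bytes with doubles: no problem
    return 0;
}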
My question: since this approach is so common, mostly used in deferred rendering to pack normals and depth into a 32-bit RGBA texture, how is this problem avoided?
Am I missing something?