I'm having precision problems in my HLSL pixel shader (PS) code that converts a floating-point number (range [0.0, 1.0]) to RGBA and then back to float. I use floating-point textures on cards that support them, and everything works great there, but I now need this algorithm to work on the rest of the cards.
My packing algorithm is Ysaneya's (link):
float4 packFactors = float4( 256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0 );
float4 bitMask = float4( 0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0 );

float4 Pack( float rawValue )
{
    // Scale so each channel's fraction begins at a successive byte boundary
    // (x is scaled the most, so it ends up with the least significant bits).
    float4 packedValue = frac( packFactors * rawValue );
    // Each channel still contains all the bits of the finer channels too;
    // subtract those duplicates so every channel keeps only its own byte.
    packedValue -= packedValue.xxyz * bitMask;
    return packedValue;
}
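As a worked example, take rawValue = 0.501953125 (i.e. 0.5 + 0.5/256, which is exact in binary): packFactors * rawValue = (8421376.0, 32896.0, 128.5, 0.501953125), so frac() leaves (0.0, 0.0, 0.5, 0.501953125). The bitMask subtraction then strips from w the 0.5/256 already captured in z, giving (0.0, 0.0, 0.5, 0.5), and the unpacking dot product below rebuilds 0.5/256 + 0.5 = 0.501953125 exactly.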
This gives me a float4, which I then return from the PS as the color written to a texture.
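Concretely, the write side looks something like this (the entry point name, input struct, and semantics here are simplified placeholders, not my exact shader):

struct PSInput
{
    float rawValue : TEXCOORD0;   // the [0.0, 1.0] value to store
};

float4 PackPS( PSInput input ) : COLOR0
{
    // The packed float4 becomes the color written to the RGBA texture.
    return Pack( input.rawValue );
}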
Upon reading the texture, I use this code:
float4 unpackFactors = float4( 1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0 );

float Unpack( float4 encodedValue )
{
    // Weight each channel by its place value and sum them back into one float.
    return dot( encodedValue, unpackFactors );
}
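The read side is along these lines (the sampler name is just illustrative):

sampler2D packedSampler;   // bound to the texture the packed value was rendered into

float ReadPackedValue( float2 uv )
{
    // tex2D hands back each channel remapped into [0.0, 1.0].
    return Unpack( tex2D( packedSampler, uv ) );
}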
This comes very close to the correct value, but it is off by as much as 1e-4 in some cases. Since I'm mapping this value onto a dimension of 20 miles, that error is on the order of ten feet, which causes some serious artifacts.
Duplicating the calculations myself, outside the shader, yields a precision error of only about 1e-10, which would be more than satisfactory for this application.
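By "duplicating the calculations" I mean running the same round trip as pure arithmetic, with no texture write/read in between. In HLSL terms the check is essentially this (I actually ran it on the CPU, where the intermediate math carries more precision):

float RoundTripError( float rawValue )
{
    // Pack and immediately unpack, skipping the texture entirely.
    return abs( Unpack( Pack( rawValue ) ) - rawValue );
}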
What could explain this? What is the GPU doing differently from my manual calculations?