Problem Converting Float to RGBA

2 comments, last by jollyjeffers 16 years, 5 months ago
I'm having precision problems in my HLSL pixel shader code that converts a floating point number (range = [0.0, 1.0]) to RGBA and then back to float. I use floating point textures on cards that support them, and things work great there, but I now need to get this algorithm working on the rest of the cards. My packing algorithm is Ysaneya's (link):

// Powers of 256 used to spread the value across the four channels, and a
// mask used to strip out the bits that belong to the neighbouring channel.
float4 packFactors = float4( 256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0 );
float4 bitMask = float4( 0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0 );

float4 Pack( float rawValue )
{
	// Scale by increasing powers of 256 and keep only the fractional part,
	// so each channel sees the value shifted by a different number of bits.
	float4 packedValue = float4( frac(packFactors * rawValue) );

	// Subtract the finer bits already captured by the neighbouring channel
	// (the one with the next larger scale factor), leaving each channel
	// with just its own 8-bit digit.
	packedValue -= packedValue.xxyz * bitMask;
	return packedValue;
}
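
For reference, here is a rough sketch of how the packed value might be written out, assuming a pixel shader that stores a normalised [0, 1] value into an 8-bit RGBA render target - the names DepthPS and depth01 are placeholders, not from the original post:

// Hypothetical pixel shader: writes a [0,1] value into an 8-bit RGBA
// render target via Pack(). Names are illustrative only.
float4 DepthPS( float depth01 : TEXCOORD0 ) : COLOR0
{
	// Each output channel carries one 8-bit "digit" of depth01.
	return Pack( saturate(depth01) );
}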



This returns a float4, which I then output from the pixel shader as the color written to a texture. When I read the texture back, I use this code:

// Reciprocals of the pack factors: a single dot product reassembles the
// four 8-bit digits back into one float.
float4 unpackFactors = float4( 1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0 );

float Unpack( float4 encodedValue )
{
	// Weight each channel by the reciprocal of its pack factor and sum.
	return dot( encodedValue, unpackFactors );
}
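
Purely as an illustration, the read side might then look like this, assuming the packed texture is bound with point filtering so neighbouring packed texels never get blended - ReadPacked, packedSampler and uv are placeholder names:

// Hypothetical helper: fetches the RGBA-encoded texel and reconstructs
// the original float. Assumes packedSampler uses point sampling.
float ReadPacked( sampler2D packedSampler, float2 uv )
{
	return Unpack( tex2D(packedSampler, uv) );
}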



This comes very close to the correct value but is off by as much as 1e-4 in some cases. Since I'm scaling this up to a dimension of 20 miles, that causes some serious artifacts. Duplicating the calculations myself yields a precision error of only 1e-10, which would be more than satisfactory for this application. What could explain this? What is the GPU doing differently from my manual calculations?
What video card is this on, and which pixel shader version?

On some Pixel Shader 2 cards the floating point values in the pixel shader are only 24-bit. I believe they are always 32-bit for Pixel Shader 3, but I'm not absolutely certain.
I'm experiencing this problem on a GeForce 8800 GTS.
The 8800 GTS has to be full FP32 precision, so that shouldn't be an issue. Adam_42 is correct, though: ATI's pre-SM3 hardware was only 24-bit, whereas all Nvidia hardware since the GeForce 6 series has been 32-bit. All SM3 hardware must be 32-bit.

If you suspect it could be the hardware, you really should check it against the reference rasterizer. The 8800 should have good precision/quality characteristics, so I'd be surprised if it were the hardware.

It's probably also worth running it through PIX and debugging an individual pixel you know to be correct and one you know to be incorrect - step through and see if you can identify which operation is causing the precision problems.
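
As a sketch of that kind of isolation test (not something from the original thread), you could also round-trip a known value through Pack/Unpack alone, without going through the texture, so the shader arithmetic is separated from whatever the render target storage does - testValue and the scale factor below are arbitrary:

// Hypothetical debug shader: packs and unpacks a value supplied by the
// application (so the compiler can't fold it away at compile time) and
// outputs the absolute error, scaled up so it shows up on screen.
float testValue;

float4 RoundTripErrorPS() : COLOR0
{
	float recovered = Unpack( Pack(testValue) );
	return float4( abs(recovered - testValue) * 10000.0, 0.0, 0.0, 1.0 );
}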

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

