Problem Converting Float to RGBA

jaafit:


I'm having precision problems in my HLSL PS code that converts a floating-point number (range = [0.0, 1.0]) to RGBA and then back to float. I use floating-point textures on cards that support them, and things work great there, but now I need to get this algorithm working on the rest of the cards. My packing algorithm is Ysaneya's (link):
float4 packFactors = float4( 256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0 );
float4 bitMask = float4( 0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0 );

float4 Pack( float rawValue )
{
	// Scaling by successive powers of 256 shifts lower-order bits into the
	// fractional window: x ends up as the least significant channel, w the most.
	float4 packedValue = float4( frac( packFactors * rawValue ) );
	// Each channel's bits below 1/256 duplicate the next-finer channel;
	// subtracting them leaves every channel an exact multiple of 1/256 (one byte).
	packedValue -= packedValue.xxyz * bitMask;
	return packedValue;
}



This returns a float4, which I then output from the PS as the color written to a texture. When reading the texture back, I use this code:
float4 unpackFactors = float4( 1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0 );

float Unpack( float4 encodedValue )
{
	// The dot product recombines the four channels, weighting each one by the
	// inverse of the factor it was packed with.
	return dot( encodedValue, unpackFactors );
}



This comes very close to the correct value but is off by as much as 1e-4 in some cases. Since I'm mapping this value onto a dimension of 20 miles, that causes some serious artifacts. Duplicating the calculations myself yields a precision error of only 1e-10, which would be more than satisfactory for this application. What could explain this? What is the GPU doing differently from my manual calculations?
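(As an aside, two properties of this scheme are easy to overlook: an 8-bit UNORM render target stores each channel as k/255, while Pack/Unpack work in steps of 1/256, and frac(256^3 * rawValue) is evaluated just under 2^24, exactly where fp32 runs out of fractional bits. Below is a sketch of a floor-based, base-255 variant that sidesteps both; PackBase255/UnpackBase255 are made-up names, the channel order matches the code above with w most significant, and this is an alternative formulation rather than a fix confirmed in this thread.)

float4 PackBase255( float rawValue )
{
	// Peel off bytes most-significant first, working in base 255 so that each
	// stored channel lands exactly on an 8-bit UNORM step (k/255).
	float v = rawValue * 255.0;
	float4 packedValue;
	packedValue.w = floor( v );          // most significant byte
	v = frac( v ) * 255.0;
	packedValue.z = floor( v );
	v = frac( v ) * 255.0;
	packedValue.y = floor( v );
	packedValue.x = frac( v ) * 255.0;   // least significant remainder
	return packedValue / 255.0;
}

float UnpackBase255( float4 encodedValue )
{
	// Base-255 weights mirror the pack above.
	return dot( encodedValue,
	            float4( 1.0 / (255.0 * 255.0 * 255.0), 1.0 / (255.0 * 255.0), 1.0 / 255.0, 1.0 ) );
}

No intermediate here ever exceeds 255, so the frac() calls are never starved of mantissa bits the way frac(256.0 * 256.0 * 256.0 * rawValue) is.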

Adam_42:

What video card is this on, and which pixel shader version?

On some Pixel Shader 2 cards, the floating-point values in the pixel shader are only 24-bit. I believe they are always 32-bit for Pixel Shader 3, but I'm not absolutely certain.
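To put rough numbers on that: near rawValue = 1.0, the x channel computes frac() of a value just below 2^24, which is the worst possible spot. A small illustration (made up for this point, not from the thread):

float TopChannelIntermediate()
{
	float rawValue = 0.999999;                        // near the top of [0.0, 1.0]
	float scaled = 256.0 * 256.0 * 256.0 * rawValue;  // ~16777199, just below 2^24
	// fp32 (24-bit mantissa): representable values are spaced ~1.0 apart here,
	// so essentially no fractional bits survive for frac() to return.
	// fp24 (assuming a 16-bit stored mantissa): the spacing is ~128 here, and
	// even the 256^2 channel's intermediate (~2^16) has no fractional bits left.
	return frac( scaled );
}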

The 8800 GTS has to be full FP32 precision, so that shouldn't be an issue. Adam_42 is correct, though: ATI's pre-SM3 hardware was only 24-bit, whereas all of Nvidia's hardware since the GeForce 6 has been 32-bit. All SM3 hardware must be 32-bit.

If you suspect it could be the hardware, you really should check it against the reference rasterizer. The 8800 should have good precision/quality characteristics, so I'd be surprised if it were the hardware.

It's probably also worth running it through PIX and debugging an individual pixel you know to be correct and one you know to be incorrect; step through and see if you can identify which operation is causing the precision problems.
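For the PIX pass, one low-tech trick is to temporarily output successive intermediates of Pack() as the pixel shader's color so each stage can be read back and compared against your CPU reference for the same pixel. Something like this (DebugPackStage is just a made-up name; it assumes the packFactors and bitMask globals from your first post are in scope):

float4 DebugPackStage( float rawValue, bool afterMask )
{
	float4 packedValue = float4( frac( packFactors * rawValue ) );
	if (!afterMask)
		return packedValue;    // stage 1: the raw frac() results
	packedValue -= packedValue.xxyz * bitMask;
	return packedValue;        // stage 2: after the duplicated bits are stripped
}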

hth
Jack

