
16 bit float error


I am trying to use 16-bit floats to improve performance in a very memory-bandwidth-intensive task. I expected some error when dropping from 32 bit, but the error is not randomly distributed: it consistently accumulates to increase the values rather than move them randomly. I could live with a slight introduction of noise, but a slight change of brightness is unacceptable. The error for my calculation is about 0.2% with 32 bit and 1.8% with 16 bit; 1.8% works out to ~5 increments of an 8-bit RGB value, which is noticeable.

I suspect that in the conversion from integers to 16-bit floats, something is being truncated instead of rounded. Is there a way to compensate for this? If these were integers I'd add 0.5 / 256, but that won't work with floats. If it is a truncation issue, I can fix it for the CPU-side calculations that are uploaded to the GPU. But what about when I convert a normal integer texture to a 16-bit floating-point one?
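As a quick sanity check on the truncation hypothesis, you can measure whether the uint8 → half-float conversion itself is biased. The sketch below uses NumPy's `float16` (which rounds to nearest even per IEEE 754) as a stand-in for whatever conversion your CPU/GPU path performs; your actual converter may behave differently, which is exactly what this kind of test would reveal:

```python
import numpy as np

# Sketch: check whether converting 8-bit values to half floats introduces
# a systematic (one-sided) bias or only symmetric rounding noise.
# Assumes NumPy's float16, which uses IEEE 754 round-to-nearest-even.

ints = np.arange(256, dtype=np.uint8)
exact = ints.astype(np.float64) / 255.0                 # reference values
half = (ints.astype(np.float64) / 255.0).astype(np.float16)

err = half.astype(np.float64) - exact                   # signed per-value error
print("mean signed error:", err.mean())                 # near 0 => no brightness shift
print("max abs error:", np.abs(err).max())

# With proper rounding, the worst-case half-float error is far below half
# an 8-bit step, so a uint8 -> float16 -> uint8 round trip is lossless.
restored = np.rint(half.astype(np.float64) * 255.0).astype(np.uint8)
print("round trip exact:", np.array_equal(restored, ints))
```

If the mean signed error from your real conversion path comes out clearly nonzero (all positive or all negative), that points to truncation somewhere in the pipeline rather than in the half-float format itself.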
