Muad'Dib

OpenGL float to unorm conversion


Hi,

 

my question is related to (more or less the same as) the one in this thread:

 

https://devtalk.nvidia.com/default/topic/887223/float-to-unorm-conversion/

 

When I write a program in DirectX or OpenGL that renders into a 32-bit, 4-channel (4x8-bit) non-sRGB texture with an 8-bit unsigned normalized (unorm) format, the conversion that should be used (according to the OpenGL documentation) is:

 

u = round(f * 255.0)

 

However, if I write something like:

 

f = 65.55 / 255.0, I still get 65 for u, and only for 65.56 do I get 66. I tested this on an NVIDIA GTX 780 Ti and a GTX 980 graphics card. Could somebody explain this behavior?
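Just to spell out what I expect from that formula, assuming exact arithmetic (65.55 is simply my test value, nothing special about it):

f = 65.55 / 255.0 ≈ 0.257059
u = round(f * 255.0) = round(65.55) = 66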

 

Thank you in advance.

 

Kind regards,

 

Muad'Dib

8 bits over the range 0..1 are not enough to store 65.55/255.0 precisely.

In general, you should not assume anything about precision when handling floating-point values and their conversions to fixed point. All you are guaranteed to get is an approximation of the correct result.
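For instance, a small standalone C program (my own illustration, nothing from your renderer) makes the approximation visible: 65.55 has no exact binary representation, so the value that reaches the rounding step is already slightly off from 65.55.

#include <math.h>
#include <stdio.h>

int main(void)
{
    float f      = 65.55f / 255.0f;   /* the value handed to the pipeline     */
    float scaled = f * 255.0f;        /* the value that actually gets rounded */

    printf("f          = %.9g\n", f);
    printf("f * 255    = %.9g\n", scaled);
    printf("roundf(..) = %.0f\n", roundf(scaled));
    return 0;
}

The exact digits depend on the platform, and the GPU is under no obligation to carry the intermediate results at the same precision as this CPU code.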


Of course, but as the OpenGL documentation says, the stored value should be computed using the formula:

 

u = round(f * 255.0)

 

so, what should happen is the following:

 

#include <math.h>   /* for roundf */

float temp1 = 65.55f / 255.0f;                     /* f, the normalized value            */
float temp2 = roundf(temp1 * 255.0f);              /* round to nearest, as the spec says */
unsigned char storedValue = (unsigned char)temp2;  /* the cast is the very last step     */
 
so casting the value to unsigned char should be the last step. (If you compile this with a C compiler and run it, you indeed get 66, as expected.)
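To make the difference concrete, here is a small standalone comparison I put together (CPU-side C only; I am not claiming this is what the driver does internally). Rounding before the cast gives the 66 I expect, while skipping the rounding and letting the cast truncate gives exactly the 65 I observe:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 65.55f / 255.0f;

    /* Conversion as the documentation describes it: scale, round, then cast. */
    unsigned char rounded = (unsigned char)roundf(f * 255.0f);

    /* Same conversion with the rounding step left out: the cast truncates. */
    unsigned char truncated = (unsigned char)(f * 255.0f);

    printf("round, then cast: %d\n", rounded);    /* prints 66 */
    printf("cast (truncates): %d\n", truncated);  /* prints 65 */
    return 0;
}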

