# OpenGL float to unorm conversion


## Recommended Posts

Hi,

My question is closely related to (more or less the same as) this one:

https://devtalk.nvidia.com/default/topic/887223/float-to-unorm-conversion/

When I write a program in DirectX or OpenGL that renders into a 32-bit, 4-channel (4x8-bit) non-sRGB texture of unorm unsigned char type, the conversion that should be used (according to the OpenGL documentation) is:

u = round(f * 255.0)

However, if I write something like

65.55 / 255.0

into f, I still get 65 for u, and only with 65.56 do I get 66. I tested this on an NVidia 780 Ti and an NVidia 980 graphics card. Could somebody explain this behavior?
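
To make the expectation concrete, here is a minimal C sketch of that formula applied to the value in question (clamping to [0, 1] is assumed here, since it is part of the conversion for unsigned normalized formats; the printed values are approximate):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 65.55f / 255.0f;              /* value written by the shader */
    float c = fminf(fmaxf(f, 0.0f), 1.0f);  /* clamp to [0, 1] */
    float scaled = c * 255.0f;              /* roughly 65.550003, i.e. above 65.5 */
    unsigned char u = (unsigned char)roundf(scaled);

    printf("scaled = %f, u = %d\n", scaled, (int)u);  /* expected: u = 66 */
    return 0;
}
```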

Kind regards,

##### Reply
8 bits in the range 0...1 are not enough to precisely store 65.55/255.0.

In general, you should not assume anything about precision when handling floating-point values and their conversions to fixed point. All you are guaranteed to get is an approximation of the correct result.
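
As a small illustration of that quantization, here is a minimal C sketch (the printed values are approximate):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 65.55f / 255.0f;   /* the value we ask the GPU to store */

    /* An 8-bit unorm can only hold the 256 values k / 255, so whatever is
       written has to snap to one of the two neighbours of f. */
    unsigned char u = (unsigned char)roundf(f * 255.0f);

    printf("requested  %.9f\n", f);
    printf("neighbours %.9f and %.9f\n", 65.0f / 255.0f, 66.0f / 255.0f);
    printf("stored     %.9f (u = %d)\n", u / 255.0f, (int)u);
    return 0;
}
```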

##### Reply

Of course, but as the OpenGL documentation says, the stored value should be computed using the formula:

u = round(f * 255.0)

so, what should happen is the following:

#include <math.h> /* for roundf */

float temp1 = 65.55f / 255.0f;                    /* value handed to the GPU */
float temp2 = roundf(temp1 * 255.0f);             /* the conversion from the spec */
unsigned char storedValue = (unsigned char)temp2; /* 66 */

so casting the value to unsigned char should be the last step. (If you compile this with a C compiler and execute it, you indeed get 66, as expected.)
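
For what it's worth, 65 is exactly what you would get if the hardware truncated f * 255 instead of rounding it to the nearest integer. Here is a small C sketch comparing the two, just as an illustration of one possible explanation, not a claim about what the driver actually does:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    float scaled = (65.55f / 255.0f) * 255.0f;   /* roughly 65.550003 */

    unsigned char rounded   = (unsigned char)roundf(scaled); /* round to nearest: 66 */
    unsigned char truncated = (unsigned char)scaled;         /* fraction dropped: 65 */

    printf("rounded = %d, truncated = %d\n", (int)rounded, (int)truncated);
    return 0;
}
```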

##### Reply
If you're doing everything right, then that sounds like an NV bug :(
