float to unorm conversion

Started by Muad'Dib
2 comments, last by Hodgman 7 years, 8 months ago

Hi,

my question is related to (more or less the same as) this one:

https://devtalk.nvidia.com/default/topic/887223/float-to-unorm-conversion/

When I write a program in DirectX or OpenGL that renders into a 32-bit, 4-channel (4x8-bit) non-sRGB unorm texture (unsigned char per channel), the conversion that should be applied (according to the OpenGL documentation) is:

u = round(f * 255.0)

However, if I write something like:

65.55 / 255.0 into f, I still get 65 for u, and only from 65.56 onward do I get 66. I tested this on an NVIDIA 780 Ti and an NVIDIA 980 graphics card. Could somebody explain this behavior?
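
For reference, here is a minimal CPU-side sketch of that spec formula (the helper name float_to_unorm8 and the test values are just my own, picked around the boundary above), which is what I compare the GPU results against:

#include <math.h>
#include <stdio.h>

/* CPU-side reference for the spec formula u = round(f * 255.0). */
static unsigned char float_to_unorm8(float f)
{
    return (unsigned char)roundf(f * 255.0f);
}

int main(void)
{
    /* Inputs around the boundary described above, expressed as x / 255. */
    const float values[] = { 65.49f, 65.50f, 65.55f, 65.56f };
    for (int i = 0; i < 4; ++i)
    {
        float f = values[i] / 255.0f;
        printf("%.2f / 255 -> %u\n", values[i], (unsigned)float_to_unorm8(f));
    }
    return 0;
}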

Thank you in advance.

Kind regards,

Muad'Dib

8 bits over the range 0...1 are not enough to store 65.55/255.0 precisely.

In general, you should not assume anything about precision when handling floating-point values and their conversions to fixed point. All you are guaranteed to get is an approximation of the correct result.
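
For illustration (just reusing the number from the question): an 8-bit unorm channel can only hold multiples of 1/255, so 65.55/255 necessarily snaps to one of its two representable neighbours, and which one you get depends on how the implementation rounds:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float f = 65.55f / 255.0f;                  /* value from the question */

    /* An 8-bit unorm channel stores k / 255 for k = 0..255, nothing in between. */
    float lower = floorf(f * 255.0f) / 255.0f;  /* 65 / 255 */
    float upper = ceilf(f * 255.0f)  / 255.0f;  /* 66 / 255 */

    printf("requested value : %.9f\n", f);
    printf("nearest below   : %.9f\n", lower);
    printf("nearest above   : %.9f\n", upper);
    return 0;
}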

Niko Suni

Of course, but as the OpenGL documentation says, the stored value should be computed using the formula:

u = round(f * 255.0)

so, what should happen is the following:

#include <math.h>   /* needed for roundf */

float temp1 = 65.55f / 255.0f;
float temp2 = roundf(temp1 * 255.0f);
unsigned char storedValue = (unsigned char)temp2;
so casting the value to unsigned char should be the last step. (If you compile this with a C compiler and run it, you indeed get 66, as expected.)
If you're doing everything right, then that sounds like an NV bug :(

