Hi,
My question is related to (more or less the same as) the one in this thread:
https://devtalk.nvidia.com/default/topic/887223/float-to-unorm-conversion/
When I write a DirectX or OpenGL program that renders into a 32-bit, 4-channel (4x8-bit), non-sRGB unorm texture (unsigned char per channel), the conversion that should be used (according to the OpenGL documentation) is:
u = round(f * 255.0)
However, if I write something like 65.55 / 255.0 into f, I still get 65 for u; only for 65.56 / 255.0 do I get 66. I tested this on an NVIDIA GTX 780 Ti and a GTX 980 graphics card. Could somebody explain this behavior?
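For reference, here is a minimal CPU-side sketch of the conversion as I read it from the documentation (assuming plain IEEE-754 single-precision math and std::lround for the rounding; this is just my sketch, not necessarily what the GPU's fixed-function converter actually does):

#include <cmath>
#include <cstdio>

// Reference float -> 8-bit unorm conversion: u = round(f * 255.0),
// with f clamped to [0, 1] first.
static unsigned char float_to_unorm8(float f)
{
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return static_cast<unsigned char>(std::lround(f * 255.0f));
}

int main()
{
    // The two shader output values from my test.
    float a = 65.55f / 255.0f;
    float b = 65.56f / 255.0f;
    std::printf("65.55/255 -> %u\n", (unsigned)float_to_unorm8(a));
    std::printf("65.56/255 -> %u\n", (unsigned)float_to_unorm8(b));
    return 0;
}

With this reference formula I would expect 66 in both cases, which is why the 65 I get back from the hardware surprises me.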
Thank you in advance.
Kind regards,
Muad'Dib