8-bit floats in Direct3D shaders
If you pass a texture (for example, of format D3DFMT_A8R8G8B8) to a pixel shader, each color channel is represented by a float normalized to the range [0.0, 1.0]. I was just wondering... how exactly is this implemented using 8 bits per channel?
Quote:Original post by Koiby25
8 bits gives you a range of 0..255. So to get that to 0..1, you just divide by 255 - which is what the GPU does; 0 -> 0.0, 255 -> 1.0, 128 -> 0.50196, etc.
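To make that concrete, here's a minimal ps_2_0 sketch (the sampler register, entry point name, and semantics are just illustrative); the shader never sees the raw bytes, because the sampler has already done the divide:

// The texture unit converts each 8-bit channel to a float in [0.0, 1.0]
// before the shader runs, effectively computing channel / 255.0.
sampler2D tex : register(s0);

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // A stored byte of 128 arrives here as 128/255 = 0.50196.
    float4 texel = tex2D(tex, uv);
    return texel;
}

Going the other way, when the shader result is written to an 8-bit render target, the value is clamped to [0, 1] and scaled back, i.e. round(f * 255).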
Quote:If you pass a texture (for example, of format D3DFMT_A8R8G8B8) to a pixel shader, each color channel is represented by a float normalized to the range [0.0, 1.0]. I was just wondering... how exactly is this implemented using 8 bits per channel?
Not necessarily.
Shader 1.x gives you about 9 bits of precision plus a sign bit, not a full 32-bit float. I don't remember whether the extra bit allowed values between 1 and 2 without overflowing, or whether it added an extra bit at the low end.
Shader 2.x and 3.x support both floatN and halfN, where float should be a full 32-bit float and half should be a 16-bit float. nVidia's GeForce FX 5xxx cards performed full 32-bit float operations slowly, and it was common to recommend half unless you really needed the extra precision. In practice, anything beyond 1.1 shaders was too slow on those 5xxx cards, and it's easier to just pretend they're limited to the fixed-function pipeline. ATI cards of the same era ignored what you asked for and performed all operations at 24-bit float precision, which is why the ATI Radeon 9800 held the performance crown until nVidia's next generation reclaimed it.
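For example, in HLSL (a sketch assuming a ps_2_0 target; the names are illustrative) you'd write half wherever reduced precision is acceptable:

sampler2D tex : register(s0);

half4 main(half2 uv : TEXCOORD0) : COLOR
{
    // half requests 16-bit precision; declaring these as float4/float2
    // would request full 32-bit instead. The hardware treats this as a
    // hint: that ATI generation ran everything at 24 bits regardless.
    half4 color = tex2D(tex, uv);
    return color;
}

On the 5xxx series the half version could be noticeably faster; hardware that ignores the hint just runs it at its native precision.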