Texture with D3DFMT_L16 format + shader

1 comment, last by dario_ramos 13 years, 11 months ago
Say I have a texture used to display a 16-bit monochrome image; that's why I use the D3DFMT_L16 format when creating the texture. Now, say I want to apply a pixel shader to that texture. The pixel shader receives an RGB value consisting of three floats. Am I losing information when a 16-bit value is converted into three floats? Can I have any control over that conversion? My aim is to control brightness and contrast with maximum precision, which is why I have to make sure I don't lose information.
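
For concreteness, here is a minimal sketch of the kind of ps_2_0 pixel shader being described; the sampler register, uniform names, and parameter ranges are illustrative assumptions, not details from the thread:

    // Minimal brightness/contrast sketch (names and ranges are illustrative).
    sampler2D image : register(s0);

    float brightness;  // assumed range, e.g. [-1, 1]
    float contrast;    // assumed range, e.g. [0, 2]

    float4 main(float2 uv : TEXCOORD0) : COLOR0
    {
        // D3DFMT_L16 samples as (L, L, L, 1), so any color channel
        // holds the normalized 16-bit luminance as a float in [0, 1].
        float lum = tex2D(image, uv).r;

        // Contrast pivots around mid-gray; brightness is a plain offset.
        lum = saturate((lum - 0.5f) * contrast + 0.5f + brightness);
        return float4(lum, lum, lum, 1.0f);
    }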
The conversion from fixed-point to floating-point is done in the texture sampling hardware, which differs a bit among GPUs. The only control you have is over the sampler states exposed in D3D, like the filter type and sRGB conversion. In terms of precision, a 32-bit float is plenty: its 24-bit significand can distinguish all 65,536 possible 16-bit values, and the result lands in the [0, 1] range, which is where floating-point numbers have the most precision.
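
As a sketch of that sampler-state control, in D3D9 effect-file syntax (the texture and sampler names are illustrative): point filtering keeps neighboring texels from being blended, and leaving sRGB conversion off keeps the sampled value a straight normalization of the 16-bit data.

    // Sampler-state sketch in effect-framework syntax (names illustrative).
    texture imageTex;

    sampler2D image = sampler_state
    {
        Texture     = <imageTex>;
        MinFilter   = POINT;  // no blending of neighboring texels
        MagFilter   = POINT;
        MipFilter   = NONE;
        SRGBTexture = FALSE;  // sample the raw normalized value
    };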
Alright, so if it depends on the hardware, I'll run a test with a special image to see whether it samples correctly. Thanks!

This topic is closed to new replies.
