Texture with D3DFMT_L16 format + shader

Say I have a texture used to display a 16-bit monochrome image, which is why I create the texture with the D3DFMT_L16 format. Now say I want to apply a pixel shader to that texture. The pixel shader receives an RGB value made up of three floats. Am I losing information when a 16-bit value is converted to three floats? Do I have any control over that conversion? My goal is to control brightness and contrast with maximum precision, so I need to make sure no information is lost.
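
For reference, here is roughly how I create the texture (a minimal sketch; the function name and the pool/usage choices are just what I happen to use):

```cpp
#include <d3d9.h>

// Minimal sketch: create a 16-bit luminance texture.
IDirect3DTexture9* CreateL16Texture(IDirect3DDevice9* device,
                                    UINT width, UINT height)
{
    IDirect3DTexture9* tex = NULL;
    HRESULT hr = device->CreateTexture(
        width, height,
        1,                // a single mip level, so no filtered mip chain
        0,                // no special usage flags
        D3DFMT_L16,       // 16 bits of luminance per texel
        D3DPOOL_MANAGED,  // managed pool also keeps the texture lockable
        &tex, NULL);
    return SUCCEEDED(hr) ? tex : NULL;
}
```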

The conversion from fixed-point to floating-point is done in the texture sampling hardware, which will differ a bit among GPUs. The only control you have is over the sampler states exposed in D3D, such as the filter type and sRGB conversion. In terms of precision, a 32-bit float should be plenty: the value will be in the [0, 1] range (where floating-point numbers have the most precision), and a float's 24-bit significand is more than enough to keep all 65536 levels of a 16-bit value distinct.
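
As a concrete sketch of those controls, here is what a brightness/contrast pixel shader over an L16 texture might look like in HLSL with the D3D9 effect framework (the texture, sampler, and parameter names are illustrative, not from your post):

```hlsl
texture imageTex;

sampler2D image = sampler_state
{
    Texture     = <imageTex>;
    MinFilter   = Point;  // point sampling: no blending between texels
    MagFilter   = Point;
    SRGBTexture = FALSE;  // no sRGB conversion on read
};

float brightness;  // additive offset, e.g. in [-1, 1]
float contrast;    // multiplicative scale, e.g. in [0, 2]

float4 AdjustPS(float2 uv : TEXCOORD0) : COLOR0
{
    // D3DFMT_L16 samples as (L, L, L, 1), so any colour channel
    // carries the full value, already normalised to [0, 1].
    float l = tex2D(image, uv).r;

    // Contrast pivots around mid-grey, then brightness shifts the result;
    // the math runs in 32-bit float, so nothing is lost at this stage.
    l = (l - 0.5f) * contrast + 0.5f + brightness;

    return float4(l, l, l, 1.0f);
}
```

Point filtering matters here: with linear filtering the hardware blends neighbouring texels before the shader ever sees them, which would already change the values you are trying to preserve.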

Alright, since it depends on the hardware, I'll run a test with a special image to check that the sampling behaves as expected. Thanks!
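
One way I could build that test image (a rough sketch, assuming the texture is lockable): fill a 256x256 L16 texture so that every 16-bit value appears exactly once, render it with point sampling, and read the output back to see exactly how each level was converted.

```cpp
#include <d3d9.h>

// Rough sketch: fill a 256x256 D3DFMT_L16 texture with a ramp that
// contains every 16-bit value exactly once. Assumes 'tex' is lockable
// (e.g. created in the managed pool).
void FillTestRamp(IDirect3DTexture9* tex)
{
    D3DLOCKED_RECT lr;
    if (FAILED(tex->LockRect(0, &lr, NULL, 0)))
        return;

    for (UINT y = 0; y < 256; ++y)
    {
        WORD* row = (WORD*)((BYTE*)lr.pBits + y * lr.Pitch);
        for (UINT x = 0; x < 256; ++x)
            row[x] = (WORD)(y * 256 + x);  // values 0..65535 in order
    }

    tex->UnlockRect(0);
}
```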
