dario_ramos

Texture with D3DFMT_L16 format + shader


Say I have a texture used to display a 16-bit monochrome image; that's why I use the D3DFMT_L16 format when creating it. Now, say I want to apply a pixel shader to that texture. The pixel shader receives an RGB value composed of three floats. Am I losing information when the 16-bit value is converted to those floats? Do I have any control over that conversion? My aim is to control brightness and contrast with maximum precision, so I have to make sure I don't lose information.
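
For context, this is roughly how the texture gets created; a minimal sketch assuming a valid IDirect3DDevice9 pointer and known image dimensions (the function name, pool and usage flags here are placeholders, not anything mandated by the format):

```cpp
// Minimal sketch: create a 16-bit luminance texture in Direct3D 9.
// The device pointer, dimensions and D3DPOOL_MANAGED pool are assumptions.
#include <d3d9.h>

IDirect3DTexture9* CreateL16Texture(IDirect3DDevice9* device,
                                    UINT width, UINT height)
{
    IDirect3DTexture9* texture = NULL;
    HRESULT hr = device->CreateTexture(
        width, height,
        1,               // a single mip level
        0,               // no special usage flags
        D3DFMT_L16,      // 16-bit luminance, one channel
        D3DPOOL_MANAGED, // let the runtime manage the video-memory copy
        &texture,
        NULL);
    return SUCCEEDED(hr) ? texture : NULL;
}
```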

The conversion from fixed-point to floating-point is done in the texture sampling hardware, which differs a bit between GPUs. The only control you have is over the sampler states exposed in D3D, such as the filter type and sRGB conversion. In terms of precision, a 32-bit float is plenty, since the value will be in the [0, 1] range (which is where floating-point numbers have the most precision).
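
To make that concrete, here is a minimal sketch of a brightness/contrast pixel shader for such a texture, compiled from source with D3DX. The constant names, entry point and ps_2_0 target are assumptions. The sampler delivers the 16-bit luminance as a 32-bit float in [0, 1], replicated into r, g and b, so the adjustment itself loses nothing relative to the original 16 bits; the render-target format then determines the precision of the final output.

```cpp
// Minimal sketch: a brightness/contrast pixel shader for an L16 texture.
// Constant names, entry point and the ps_2_0 target are assumptions.
#include <d3d9.h>
#include <d3dx9.h>

static const char g_BrightnessContrastPS[] =
    "sampler2D ImageSampler : register(s0);                     \n"
    "float Brightness;   // additive offset, 0 = unchanged      \n"
    "float Contrast;     // multiplicative scale, 1 = unchanged \n"
    "                                                           \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR0                \n"
    "{                                                          \n"
    "    // L16 is sampled as a float in [0, 1], replicated     \n"
    "    // into r, g and b; take one channel.                  \n"
    "    float lum = tex2D(ImageSampler, uv).r;                 \n"
    "    // Apply contrast around mid-grey, then brightness,    \n"
    "    // all in 32-bit float.                                \n"
    "    lum = (lum - 0.5) * Contrast + 0.5 + Brightness;       \n"
    "    return float4(lum, lum, lum, 1.0);                     \n"
    "}                                                          \n";

IDirect3DPixelShader9* CreateBrightnessContrastShader(IDirect3DDevice9* device)
{
    ID3DXBuffer* code = NULL;
    ID3DXBuffer* errors = NULL;
    IDirect3DPixelShader9* shader = NULL;

    HRESULT hr = D3DXCompileShader(
        g_BrightnessContrastPS, sizeof(g_BrightnessContrastPS) - 1,
        NULL, NULL,            // no macros, no include handler
        "main", "ps_2_0",      // entry point and target profile
        0, &code, &errors, NULL);

    if (SUCCEEDED(hr))
        device->CreatePixelShader((const DWORD*)code->GetBufferPointer(),
                                  &shader);

    if (code)   code->Release();
    if (errors) errors->Release();
    return shader;
}
```

Brightness and Contrast would then be set per frame through the shader's constant table (the ppConstantTable argument of D3DXCompileShader, passed as NULL above) or with SetPixelShaderConstantF.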
