
L16 texture not displaying correctly


I am loading a 16-bit grayscale image into an L16 DirectX texture. I have written a pixel shader that converts the 16-bit value to an 8-bit value based on settings provided by the user. The texture does not display correctly on Nvidia Quadro FX 1500/1700 or 3450/4000 series graphics cards, but it does display correctly on the GeForce, Quadro FX 580, and ATI cards I have tried.
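
For context, the setup looks roughly like this (a simplified sketch only; the window/level style mapping and names like windowMin/windowMax are placeholders for the actual user settings, which aren't shown here):

IDirect3DTexture9* pTex = NULL;
HRESULT hr = pDevice->CreateTexture(width, height, 1, 0, D3DFMT_L16,
                                    D3DPOOL_MANAGED, &pTex, NULL);
// Fill mip level 0 with the raw 16-bit grayscale data via LockRect/UnlockRect.

// The L16 sample arrives in the pixel shader as a normalized float in [0,1],
// so an 8-bit output can be produced with a remap such as
//   result = saturate((sample - windowMin) / (windowMax - windowMin));
// where windowMin/windowMax come from the user settings, also normalized:
pEffect->SetFloat("windowMin", userWindowMin / 65535.0f);
pEffect->SetFloat("windowMax", userWindowMax / 65535.0f);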

Does anyone have an idea of what could be causing this and if I can make it work?

Thanks,

Jared

Does the card actually support D3DFMT_L16? Not all cards do. Assuming D3D9, you can check with:

pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
                        0, D3DRTYPE_TEXTURE, D3DFMT_L16);

Where pD3D is the IDirect3D9 interface (CheckDeviceFormat is not a device method) and D3DFMT_X8R8G8B8 is the adapter format (the desktop format in windowed mode, the display mode format in fullscreen).
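
If you don't want to hardcode the adapter format, you can query it first; a small sketch (pD3D being the same IDirect3D9 interface):

D3DDISPLAYMODE mode;
pD3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);
HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                     mode.Format, 0, D3DRTYPE_TEXTURE, D3DFMT_L16);
if (FAILED(hr))
{
    // D3DFMT_L16 textures are not supported with this adapter format.
}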

That check succeeds (it returns D3D_OK). It seems that when the pixel shader reads a value from the texture, precision has been lost, as if it were using a half float instead of a float. When the texture is drawn, whole sections of it come out as the same shade of gray, and those sections should vary in shade more than they do.

The pixel shader doesn't use half types, and I am not specifying partial precision when compiling the effect.
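
For what it's worth, the effect is compiled along these lines (the filename is a placeholder; D3DXSHADER_PARTIALPRECISION is the flag that would request reduced precision, and it is deliberately left out):

ID3DXEffect* pEffect = NULL;
ID3DXBuffer* pErrors = NULL;
DWORD flags = 0;   // notably NOT D3DXSHADER_PARTIALPRECISION
HRESULT hr = D3DXCreateEffectFromFile(pDevice, "convert16to8.fx",
                                      NULL, NULL, flags, NULL,
                                      &pEffect, &pErrors);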

Sounds like it could be a driver issue.

Have you tried it with the reference rasterizer to make sure that gives the results you expect?
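
Switching to the reference rasterizer is just a matter of changing the device type when creating the device; a sketch assuming the usual pD3D, hWnd, and d3dpp variables:

IDirect3DDevice9* pRefDevice = NULL;
HRESULT hr = pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_REF, hWnd,
                                D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                &d3dpp, &pRefDevice);
// Render the same scene with pRefDevice and compare against the HAL device output.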

Yep. I have tried it with the reference rasterizer, and while it is very slow, it does look correct. I have also installed the latest Nvidia drivers as of September 2010.

I am trying to get hold of Nvidia as well, but so far I haven't received any responses. If anyone knows how to contact them, or has other ideas about what this could be, that would be appreciated.

I'm currently trying to break the 16-bit grayscale image into an ARGB texture, placing 4 bits into the upper bits of each byte of the ARGB value, and then reconstructing a single float from those channels in the shader. It works reasonably well, but some strange artifacts show up in certain ranges of data.
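
A rough sketch of that packing step, assuming a D3DFMT_A8R8G8B8 texture and a 16-bit source buffer (variable names are placeholders):

D3DLOCKED_RECT lr;
if (SUCCEEDED(pArgbTex->LockRect(0, &lr, NULL, 0)))
{
    const unsigned short* src = pSource16;   // raw 16-bit grayscale pixels
    for (UINT y = 0; y < height; ++y)
    {
        DWORD* dst = (DWORD*)((BYTE*)lr.pBits + y * lr.Pitch);
        for (UINT x = 0; x < width; ++x)
        {
            unsigned short v = src[y * width + x];
            // One nibble of the 16-bit value goes into the upper 4 bits of each channel.
            BYTE a = (BYTE)(((v >> 12) & 0xF) << 4);
            BYTE r = (BYTE)(((v >>  8) & 0xF) << 4);
            BYTE g = (BYTE)(((v >>  4) & 0xF) << 4);
            BYTE b = (BYTE)(( v        & 0xF) << 4);
            dst[x] = D3DCOLOR_ARGB(a, r, g, b);
        }
    }
    pArgbTex->UnlockRect(0);
}
// In the shader, each channel is scaled back to a nibble (channel * 255 / 16, rounded)
// and the nibbles are recombined as a*4096 + r*256 + g*16 + b to recover the 16-bit value.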
