signed LUMINANCE8_ALPHA8

Under Direct3D, I store my normal maps in the format V8U8. This is a signed format, so in the pixel shader I can do:

normal.xy = texture.RG;
normal.z = (computed...)

Under OpenGL, Nvidia recommends using the format GL_LUMINANCE8_ALPHA8. However, I'm having some trouble getting correct results from it. I *think* my problem lies with the texture generation rather than the pixel shader. For reference, the only difference in the pixel shader is that rather than doing:

normal.xy = texture.RG;

it becomes:

normal.xy = texture.GA;

My results are 'close' to being correct, and at first glance appeared to be fine, but on closer examination the normal values are not correct. I create my texture data like this:

glTexImage2D(GL_TEXTURE_2D, miplevel, GL_LUMINANCE8_ALPHA8,
             Width, Height, 0,
             GL_LUMINANCE_ALPHA,
             GL_BYTE,    // signed data
             pData);     // pointer to my normal map data

There are two things I'm not sure about. Firstly, I cannot find anywhere whether GL_LUMINANCE8_ALPHA8 is a signed format. Will my pixel shader (in Cg) read the texture as -1 to 1, or 0 to 1? From observation it appears to be signed, but it would be nice if I could find documentation somewhere to support that.

Secondly, before calling glTexImage2D, I suspect I may need to use glPixelTransfer to tell OpenGL exactly how I want the texture data formatted. I've experimented with various values that I thought made sense, but with no luck.

Can anyone help? Alternatively, if anyone knows of a demo that generates a texture of format GL_LUMINANCE8_ALPHA8, that could be useful too.

Thanks
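In case it clarifies what I'm after, this is the kind of bias-and-expand workaround I'd fall back on if GL_LUMINANCE8_ALPHA8 turns out to be an unsigned format. It's only a sketch: srcNormals stands in for wherever my signed byte data actually lives, and Width, Height and miplevel are the same variables as above.

// Remap signed bytes (-128..127) into unsigned bytes (0..255) before upload,
// since GL_LUMINANCE8_ALPHA8 would store unsigned normalised values.
GLubyte* biased = new GLubyte[Width * Height * 2];
for (int i = 0; i < Width * Height * 2; ++i)
    biased[i] = (GLubyte)((int)srcNormals[i] + 128);

glTexImage2D(GL_TEXTURE_2D, miplevel, GL_LUMINANCE8_ALPHA8,
             Width, Height, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,   // data is now unsigned
             biased);

delete[] biased;

The Cg side would then expand the values back to the signed range, i.e. normal.xy = texture.GA * 2.0 - 1.0; instead of using the sample directly. But I'd much rather upload the signed data directly if the format supports it.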
To add to this, I've discovered that glTexImage2D is discarding the sign bit on all the data that I pass in.
If I use GL_SIGNED_LUMINANCE8_ALPHA8_NV, rather than GL_LUMINANCE8_ALPHA8, then everything works fine.
I can't find an equivalent of GL_SIGNED_LUMINANCE8_ALPHA8_NV for ATI cards, though.
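For what it's worth, the stopgap I'm considering is to pick the format at runtime and only do the signed upload on hardware that exposes it. This is just a sketch: I believe GL_SIGNED_LUMINANCE8_ALPHA8_NV comes from the NV_texture_shader extension (the token is in glext.h), and signedData / biasedData are placeholders for the two versions of my normal map data.

#include <cstring>   // strstr

// Check the extension string and fall back to the bias-and-expand path
// from my earlier post when the NV signed format isn't available.
const char* exts = (const char*)glGetString(GL_EXTENSIONS);
bool hasSignedLA = exts && strstr(exts, "GL_NV_texture_shader");

glTexImage2D(GL_TEXTURE_2D, miplevel,
             hasSignedLA ? GL_SIGNED_LUMINANCE8_ALPHA8_NV : GL_LUMINANCE8_ALPHA8,
             Width, Height, 0,
             GL_LUMINANCE_ALPHA,
             hasSignedLA ? GL_BYTE : GL_UNSIGNED_BYTE,
             hasSignedLA ? (const GLvoid*)signedData : (const GLvoid*)biasedData);

The shader would then have to skip the * 2 - 1 expansion when the signed format is in use; I'd probably pass a scale/bias pair as uniforms rather than maintain two shader versions. It's not pretty, though, so a cross-vendor signed two-channel format would still be much nicer.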

Any ideas?

Thanks
