Burnhard

Luminance texture format L16


I'm using the L16 format to store a texture. Data comes to me as 16-bit instrument values, so I'm going to use a shader to do a palette lookup to colourise it, i.e. I store the palette in a 256x256 texture and use the luminance from the first texture as the (u, v) lookup coordinate into the palette texture. I'm working on the assumption that the L16 pixel format will give me the high byte of the 16-bit value in one channel and the low byte in another, but I'm not sure this is the case. My shader is quite simple (assembler), as follows:
static const char shader[] =
{
    "ps.1.1              // Shader version 1.1                          \n"
    "tex t0              // Sample the L16 on channel 0                 \n"
    "texreg2ar t1, t0    // Index low,high into palette texture on t1   \n"
    "mov r0, t1          // Output the result                           \n"
};

The output seems to be wrong and using PIX to debug doesn't seem to give me a whole lot of clues. I've visually verified that the L16 texture looks right and that the palette texture looks right by painting a quad with them. What daft mistake have I made here? Here are the textures and the result drawn onto a quad: http://i39.tinypic.com/2q0msmq.png
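
For reference, a rough HLSL equivalent of that ps.1.1 program would look something like the sketch below; the sampler names are made up, and it assumes the data texture is bound to stage 0 and the palette to stage 1.

// Rough HLSL equivalent of the ps.1.1 program above (sampler names are illustrative).
sampler2D DataTex    : register(s0);   // the L16 data texture, stage 0
sampler2D PaletteTex : register(s1);   // the 256x256 palette, stage 1

float4 main(float2 uv : TEXCOORD0) : COLOR0
{
    float4 d = tex2D(DataTex, uv);       // tex t0
    float2 palUV = float2(d.a, d.r);     // texreg2ar uses (alpha, red) as the coordinates
    return tex2D(PaletteTex, palUV);     // dependent read into the palette
}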

Three things jump out at me right now:

1) Is there a reason you aren't using HLSL here? (You can compile to ps_1_1 with fxc, though you have to use the legacy compiler if you're working with a recent DirectX SDK.)

2) texreg2ar (I had to look at the MSDN docs for that!) uses alpha *and* red as the coordinates, and L16 only provides one or the other (I believe red, though it could be alpha), so using it like this is probably undefined behavior.

3) Assuming you are working on extremely limited hardware, you should probably just be doing an HSV transform in the shader and save all that texture memory :)

EDIT: Whoops, excuse me. What I was getting at for 3 is interpreting the luminance value as a 'hue' value with full saturation and arbitrary brightness and then transforming it back to RGB.
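
A minimal sketch of that idea, assuming ps_2_0-class hardware (the function and sampler names are made up; ps_1_1 arithmetic is too limited for this):

// Treat the sampled value as a hue with full saturation and fixed brightness,
// then convert back to RGB (standard hue-to-RGB formula).
sampler2D DataTex : register(s0);

float3 HueToRGB(float h)
{
    float r = abs(h * 6.0 - 3.0) - 1.0;
    float g = 2.0 - abs(h * 6.0 - 2.0);
    float b = 2.0 - abs(h * 6.0 - 4.0);
    return saturate(float3(r, g, b));
}

float4 main(float2 uv : TEXCOORD0) : COLOR0
{
    float h = tex2D(DataTex, uv).r;   // L16 comes back replicated in the colour channels
    return float4(HueToRGB(h), 1.0);
}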

I managed to solve the problem. The details are here:

http://forums.xna.com/forums/t/52714.aspx

It turns out that if I use .gb as (u, v) instead of .ar, it works. I'm not sure where I got the idea it was .ar - something I remembered from back in 2003 playing with my GeForce 3, probably a false memory....

I managed to hack some HLSL together (see the above link) by modding the .fx file from the basic HLSL tutorial in the SDK. Compiling for 1_1 is also a problem for me (I've got very limited experience of using shaders, most of which was gained years ago before HLSL was available). I suppose I just wanted the simplest method possible, given that the entire engine is fixed-function except for this one colour-space conversion.

So, how do you compile 1_1? Can you use HLSL without using effects? How do you deploy the HLSL as an embedded resource with C++?
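
One possible approach is sketched below using plain D3DX9 with no effects framework; the shader is kept as a string in the executable (rather than a true embedded resource), all names are illustrative, and this is untested. The legacy-compiler flag is what enables ps_1_x targets on recent SDKs, though the compile can still fail if the HLSL needs something the profile can't express.

// Untested sketch: compiling HLSL for ps_1_1 with D3DX9, no effects framework.
// Link with d3dx9.lib. Function and sampler names are made up.
#include <windows.h>
#include <d3dx9.h>

static const char g_hlsl[] =
    "sampler2D DataTex    : register(s0);            \n"
    "sampler2D PaletteTex : register(s1);            \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR0     \n"
    "{                                               \n"
    "    float4 d = tex2D(DataTex, uv);              \n"
    "    return tex2D(PaletteTex, d.gb);             \n"
    "}                                               \n";

IDirect3DPixelShader9* CompilePaletteShader(IDirect3DDevice9* device)
{
    ID3DXBuffer* code = NULL;
    ID3DXBuffer* errors = NULL;

    // ps_1_x targets need the legacy compiler on recent SDKs.
    HRESULT hr = D3DXCompileShader(g_hlsl, sizeof(g_hlsl) - 1,
                                   NULL, NULL, "main", "ps_1_1",
                                   D3DXSHADER_USE_LEGACY_D3DX9_31_DLL,
                                   &code, &errors, NULL);
    if (FAILED(hr))
    {
        if (errors)
        {
            OutputDebugStringA((const char*)errors->GetBufferPointer());
            errors->Release();
        }
        return NULL;
    }
    if (errors)
        errors->Release();

    IDirect3DPixelShader9* ps = NULL;
    device->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
    code->Release();
    return ps;
}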

Hmmmm, it seems I was wrong. It just looked like it worked. When I count the number of unique colours in the image it's 256! That's effectively a straight line through the palette, i.e. (val / 256), with the low byte of the L16 word being cut off.

To get it working, I've gone for an ARGB texture instead, filling in G and B with the two bytes of the 16-bit value. This gives me several thousand unique colours in the image, which is what I was expecting. I'm therefore not sure what the point of the L16 format is if I can't get the actual 16-bit value into the shader!

Either that or I'm still doing something wrong. I can't find any documentation anywhere that tells me what a sampled L16 pixel looks like or how it behaves when it comes into the shader.
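
For anyone reading later, the CPU side of that workaround boils down to something like this, assuming a D3DFMT_A8R8G8B8 texture (the helper name and the choice of which byte goes in which channel are illustrative):

// Pack one 16-bit sample into the G and B bytes of an A8R8G8B8 texel.
// In memory (little-endian) the byte order for A8R8G8B8 is B, G, R, A.
void PackSample(unsigned char* texel, unsigned short value)
{
    texel[0] = (unsigned char)(value & 0xFF);         // B = low byte
    texel[1] = (unsigned char)((value >> 8) & 0xFF);  // G = high byte
    texel[2] = 0;                                     // R unused
    texel[3] = 0xFF;                                  // A opaque
}

The shader then reads .g and .b back as two 0..1 values and uses them as the (u, v) lookup into the 256x256 palette, giving one palette entry per 16-bit value.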

Quote:
Original post by Burnhard
Hmmmm, it seems I was wrong. It just looked like it worked. When I count the number of unique colours in the image it's 256! That's effectively a straight line through the palette, i.e. (val / 256), with the low byte of the L16 word being cut off.

To get it working, I've gone for an ARGB texture instead, filling in G and B with the two bytes of the 16-bit value. This gives me several thousand unique colours in the image, which is what I was expecting. I'm therefore not sure what the point of the L16 format is if I can't get the actual 16-bit value into the shader!

Either that or I'm still doing something wrong. I can't find any documentation anywhere that tells me what a sampled L16 pixel looks like or how it behaves when it comes into the shader.


Sorry to vanish here. Anyway, I'd be willing to bet that DX8-class hardware doesn't offer support for formats with more than 8 bits of precision per channel, and that this limitation is emulated when working with SM1.1-SM1.4 in order to retain backwards compatibility. This is really just a best guess, as I don't have a whole lot of experience with pre-SM2.0 hardware. Have you tried experimenting with different shader models?

So in your opinion my ATI 5700 series card would be emulating this failure? I'm developing with a DX11 series card, using DX9 for older cards that support shader 1.1 or maybe shader 2.0 if 1.1 is out of the question.

Quote:
Original post by Burnhard
So in your opinion my ATI 5700 series card would be emulating this failure? I'm developing with a DX11 series card, using DX9 for older cards that support shader 1.1 or maybe shader 2.0 if 1.1 is out of the question.


Welcome to standardization, unfortunately. :\ Consistency is the goal here, and if the baseline spec says that things should be computed in low precision, that's how it's going to work. Obviously I can't *guarantee* that this is the problem, but it's not a bad guess. You could try getting fancy with splitting computations across multiple color channels in your shader(s) and working from there, i.e. the classic encode-depth-as-color trick.
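
A minimal sketch of that pack/unpack idea (the usual pair used for encoding depth as colour; names are made up and it assumes ps_2_0 or later):

// Spread a [0, 1) value across two 8-bit channels and recombine it later.
float2 EncodeTo2x8(float v)
{
    float2 enc = frac(v * float2(1.0, 255.0));
    enc.x -= enc.y / 255.0;   // remove the part carried by the low channel
    return enc;
}

float DecodeFrom2x8(float2 enc)
{
    return dot(enc, float2(1.0, 1.0 / 255.0));
}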

