InetRoadkill

CG packing 2 half types into a color type


Is there a trick to packing a half floating-point type into a COLOR semantic? I'm trying to write data to the frame buffer as packed half types, but instead of packing the data, the GPU seems to be converting the half into a float and clamping it to [0, 1].

By COLOR you mean the output from a fragment shader, right?

Normally that's an RGB(A)8 value, 3 or 4 bytes, clamped to 0..1 indeed. If you want to store floating-point values, you can try an FBO (Framebuffer Object in OpenGL; I don't know the DirectX name). Using an FBO is like drawing on a texture instead of directly to the screen. One of the nice things about FBOs is that you can make them 16 or even 32 bits per channel (HDR). For example, you could use the 16-bit floating-point format. This changes nothing in your Cg shader, by the way.

If you look for another way, you might need to sacrifice accuracy. It depends on how many channels your COLOR output has, and how many of its bytes are available for your data. In case you only want to store that one value, I'd suggest you spread it out over the 4 bytes. In that case you won't lose any accuracy at all.
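To illustrate the idea of spreading one value over multiple byte channels, here is a rough CPU-side sketch (not the actual Cg helpers; the function names are made up, and it assumes the value is already normalized to [0, 1]):

```c
#include <stdint.h>

/* Scale a [0, 1] value to 16-bit fixed point and spread it over two
   byte-sized color channels (e.g. red = high byte, green = low byte). */
void pack_0to1_into_2bytes(float v, uint8_t *hi, uint8_t *lo)
{
    uint32_t fixed = (uint32_t)(v * 65535.0f + 0.5f); /* round to 16 bits */
    *hi = (uint8_t)(fixed >> 8);
    *lo = (uint8_t)(fixed & 0xFFu);
}

/* Reassemble the two channels back into a [0, 1] float. */
float unpack_2bytes_to_0to1(uint8_t hi, uint8_t lo)
{
    return (float)(((uint32_t)hi << 8) | lo) / 65535.0f;
}
```

The round trip loses at most 1/65535, which is why spreading one value over more channels costs essentially no accuracy.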

Cg has functions to pack/unpack a number into 2 or 4 values (the bytes). But... I don't remember the name anymore... 'unpack4' maybe? If I'm correct, it's in the Cg manual. So, if you render in RGBA mode, you can store the first byte in the red channel, the second in the green, and so on. In case you have only 2 bytes available (maybe you want to store more stuff in the same buffer), you first need to scale your number to a 16-bit value, then unpack it to 2 bytes. See below for some info about scaling.
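On the original half-packing question: Cg's standard library has pack/unpack helpers for halves (pack_2half and friends, if memory serves), and their effect is roughly the bit-level operation below. This is a simplified CPU-side sketch (the function names are made up for illustration; a real half conversion also rounds and handles NaN/denormals properly):

```c
#include <stdint.h>
#include <string.h>

/* Convert a 32-bit float to IEEE 754 half-float bits.
   Simplified: truncates the mantissa, flushes denormals to zero,
   and maps out-of-range exponents to infinity. */
uint16_t float_to_half_bits(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                         /* type-pun safely */
    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15; /* rebias */
    uint16_t mant = (uint16_t)((bits >> 13) & 0x3FFu);      /* top 10 bits */
    if (exp <= 0)  return sign;                             /* underflow -> 0  */
    if (exp >= 31) return (uint16_t)(sign | 0x7C00u);       /* overflow -> inf */
    return (uint16_t)(sign | (uint16_t)(exp << 10) | mant);
}

/* Pack two halves into one 32-bit word, like a two-channel 16F pixel. */
uint32_t pack_2half_bits(float a, float b)
{
    return (uint32_t)float_to_half_bits(a)
         | ((uint32_t)float_to_half_bits(b) << 16);
}
```

The catch in the original question is that this only survives to the framebuffer if the render target itself stores raw 16-bit channels; writing it to a normalized RGBA8 COLOR output converts and clamps it, as observed.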


If you only have 2 bytes, or even 1, you need to compress your number. Say, for example, you want to store numbers that can go from -1000.0 to +1000.0 in a single byte:

compressedValue = (value + offset) * (maxDataSize / range)
compressedValue = (value + 1000) * (256 / 2000)
// Example, store -460.85
compressedValue = (-460.85 + 1000) * (256 / 2000) = 69.01 = 69 rounded

In this example I scale the number to 0..255, but maybe you need to scale it between 0 and 1 instead. Anyway, later on you can decompress your data again:

decompressed = compressedValue / (maxDataSize / range) - offset
decompressed = compressedValue / (256 / 2000) - 1000
// Example, -460.85 was compressed to 69.01
decompressed = 69 / (256 / 2000) - 1000 = -460.94

The lower the range, and/or the more bytes you have to store it in, the more accurate the value, of course.
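The compress/decompress formulas above, written out as a small sketch (same -1000..+1000 range and single-byte target; the cast truncates, which here gives the same 69 as the rounded example):

```c
/* Compress a value in [-1000, +1000] into a single byte (0..255). */
unsigned char compress_value(float value)
{
    return (unsigned char)((value + 1000.0f) * (256.0f / 2000.0f));
}

/* Undo the compression; accurate only to range/256, i.e. about 7.8. */
float decompress_value(unsigned char packed)
{
    return (float)packed / (256.0f / 2000.0f) - 1000.0f;
}
```

Round-tripping -460.85 gives back about -460.94, the same ~0.1 error as in the worked example above.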

Greetings,
Rick

I changed the program to use an FBO, but I'm still having trouble getting things to work. I've tried this:

glTexImage2D(GL_TEXTURE_2D, 0, 4,
             128, 128, 0, GL_LUMINANCE_ALPHA, GL_FLOAT, NULL);

which almost works, but it looks like the internal format is still 8-bit ints being converted to floats. I haven't been able to find an internal format that doesn't cause OpenGL to barf when creating the FBO, so I'm using a generic internal format of "4". Any suggestions on how to get OpenGL to use floats (either 16- or 32-bit) for the internal texture format?

>>glTexImage2D(GL_TEXTURE_2D, 0, 4,
128, 128, 0, GL_LUMINANCE_ALPHA, GL_FLOAT, NULL);

You use format '4'? Maybe you'd better try one of these:
GL_RGBA16F_ARB = 0x881A;
GL_RGB16F_ARB = 0x881B;
GL_ALPHA16F_ARB = 0x881C;

Also keep in mind that only the RGBA format works. Well, maybe that's not true anymore for modern video cards / OpenGL versions, but the original nVidia/OpenGL paper told me that only the RGBA format was supported so far. Unfortunate, since we don't always need all the channels, of course.

I usually create my FBO textures like this:

glTexImage2D( GL_TEXTURE_2D, 0,  // target, level
              GL_RGBA16F_ARB,    // internal format
              width, height,     // size
              0,                 // border
              GL_RGBA, GL_FLOAT, // pixel format and type
              NULL );            // no initial data

This should produce a blank texture in the 16-bit floating-point format. Now you can use this texture as a render target.

Greetings,
Rick

Well, it does look like the LUMINANCE_ALPHA floating-point format is not supported yet... at least not on my system. The error being returned is

GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT (0x8CD6)

I'm not sure what that means.

Quote:
Original post by InetRoadkill
Well it does look like the LUMINANCE_ALPHA floating point is not supported yet... at least not on my system. The error being returned is

GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT (0x8CD6)

I'm not sure what that means.
From the FBO spec
Quote:
The framebuffer attachment point <attachment> is said to be "framebuffer attachment complete" if the value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT for <attachment> is NONE (i.e., no image is attached), or if all of the following conditions are true:
...
* If <attachment> is one of COLOR_ATTACHMENT0_EXT through COLOR_ATTACHMENTn_EXT, then <image> must have a color-renderable internal format.
...
So if changing the texture's internal format affects whether you receive that error, it seems that LUMINANCE_ALPHA is not yet a color-renderable internal format on your card.

And just so you know, those floating-point texture constants spek mentioned come from the GL_ARB_texture_float extension.

Well, I'm not sure what's going on, but I guess I'll just go with the RGB floats. It's strange though, because glView says the floating-point LUMINANCE_ALPHA format is a supported extension for my card.

FBO behavior can still be kind of wonky. On nVidia's cards (last I checked), the only way to get a depth and stencil buffer attached to an FBO was to use a packed depth-stencil (24-bit depth, 8-bit stencil) texture, so strictly speaking nVidia cards don't implement the full FBO spec, but for most uses it's good enough. (glGenerateMipmapEXT is broken for the floating-point formats as well.) My point being: don't blindly trust the extension string when it comes to FBO support; be wary.

The RGB float is working fine. It would have been nice to save some texture memory with the other format, but oh well.

My next question is whether it's possible to turn off the clamping on the colors. I seem to remember seeing somewhere that there was a way to get unclamped floating-point colors.


EDIT:
Nevermind. I found it:

glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);

Boy, is that little function handy!
