Gumgo

Posted 15 October 2012 - 03:55 PM

I'm already using RGBA16F textures for my framebuffer (for HDR), and I'm trying to pack normal and depth into one texture. Right now I'm putting nx and ny into r and g, and it would be nice if I could put (linear depth) * sign(nz) into b and a.
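To make the layout concrete, here's roughly what I have in mind (untested sketch; encodeNormalDepth, decodeNormal, packFloatToTwoHalves and unpackTwoHalvesToFloat are just placeholder names, and the pack/unpack bodies are the bit-twiddling discussed below):
[source lang="plain"]
// rough sketch of the layout (untested, placeholder names)
// rgba16f target: r = n.x, g = n.y, b+a = the two halves of linearDepth * sign(n.z)
vec4 encodeNormalDepth( vec3 n, float linearDepth )
{
    // the sign of the stored depth carries the sign of n.z
    vec2 depthHalves = packFloatToTwoHalves( linearDepth * sign( n.z ) ); // the bit-splitting code below
    return vec4( n.xy, depthHalves );
}

vec3 decodeNormal( vec4 texel, out float linearDepth )
{
    float signedDepth = unpackTwoHalvesToFloat( texel.ba );
    linearDepth = abs( signedDepth );
    // reconstruct n.z from the unit-length constraint, taking its sign from the depth
    // (ignores the degenerate linearDepth == 0 case)
    float nz = sqrt( max( 0.0, 1.0 - dot( texel.xy, texel.xy ) ) ) * sign( signedDepth );
    return vec3( texel.xy, nz );
}
[/source]
The idea is that the sign of the stored depth doubles as the sign of nz, so the full normal can be rebuilt from just two channels.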

EDIT: I found a paper describing how to handle the special-value cases when converting to and from 16-bit floats. Using that information, I modified my code (and also fixed the bias issue):
[source lang="plain"] // first we interpret as integers so we can work with the bits uint bits = floatBitsToUint( normalZDepth ); uvec2 parts = uvec2( bits >> 16, // the higher 16 bits bits & 0x0000ffff ); // the lower 16 bits // each component's lower 16 bits now contain the lower and higher bits from the original value // we want these bits to remain the same when put in 16-bit floats. // we do this by putting these into normal 32-bit floats such that when these // 32-bit floats are converted into 16-bit floats, the important bits will be all that remain // 32-bit float: [ 1 (sign) | 8 (exponent) | 23 (mantissa) ] bias = 127 // 16-bit float: [ 1 (sign) | 5 (exponent) | 10 (mantissa) ] bias = 15 // the full conversion is: // int16 ==> float ==> half ==> float ==> int16 // therefore, we must ensure that the set of "important" bits in each representation remains unchanged // the following cases can occur: // int16 ==> float ==> half ==> float ==> int16 // inf/NaN: s 11111 mmmmmmmmmm ==> s 11111111 mmmmmmmmmm0000000000000 ==> s 11111 mmmmmmmmmm ==> s 11111111 mmmmmmmmmm0000000000000 ==> s 11111 mmmmmmmmmm // zero/denorm: s 00000 mmmmmmmmmm ==> s 00000000 mmmmmmmmmm0000000000000 ==> s 00000 mmmmmmmmmm ==> s 00000000 mmmmmmmmmm0000000000000 ==> s 00000 mmmmmmmmmm // normal: s eeeee mmmmmmmmmm ==> s EEEEEEEE mmmmmmmmmm0000000000000 ==> s eeeee mmmmmmmmmm ==> s EEEEEEEE mmmmmmmmmm0000000000000 ==> s eeeee mmmmmmmmmm // note that in the normal case, the exponent is always in the range [-14,15] // (EEEEEEEE represents the exponent after adjusted bias, which is still in the proper range [-14,15]) // in all cases, the sign bit and the mantissa remain unmodified // therefore, we first copy them directly over uvec2 floatBits = ((parts & 0x8000) << 16) | ((parts & 0x03FF) << 13); // now we deal with the different cases (0x7C00 is the 16-bit mantissa mask, 0x7F800000 is the 32-bit mantissa mask) uvec2 exp16 = floatBits & 0x7C00; // simplified ((exp16 >> 10) - 15 + 127) << 23 uvec2 exp32 = (exp16 - 0x1C00) << 13; bvec2 expIs00000 = equal( exp16, uvec2( 0, 0 ) ); bvec2 expIs11111 = equal( exp16, uvec2( 0x7C00, 0x7C00 ) ); if (expIs00000.x) exp32.x = 0; if (expIs00000.y) exp32.y = 0; if (expIs11111.x) exp32.x = 0x7F800000; if (expIs11111.y) exp32.y = 0x7F800000; floatBits |= exp32; // now just interpret as float - ready to be stored as half floats vec2 halfBits = uintBitsToFloat( floatBits );[/source]
Untested, but in theory this should work, provided the hardware actually abides by these rules. Does the OpenGL 3.3 spec define how these conversions should occur? Currently trying to find out...
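For reference, the decode side would look something like this (also untested; unpackTwoHalvesToFloat is just a placeholder name, and it assumes the texture fetch hands back exactly the float32 form of each stored half):
[source lang="plain"]
// untested sketch of the inverse: take the two values read back from the
// rgba16f texture and reassemble the original 32-bit float
float unpackTwoHalvesToFloat( vec2 halves )
{
    uvec2 floatBits = floatBitsToUint( halves );
    // sign: bit 31 -> bit 15, mantissa: bits 13..22 -> bits 0..9
    uvec2 parts = ((floatBits >> 16) & 0x8000u) | ((floatBits >> 13) & 0x03FFu);
    // exponent: undo the bias adjustment, special-casing all-zeros and all-ones
    uvec2 exp32 = floatBits & 0x7F800000u;
    uvec2 exp16 = (exp32 >> 13) - 0x1C000u; // simplified ((exp32 >> 23) - 127 + 15) << 10
    if (exp32.x == 0u)          exp16.x = 0u;
    if (exp32.y == 0u)          exp16.y = 0u;
    if (exp32.x == 0x7F800000u) exp16.x = 0x7C00u;
    if (exp32.y == 0x7F800000u) exp16.y = 0x7C00u;
    parts |= exp16;
    // x held the high 16 bits of the original value, y the low 16 bits
    return uintBitsToFloat( (parts.x << 16) | parts.y );
}
[/source]
One thing I'm not sure about: if the driver flushes half denormals to zero anywhere along the way, the zero/denorm row in the table above loses its mantissa bits, so that's another behavior the spec would need to pin down.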

