
hex representation of floating-point numbers

moeron    326
I'm not really familiar with the Boost libraries, but I've been kind of stumped on an elegant way to dump a hexadecimal representation of a floating-point variable to a stream. I would need the full representation (leading zeros, etc.). Does such a thing exist somewhere in Boost, or somewhere else? Thanks in advance, moe.ron

Conner McCloud    1135
Quote:
Original post by moeron
I'm not really familiar with the Boost libraries, but I've been kind of stumped on an elegant way to dump a hexadecimal representation of a floating-point variable to a stream. I would need the full representation (leading zeros, etc.). Does such a thing exist somewhere in Boost, or somewhere else?

Thanks in advance
moe.ron

I'm not familiar with Boost either, but if you're careful with sizes you can do some cast trickery to get part of the way there.

double x = 5.6;
long long xPrime = *((long long*)&x); //this works in VC.NET2003
cout << hex << xPrime << endl;

That won't print leading zeros, but writing a function that prints all 8 bytes of a long long is pretty easy.
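Something like the following would do it (a rough, untested sketch; it just pads with setfill/setw so the zeros survive, and the function name is mine):

#include <iostream>
#include <iomanip>

// Print a 64-bit pattern as 16 hex digits, keeping every leading zero.
void PrintHex64(long long value)
{
    std::cout << std::hex << std::setw(sizeof(value) * 2) << std::setfill('0')
              << value << std::endl;
}

int main()
{
    double x = 5.6;
    long long xPrime = *((long long*)&x); // same cast trick as above
    PrintHex64(xPrime);                   // should print 4016666666666666 for IEEE-754 doubles
    return 0;
}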

*edit: You could also use a union:

union DoubleBytes
{
    double DoubleRep;
    unsigned char ByteRep[sizeof(double)]; // unsigned, so bytes >= 0x80 don't sign-extend when printed
};

double x = 5.6;
long long xPrime = *((long long*)&x);
DoubleBytes bytes;
bytes.DoubleRep = x;
cout << hex << xPrime << endl;
for(int i = sizeof(double) - 1; i >= 0; --i) // start at the last byte, not one past the end
    cout << hex << (int)bytes.ByteRep[i]; // cast so the byte prints as a number, not a character
cout << endl;


*edit2:
Just noticed that you can still lose zeros, but they're in the middle now. If there's a 0x09 in the middle, only '9' gets printed. But it shouldn't be too hard to catch that. Also, you can template that union so it'll work for more than just doubles.
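For example, the templated version might look something like this (an untested sketch; it relies on the same union trick of reading the member you didn't write, which isn't strictly standard but works on VC and most compilers, and adds setw/setfill so the middle zeros don't disappear):

#include <iostream>
#include <iomanip>

// Byte-dump any POD type in hex, most significant byte first on little-endian machines.
template <typename T>
union TypeBytes
{
    T Value;
    unsigned char ByteRep[sizeof(T)];
};

template <typename T>
void PrintHexBytes(const T &value)
{
    TypeBytes<T> bytes;
    bytes.Value = value;
    for (int i = sizeof(T) - 1; i >= 0; --i)
        std::cout << std::hex << std::setw(2) << std::setfill('0')
                  << (int)bytes.ByteRep[i];
    std::cout << std::endl;
}

int main()
{
    PrintHexBytes(5.6);  // double
    PrintHexBytes(5.6f); // float works too
    return 0;
}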

CM

[Edited by - Conner McCloud on June 3, 2005 7:14:54 PM]

MaulingMonkey    1730
*cracks fingers*

Time to do some major code abuse...:

float f = 1.2f;
std::cout << ((void *)*((int *)&f)) << std::endl;


Of course, this is taking advantage of the fact that std::cout prints (void *) in 0x12345678 notation, which won't work so well with double.

Okay, let me bang up a boost::format example real quick...

...
...
...

Here we go:
// needs <boost/format.hpp> (boost::format, boost::io::group) and <iomanip>, plus the matching using directives
double d = 987.0;
cout << format( "%1%%2%" )
    % group( setfill('0') , hex , setw(sizeof(int)*2) , ((int *)&d)[0])
    % group( setfill('0') , hex , setw(sizeof(int)*2) , ((int *)&d)[1])
    << endl;

Result:
00000000408ed800


That look right?!? Switching the value to 986.0 didn't seem to affect the output, although switching it to 100 did... I may have part of the mantissa but not the rest... This assumes 1 char == 2 hex digits (hence the *2) and that sizeof(double) == 2*sizeof(int). You've been warned!!!
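If you'd rather have the program yell when those assumptions break instead of relying on a comment, a couple of cheap checks will do (a minimal untested sketch):

#include <cassert>
#include <climits>

int main()
{
    // The boost::format snippet above assumes both of these.
    assert(CHAR_BIT == 8);                     // one char == two hex digits
    assert(sizeof(double) == 2 * sizeof(int)); // a double splits into exactly two ints
    return 0;
}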

pragma Fury    343
I'm partial to printf .. though I realize it's not C++..

float f = 1.2345f;
printf("0x%08lX", *(unsigned long*)&f); // assumes a 32-bit long


or:
union FloatHex {
    float fVal;
    unsigned long hexVal; // an integer the same size as the float, not a pointer
};

FloatHex hex;
hex.fVal = 1.2345f;

printf("0x%08lX", hex.hexVal);

// or if you'd rather use cout...
char szHex[9];
memset(szHex,0,9);
_snprintf(szHex, 8, "%08lX", hex.hexVal);

cout << "0x" << szHex << endl;


Both will output 0x3F9E0419 (with leading zeros, if there were any)


EDIT: Or, if we're dealing with doubles:

union DoubleHex {
    unsigned __int64 hexVal;
    double dVal;
};

DoubleHex hex;
hex.dVal = 1.2345;

printf("0x%016I64X", hex.hexVal);


Which outputs 0x3FF3C083126E978D.
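If you're not on VC++ (or just want to avoid the I64 format specifier), a more portable sketch of the same idea uses memcpy and iostreams instead; it assumes unsigned long long is 8 bytes, like the double:

#include <iostream>
#include <iomanip>
#include <cstring>

int main()
{
    double d = 1.2345;
    unsigned long long bits = 0;
    std::memcpy(&bits, &d, sizeof(d)); // copy the raw bytes instead of casting
    std::cout << "0x" << std::hex << std::uppercase
              << std::setw(16) << std::setfill('0') << bits << std::endl;
    // should print 0x3FF3C083126E978D, same as the printf version
    return 0;
}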

pragma Fury    343
Quote:
Original post by MaulingMonkey
*cracks fingers*

...

Here we go:
double d = 987.0;
cout << format( "%1%%2%" )
% group( setfill('0') , hex , setw(sizeof(int)*2) , ((int *)&d)[0])
% group( setfill('0') , hex , setw(sizeof(int)*2) , ((int *)&d)[1])
<< endl;

Result:
00000000408ed800


That look right?!? Switching the value to 986.0 didn't seem to affect the output, although switching it to 100 did... I may have part of the mantissa but not the rest... This assumes 1 char == 2 hex digits (hence the *2) and that sizeof(double) == 2*sizeof(int). You've been warned!!!


The hex dump for double d = 987.0 SHOULD be 0000000000D88E40, so your low 4 bytes are coming out reversed (and probably the upper ones too).

I'm assuming that's the correct hex, 'cause that's what was in buff in this test code:

unsigned char buff[8];
double d = 987.0;
memcpy(buff, &d, 8);


Of course, my code doesn't work right either.. trying to figure it out.

Edit:
Ok, this ain't pretty.. but I think it works...
union DoubleHex {
    unsigned char buff[8];
    double dVal;
};

DoubleHex dh;
dh.dVal = 987.0;

for(int n=0; n < 8; n++)
    printf("%02X", dh.buff[n]);


Another Edit:
Ah! Of course.. it's the whole little-endian issue. Reinterpreting double(987.0) as an unsigned __int64, unsigned int[2], unsigned short[4], and unsigned char[8] (printing the pieces in array order) will yield the following hex dumps; the little test program below reproduces them:
unsigned __int64: 408ED80000000000
two ints ([0] then [1]): 00000000408ED800
4 shorts ([0]..[3]): 00000000D800408E
8 bytes ([0]..[7]): 0000000000D88E40
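Here's that test program (a sketch; it uses the same VC-specific __int64 / %I64X and reinterpret-cast tricks as the rest of the thread):

#include <cstdio>

int main()
{
    double d = 987.0;

    // View the same 8 bytes at four different widths and print the pieces
    // in array order; the differences are purely the little-endian layout.
    const unsigned __int64 *as64 = (const unsigned __int64 *)&d;
    const unsigned int *as32 = (const unsigned int *)&d;
    const unsigned short *as16 = (const unsigned short *)&d;
    const unsigned char *as8 = (const unsigned char *)&d;

    printf("%016I64X\n", as64[0]);
    printf("%08X%08X\n", as32[0], as32[1]);
    printf("%04X%04X%04X%04X\n", as16[0], as16[1], as16[2], as16[3]);
    for (int n = 0; n < 8; ++n)
        printf("%02X", as8[n]);
    printf("\n");
    return 0;
}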


[Edited by - pragma Fury on June 4, 2005 11:11:01 AM]

pragma Fury    343
Quote:
Original post by MaulingMonkey
Quote:
Original post by pragma Fury
I'm partial to printf .. though I realize it's not C++..


It's C++, it's just not type-safe, object-safe, net-safe, not clean :-p. Then again, neither are the C++ stunts I'm pulling... *whistles innocently*


Hehe.. that's what I love about C++. It's so hard to abuse the newer high-level languages.

.. I usually refer to printf as a C function, as it's in the C runtime libraries.

pragma Fury    343
K.. After researching this a bit more, I've come to the conclusion that my initial result was correct:


union DoubleHex {
    double dVal;
    unsigned __int64 hexVal;
};

DoubleHex hex;
hex.dVal = 987.0;

printf("%016I64X", hex.hexVal);

// OR

double d = 987.0;
printf("%016I64X", *(__int64*)&d);



This will output 408ED80000000000, and if we break it down according to the IEEE-754 standard, we get the binary representation:

0100000010001110110110000000000000000000000000000000000000000000

Bit 63 is the sign bit, and it's 0, so our number is positive.
Bits 52-62 represent the exponent field, and are "10000001000" (1032).
That value minus the bias of 1023 gives 9.

Bits 0-51 contain the mantissa/significand
"1110110110000000000000000000000000000000000000000000"
This can be converted into a decimal value by using "(-1)^sign_bit * 1.mantissa * 2^exp"

So, (-1)^0 * (2^0 + 2^-1 + 2^-2 + 2^-3 + 2^-5 + 2^-6 + 2^-8 + 2^-9) * 2^9
= 2^9 + 2^8 + 2^7 + 2^6 + 2^4 + 2^3 + 2^1 + 2^0
= 512 + 256 + 128 + 64 + 16 + 8 + 2 + 1
= 987

VOILA!
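For completeness, the same decomposition can be done by the machine. A quick sketch (VC-style __int64 again; it extracts the three fields and rebuilds the value with ldexp):

#include <cstdio>
#include <cmath>

int main()
{
    double d = 987.0;
    unsigned __int64 bits = *(unsigned __int64 *)&d; // same cast trick as above

    int sign     = (int)(bits >> 63);                                     // bit 63
    int exponent = (int)((bits >> 52) & 0x7FF) - 1023;                    // bits 52-62, minus the bias
    unsigned __int64 mantissa = bits & (((unsigned __int64)1 << 52) - 1); // bits 0-51

    // value = (-1)^sign * 1.mantissa * 2^exponent
    double fraction = 1.0 + (double)(__int64)mantissa / (double)((__int64)1 << 52);
    double rebuilt  = ldexp(sign ? -fraction : fraction, exponent);

    printf("sign=%d  exponent=%d  mantissa=%013I64X\n", sign, exponent, mantissa);
    printf("rebuilt = %f\n", rebuilt); // should print 987.000000
    return 0;
}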

moeron    326
sorry for the delayed response, I just got back to work this morning =)
Thanks a lot for all of your ideas, I'm going to get cracking on them now, I'll let you all know how it turns out!


moe.ron

Nemesis2k2    1045
For future reference, it's a lot easier to proof-check this kind of stuff by firing up a decent hex editor, typing in a floating-point number, and comparing the result. Let your computer work out the floating-point notation; don't do it by hand.
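Or let the program do the comparison itself: paste in the hand-computed bit pattern, reinterpret it as a double, and see what number comes out (a quick sketch, using the same VC-style __int64 and cast trick as the rest of the thread):

#include <cstdio>

int main()
{
    unsigned __int64 bits = 0x408ED80000000000; // hand-computed pattern (VC may want a ui64 suffix)
    double d = *(double *)&bits;                // reinterpret the bits as a double
    printf("%.17g\n", d);                       // should print 987 if the pattern was right
    return 0;
}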

pragma Fury    343
Quote:
Original post by Nemesis2k2
For future reference, it's a lot easier to proof-check this kind of stuff by firing up a decent hex editor, typing in a floating-point number, and comparing the result. Let your computer work out the floating-point notation; don't do it by hand.


Ah, but that would have been too easy, wouldn't it?

I could have also simply looked at the .asm generated by the compiler.

Besides, I learned so much more this way [smile]

