
# Decimal to Binary Conversion


No replies to this topic

### #1 Rattrap (Members, Reputation: 2134)
Posted 30 March 2012 - 12:48 PM

I'm posting this in the General Programming Forums, but it may really belong in the Math Forum...

While working on a project involving reading GIS data from a file, I somehow got myself side-tracked into writing some functions that convert native floating-point values to their binary IEEE 754 equivalents (without taking the floating-point variable's pointer and casting it to a matching-size integer pointer).

So I started researching the conversion process, and found this website with a procedure.

My code starts by separating the integral and fractional parts using the C/C++ modf function.

```cpp
/*
 *	 FloatType is a template param = Input Type
 *	 UnsignedIntegerType is a template param = Output Type
 *	 MantissaBitCount is a template param stating the number of bits to use to store the mantissa in the output
 *	 ExponentBitCount is a template param stating the number of bits to use to store the exponent in the output
 *	 FindMostSignificantBit is a function that returns the highest flagged significant bit in an integer.
 *
 *	 MantissaPivot will eventually be translated to the exponent.
 *
 *	 Unlike the example, I am multiplying by 256 (instead of 2), allowing 8 bits to be extracted after each iteration.
 */

const FloatType Zero(0);
const FloatType One(1);
const FloatType Factor(256);
const UnsignedIntegerType FactorSize(8);
int MantissaSize(-1);
int MantissaPivot(-1);

/* Used to test for infinity */
const bool GreaterThanZero(Input > Zero);
/* Used to test for overflow */
const bool GreaterThanOne(Input > One);

UnsignedIntegerType Mantissa(0);
do
{
	Mantissa <<= FactorSize;

	FloatType Integral;
	const FloatType Fractional(std::modf(Input, &Integral));
	/* This fails with large Integrals */
	const UnsignedIntegerType ToMantissa(static_cast<UnsignedIntegerType>(Integral));

	if(MantissaSize == -1)
	{
		if((ToMantissa == 0) && (GreaterThanOne == true))
		{
			/* Number was too large to cast to an int */
		}

		/* Initial length */
		const int MostSignificantBit(FindMostSignificantBit(ToMantissa));
		MantissaSize = MostSignificantBit;
		/* Number of bits before the most significant bit */
		MantissaPivot = (MostSignificantBit > 0) ? (MostSignificantBit - 1) : 0;
	}
	else
	{
		/* I don't know what to call this other than non-initial */
		if((Mantissa == 0) && (ToMantissa != 0))
		{
			const int MostSignificantBit(FindMostSignificantBit(ToMantissa));
			/*
			 *	 Number of bits before the most significant bit.
			 *	 At this point, we are dealing with the fractional part and a
			 *	 negative exponent.  This means we start counting from the
			 *	 highest possible bit and count the number of bits between it
			 *	 and the most significant bit, including the most significant
			 *	 bit itself.
			 */
			MantissaPivot -= (static_cast<int>(FactorSize) - MostSignificantBit + 1);
			/*
			 *	 Don't count the leading 0's, since there were no significant
			 *	 values stored in the mantissa yet.
			 */
			MantissaSize += MostSignificantBit;
		}
		else
		{
			if(Mantissa == 0)
			{
				MantissaPivot -= FactorSize;
			}
			else
			{
				MantissaSize += FactorSize;
			}
		}
	}
	Mantissa |= ToMantissa;

	Input = Fractional * Factor;
}
while((MantissaSize < MantissaBitCount) && (Input > Zero));
// More code follows for constructing the binary after it has been parsed into the mantissa and exponent
```



As on the example website, I try taking the Integral part and converting it to a native integer. The problem I'm having is when the Integral part of the float is larger than what can be stored in the output integer type. Casting to a larger type is not really an option, since the code already supports doubles and 64-bit integers. Any ideas on how to convert these large values into a usable mantissa and exponent without any non-standard cheats?

"I can't believe I'm defending logic to a turing machine." - Kent Woolworth [Other Space]