I've been implementing my own big integer class in C++, and I've been trying to add the Karatsuba algorithm as an alternate multiplication method once the bit length reaches a certain threshold. My current implementation gets the first 3 or 4 elements of the result correct, but the values after that are incorrect.
I've tried reproducing the algorithm in C# with .NET's BigInteger and I've been getting the same results, so I believe my actual math functions (Multiply, Add, Subtract, LeftShift) are correct; I'm guessing I'm doing something wrong in the algorithm itself. I'm hoping another set of eyes might spot something I missed.
My big integer class stores binary values in vectors of unsigned integers, with the lowest-order bits at the beginning of the array (for example, the value 2^32 + 5 is stored as the two elements {5, 1}). All values are stored as positive magnitudes, along with a separate sign bit. Since the signs don't actually affect multiplication, the function below is not concerned with them.
typedef std::vector<std::uint32_t> storage_t;
typedef storage_t::value_type T; /* element type; used below when converting element counts to bit counts */

inline static void AlternateMultiplication(const storage_t& Lhs, const storage_t& Rhs, storage_t& Product)
{
    /* Get the size of each input */
    const auto LhsSize(Lhs.size());
    const auto RhsSize(Rhs.size());

    /* If one of the inputs is a single element long, use regular multiplication */
    if((LhsSize <= 1) || (RhsSize <= 1))
    {
        /* Multiplication ignoring the signs. First two parameters are inputs by const
           reference, 3rd parameter is the output by reference */
        AbsoluteMultiplication(Lhs, Rhs, Product);
        return;
    }

    /* Find the length of the longest input */
    const auto LongestSize(std::max(LhsSize, RhsSize));

    /* Find half of the longest input. The +1 makes it round up */
    const auto HalfLongestSize((LongestSize + 1) / 2);

    /* Holdover from messing with the algorithm at the moment */
    const auto HalfLongestSizeSafe(HalfLongestSize);

    /* Iterators marking where the splits will occur */
    const auto LhsBegin(std::begin(Lhs));
    const auto LhsEnd(std::end(Lhs));
    const auto LhsMid(LhsSize >= HalfLongestSize ? LhsBegin + HalfLongestSize : LhsEnd);
    const auto RhsBegin(std::begin(Rhs));
    const auto RhsEnd(std::end(Rhs));
    const auto RhsMid(RhsSize >= HalfLongestSize ? RhsBegin + HalfLongestSize : RhsEnd);

    /* Vectors holding copies of the split values */
    storage_t LhsLow(LhsBegin, LhsMid);
    storage_t LhsHigh(LhsMid, LhsEnd);
    storage_t RhsLow(RhsBegin, RhsMid);
    storage_t RhsHigh(RhsMid, RhsEnd);

    /* The three Karatsuba sub-products */
    storage_t Z0, Z1, Z2;
    AlternateMultiplication(LhsLow, RhsLow, Z0);
    AlternateMultiplication(LhsHigh, RhsHigh, Z2);
    AbsoluteAddition(LhsHigh, LhsLow, LhsHigh);
    AbsoluteAddition(RhsHigh, RhsLow, RhsHigh);
    AlternateMultiplication(LhsHigh, RhsHigh, Z1);

    /* Just to be safe, tracking the sign of Z1, since subtraction is involved
       (true == positive in my class's convention) */
    bool Z1Sign(true), Z2Sign(true), ProductSign(true);
    Subtraction(Z1Sign, Z1, true, Z2, Z1Sign, Z1);
    Subtraction(Z1Sign, Z1, true, Z0, Z1Sign, Z1);

    /* Left shift doesn't affect the sign. Convert element lengths to numbers of bits to shift */
    AbsoluteLeftShift(Z1, (HalfLongestSize * (sizeof(T) * CHAR_BIT)), Z1);
    AbsoluteLeftShift(Z2, (LongestSize * (sizeof(T) * CHAR_BIT)), Z2);

    Addition(Z2Sign, Z2, Z1Sign, Z1, Z2Sign, Z2);
    Addition(Z2Sign, Z2, true, Z0, ProductSign, Product);
}
In the pseudocode on Wikipedia, the example is in base 10, whereas my code works in binary. If I had to guess, the issue is in the left shifting, or perhaps I need to keep breaking each element down into single bytes or smaller, since this is base-2 math.
As you can see, the first 3 elements are correct, as is all but the highest-order byte of the 4th. The behavior is similar if the inputs get bigger; if the inputs are smaller, the answer comes out correct.
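To double-check my understanding of the identity itself, here's a minimal sketch of Karatsuba on plain integers (a 16-bit split of 32-bit values, purely illustrative and separate from my class). The detail I keep staring at is the recombination: Z2 is shifted by 2*m digits and Z1 by m, where m is the split position:

#include <cstdint>
#include <iostream>

int main()
{
    const std::uint32_t Lhs = 0xDEADBEEF, Rhs = 0xCAFEBABE;
    const unsigned m = 16; /* split position in bits */

    const std::uint64_t LhsLow = Lhs & 0xFFFF, LhsHigh = Lhs >> m;
    const std::uint64_t RhsLow = Rhs & 0xFFFF, RhsHigh = Rhs >> m;

    const std::uint64_t Z0 = LhsLow * RhsLow;
    const std::uint64_t Z2 = LhsHigh * RhsHigh;
    const std::uint64_t Z1 = (LhsHigh + LhsLow) * (RhsHigh + RhsLow) - Z2 - Z0;

    /* Recombine: Z2 shifted by 2*m, Z1 shifted by m */
    std::cout << std::hex << ((Z2 << (2 * m)) + (Z1 << m) + Z0) << '\n';
    /* Should print the same value as the full product */
    std::cout << std::hex << (static_cast<std::uint64_t>(Lhs) * Rhs) << '\n';
}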
I'm trying to avoid posting the entire big integer class, since it is massive. Any insight or suggestions would be helpful.
I had implemented several hash functions in C++ a while back, and I was going back through them to clean them up a little. In the example below (MD5), I converted the input bytes (plus appropriate padding) to an array of 16 elements to be processed by the actual hash algorithm. The FromLittleEndian function reads the number of bytes required to fill the first template type from the input (a pointer to an element in MessageBlock) and returns the result in the native endian format (in this case std::uint32_t).
I haven't tested the code below to see if the result is the same, but I was more curious whether the behavior is well defined by the standard. I know that the use of commas like this can produce unexpected results in other cases.
Also, which would you consider more readable (assuming the result is correct in the first place)? I'm conflicted, since I consider the original version more readable, but something about having the index numbers hard-coded bugs me. Or should I just replace this with a for loop (see the sketch below) and not try to initialize the array like this? I was trying to avoid the double initialization, since I believe that std::array will initialize all of the values to 0 (please correct me if I'm wrong here).
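For reference, the plain-loop version I'm weighing would look something like this (a sketch; it assumes my FromLittleEndian<std::uint32_t> takes a pointer to the bytes it should read, and MessageBlock is the padded input from my existing code):

#include <array>
#include <cstddef>
#include <cstdint>

/* As far as I can tell, a std::array of a trivial type declared without an
   initializer is NOT zeroed (that would need value-initialization with {}),
   so the loop below is the only initialization each element gets. */
std::array<std::uint32_t, 16> M;
for(std::size_t Index(0); Index < M.size(); ++Index)
{
    M[Index] = FromLittleEndian<std::uint32_t>(MessageBlock + (Index * sizeof(std::uint32_t)));
}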
I'm posting this in the General Programming Forums, but it may really belong in the Math Forum...
While working on a project involving reading GIS data from a file, I somehow got myself side-tracked into writing some functions that convert between native floating point values and their binary IEEE 754 equivalents (without taking the floating point variable's pointer and casting it to a matching-size integer pointer).
So I started researching the conversion process, and found this website with a procedure.
My code starts by separating the integral and fractional parts using the C/C++ modf function.
* FloatType is a template param = Input Type
* UnsignedIntegerType is a template param = Output Type
* MantissaBitCount is a template param stating the number of bits to use to store the mantissa in the output
* ExponentBitCount is a template param stating the number of bits to use to store the exponent in the output
* FindMostSignificantBit is a function that returns the position of the highest set bit in an integer.
* MantissaPivot will eventually be translated to the exponent
* Unlike the example, I am multiplying by 256 (instead of 2), allowing 8 bits to be extracted in each iteration.
const FloatType Zero(0);
const FloatType One(1);
const FloatType Factor(256);
const UnsignedIntegerType FactorSize(8);
/* Used to test for infinity */
const bool GreaterThanZero(Input > Zero);
/* Used to test for overflow */
const bool GreaterThanOne(Input > One);

/* State carried across iterations. In the real code these are declared
   earlier in the enclosing function (Input is the FloatType value being
   converted); they are repeated here, with assumed initial values, so the
   fragment is self-contained. */
UnsignedIntegerType Mantissa(0);
int MantissaSize(-1); /* -1 == first iteration */
int MantissaPivot(0);
FloatType Integral(0);

do
{
    Mantissa <<= FactorSize;
    const FloatType Fractional(std::modf(Input, &Integral));
    /* This fails with large Integrals */
    const UnsignedIntegerType ToMantissa(static_cast<UnsignedIntegerType>(Integral));

    if(MantissaSize == -1)
    {
        if((ToMantissa == 0) && (GreaterThanOne == true))
        {
            /* Number was too large to cast to an int */
        }
        /* Initial length */
        const int MostSignificantBit(FindMostSignificantBit(ToMantissa));
        MantissaSize = MostSignificantBit;
        /* Number of bits before the most significant bit */
        MantissaPivot = (MostSignificantBit > 0) ? (MostSignificantBit - 1) : 0;
    }
    /* I don't know what to call this other than non-initial */
    else if((Mantissa == 0) && (ToMantissa != 0))
    {
        const int MostSignificantBit(FindMostSignificantBit(ToMantissa));
        /* Number of bits before the most significant bit.
         * At this point we are dealing with the decimal portion and a
         * negative exponent. That means we start from the highest possible
         * bit and count the number of bits between it and the most
         * significant bit, including the most significant bit itself. */
        MantissaPivot -= (static_cast<int>(FactorSize) - MostSignificantBit + 1);
        /* Don't count the leading 0's, since no significant value has been
         * stored in the mantissa yet. */
        MantissaSize += MostSignificantBit;
    }
    else if(Mantissa == 0)
    {
        MantissaPivot -= FactorSize;
    }
    else
    {
        MantissaSize += FactorSize;
    }

    Mantissa |= ToMantissa;
    Input = Fractional * Factor;
}
while((MantissaSize < MantissaBitCount) && (Input > Zero));
// More code follows for constructing the binary after it has been parsed into the mantissa and exponent
As on the example website, I take the Integral part and convert it to a native integer. The problem I'm having is when the Integral part of the float is larger than what can be stored in the output integer type. Casting to a larger type is not really an option, since the code supports doubles and 64-bit integers. Any ideas on how to convert these large values into a usable mantissa and exponent without using any non-standard cheats?
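One direction I've been poking at, using only <cmath> (just a sketch, not in my code yet): std::frexp hands back the binary exponent directly, and std::ldexp can scale the fraction into a range that always fits a 64-bit integer, no matter how large the input is:

#include <cmath>
#include <cstdint>

/* Sketch: extract the exponent and a 53-bit integer mantissa from a
   positive, finite double without any pointer casting. */
std::uint64_t ExtractMantissa(const double Input, int& Exponent)
{
    /* Fraction is in [0.5, 1) and Input == Fraction * 2^Exponent */
    const double Fraction(std::frexp(Input, &Exponent));
    /* Scale up by 53 bits: the result lies in [2^52, 2^53), which fits in a
       64-bit unsigned integer regardless of Input's magnitude. */
    return static_cast<std::uint64_t>(std::ldexp(Fraction, 53));
}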
I've been building my own C++ Win32 wrapper. I wanted to start adding a Direct3D9 control to my library, so I started going through the D3D9 tutorials in the SDK. I did things slightly differently from the example, creating a parent form first, then creating a child window within it; the child window is what the IDirect3DDevice9 is associated with. I included the following code in the message loop after the windows had been created.
// Error checking code removed for brevity. All are successful.
Direct3DDevice9->Clear( 0, 0, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255,0,0), 1.0f, 0 );
Direct3DDevice9->Present(0, 0, 0, 0);
The desired effect was for the child window to appear red, but I wasn't seeing anything, even though everything had been initialized successfully. After some fiddling around, I discovered the culprit: my parent form uses the WS_EX_COMPOSITED extended window style. When I remove it, the red box shows up. Does anyone know if there is a way to get this to work while keeping WS_EX_COMPOSITED on the parent form?
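For reference, the window setup looks roughly like this (simplified; class registration, the message loop, and error checks are omitted, and the names are illustrative):

#include <windows.h>

/* Parent form with the extended style in question */
HWND Parent = CreateWindowEx(
    WS_EX_COMPOSITED, /* removing this flag makes the red clear show up */
    TEXT("ParentClass"), TEXT("Parent"),
    WS_OVERLAPPEDWINDOW | WS_VISIBLE,
    CW_USEDEFAULT, CW_USEDEFAULT, 800, 600,
    nullptr, nullptr, Instance, nullptr);

/* Child window that the IDirect3DDevice9 is created against */
HWND Child = CreateWindowEx(
    0, TEXT("ChildClass"), nullptr,
    WS_CHILD | WS_VISIBLE,
    0, 0, 640, 480,
    Parent, nullptr, Instance, nullptr);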