From what I read on MSDN, I didn't quite get what the advantages are, to be honest. Can you explain it in a simple fashion (with an example)?
In the case of using 0, you are assigning an integer to a pointer. In the case of nullptr, you are assigning a value of the dedicated null-pointer type (std::nullptr_t), which only converts to pointer types. It also aids readability, since it better documents that this is a pointer being assigned a null value.
It can also help catch typos, like accidentally dereferencing the pointer.
*Pointer = 0; // might be ok depending on Pointer's type and any implicit conversions that might exist
*Pointer = nullptr; // error (unless it is a pointer to a pointer).
If you are using a C++11 compiler, when initializing your pointers to nothing, you should use nullptr instead of 0.
:parent(nullptr), x(0), y(0)
Also, member functions that don't modify anything can be declared const. It doesn't offer any performance improvement, but it conveys that the function shouldn't change anything internally, and it can sometimes help the compiler catch bugs. One example...
Branch* findRightmost() const
The language shouldn't know what begin and end are.
The language doesn't, which is why std::begin and std::end must be overloaded for a particular type. So it is fairly safe to assume that whoever overloaded std::begin and std::end returns whatever they consider the beginning and end of the range.
The constexpr system just complicates what was simple and could already be done by leaning on the preprocessor.
I personally like constexpr. I mess around with a lot of template meta-programming, and it (especially the improved C++14 version) lets me do compile-time value generation much more easily and cleanly than using enums or constants in structs. Plus it adds type safety, which is pretty much non-existent if you use macros.
I don't have Express (using Pro), but when IntelliSense stops working, you can typically do Rescan Solution under the Project menu. This usually gets it going again. I haven't had to do the delete-the-database trick since 2010, IIRC.
Would it refer to anything? I mean, after the function returns, local variables with automatic storage duration are gone, so the reference returned by such a hypothetical function would refer to garbage.
Local variables, yes. But member variables of a class that hasn't gone out of scope would persist.
Look at std::vector: the const versions of operator[] and the at function return a const reference to the requested index.
Thanks. This has been one of those pet projects of mine that has been written and rewritten several times (bigint not Karatsuba). I've been playing with implementing different cryptographic functions and needed it to get RSA to work.
Based on Wikipedia, somewhere in the 300-600 bit range should be where it beats the long multiplication method. My default multiplication uses the shift-and-add method instead of long multiplication, which tends to be faster than long multiplication as well, so my base comparison is a little skewed to begin with.
I've been comparing runtimes between my Karatsuba and my shift-and-add based multiplication, and currently Karatsuba is way slower, using QueryPerformanceCounter as the metric. Doing a test of 50 multiplies for each algorithm, even at the 25000-bit range (I was really looking for the point where Karatsuba would overtake the other), it was still twice as slow. The shift-and-add version only needs two temporary vectors. My current Karatsuba implementation has to allocate 4 temporary vectors for each level of recursion, so I think the memory allocations are killing it. The two vectors in shift-and-add grow, so there are multiple allocations there too, but nowhere near as many; plus the needed space can be pre-allocated. I'm going to try allocating a large pool before starting the recursion and passing iterators through, to see if that will help speed it up.
There was a link at the bottom of the Wikipedia page to a C++ example that worked very similarly to mine. After rigging up some debugging code, I found that the issue involved the splitting and the shifting. I added 1 before dividing the size by 2 in order to make the lower half bigger. I assumed this would be the place to split it, and trailing zeros in the upper half would "balance" it out. Based on the other code, it looks like you want the high-order half to be the larger of the two if it doesn't split perfectly. This then caused me to use too large a shift for Z2, since it should have been shifting by twice the "half". I assumed that twice the half would always be the full length, which isn't true when the number of digits is odd.
Following up on what I mentioned earlier. This is from a draft of the C++11 standard, so I can't guarantee this is what the final version says, since I don't have a copy available.
2 A translation unit shall not #define or #undef names lexically identical to keywords, to the identifiers listed in Table 3, or to the attribute tokens described in 7.6.
Wasn't it pretty damn stupid to choose "delete" for deleting constructors when it is already a keyword for freeing memory?
I don't think it was really stupid. Relying on a macro override of a keyword isn't really safe to begin with; I believe the standard says you shouldn't do it. In this case, the keyword is now context sensitive, and the preprocessor is incapable of determining the context.
I had implemented several hash functions in C++ a while back, and I was going back through them to clean them up a little. In the example below (MD5), I converted the input bytes (plus appropriate padding) to an array of 16 elements to be processed by the actual hash algorithm. The FromLittleEndian function reads the number of bytes required to fill the first template value from the input (a pointer to an element in MessageBlock) and returns the result in the native endian format (in this case std::uint32_t).
I haven't tested the code below to see if the result is the same, but I was more curious if the behavior is well defined by the standard. I know that the use of commas like this can produce unexpected results in other cases.
Also, which would you consider more readable (assuming the result is correct in the first place)? I'm conflicted, since I consider the original version more readable, but something about having the index numbers hard-coded bugs me. Or should I just replace this with a for loop and not try to initialize the array like this? I was trying to avoid the double initialization, since I believe that std::array will initialize all of the values to 0 (please correct me if I'm wrong here).