

Rattrap

Member Since 03 Nov 2004

#5190926 What can I improve in this code

Posted by Rattrap on 03 November 2014 - 10:11 AM

Another example of why to use nullptr over 0 (found on Stack Overflow).

void f(char const *ptr);
void f(int v);
 
f(0);        // calls f(int): 0 is an int first, even if a null pointer was meant
f(nullptr);  // unambiguously calls f(char const *)



#5190924 What can I improve in this code

Posted by Rattrap on 03 November 2014 - 10:02 AM


From what I read on msdn, I didn't quite get what are the advantages to be honest, can you explain it in a simple fashion(with an example)?

 

In the case of 0, you are assigning an integer to a pointer.  In the case of nullptr, you are assigning a value of type std::nullptr_t, which only converts to pointer types.  It also aids readability, since it better documents that this is a pointer being assigned a null value.

 

[edit]

It can also aid in catching typos, like if you accidentally dereferenced the pointer.

 

*Pointer = 0;        // might compile, depending on Pointer's type and any implicit conversions
*Pointer = nullptr;  // error (unless Pointer is a pointer to a pointer)



#5190903 What can I improve in this code

Posted by Rattrap on 03 November 2014 - 08:03 AM

These are more nitpicks than optimizations.
 
If you are using a C++11 compiler, when initializing your pointers to nothing, you should use nullptr instead of 0.
 

Branch()
:parent(nullptr), x(0), y(0)
{
}

 
Also, member functions that don't modify anything can be declared const. Doesn't really offer any improvement performance-wise, but does convey that the function shouldn't be changing anything internally and can sometimes help the compiler catch bugs. One example...

const Branch* findRightmost() const
{
    if (children.empty())
        return this;
    return children.back()->findRightmost();
}



#5174787 So... C++14 is done :O

Posted by Rattrap on 19 August 2014 - 12:04 PM

The language shouldn't know what begin and end are.


The language doesn't, which is why std::begin and std::end must be overloaded for a particular type. So it is fairly safe to assume that whoever overloaded std::begin and std::end returns whatever they consider the beginning and end of the range.


#5174704 So... C++14 is done :O

Posted by Rattrap on 19 August 2014 - 06:17 AM

The constexpr system just complicates what was simple and could already be done by leaning on the preprocessor.


I personally like constexpr. I mess around with a lot of template metaprogramming, and it (especially the relaxed C++14 version) lets me do compile-time value generation much more easily and cleanly than using enums or constants in structs. Plus it adds type safety, which is pretty much non-existent if you use macros.


#5165574 Visual Studio 2013 Express Desktop Intellisense stopped working

Posted by Rattrap on 08 July 2014 - 10:57 AM

I don't have Express (I'm using Pro), but when IntelliSense stops working, you can typically do Rescan Solution under the Project menu. This usually gets it going again. I haven't had to do the delete-the-database trick since 2010, iirc.


#5156585 What are constant references used for?

Posted by Rattrap on 28 May 2014 - 05:40 PM

Would it refer to anything? I mean, after the execution of the function, local variables with automatic block duration would be wiped out of memory, hence the reference returned by such hypothetical function would refer to garbage.

Local variables, yes. But member variables of a class that hasn't gone out of scope would persist.

Look at std::vector: the const versions of operator[] and the at function return a const reference to the element at the requested index.


#5153755 Karatsuba algorithm

Posted by Rattrap on 15 May 2014 - 06:15 AM

Thanks.  This has been one of those pet projects of mine that has been written and rewritten several times (bigint not Karatsuba).  I've been playing with implementing different cryptographic functions and needed it to get RSA to work.

 

Based on Wikipedia, somewhere in the 300-600 bit range should be where it beats the long multiplication method.  My default multiplication uses the shift-and-add method instead of long multiplication, which tends to be faster than long multiplication as well, so my base comparison is a little skewed to begin with.

 

I've been comparing runtimes between my Karatsuba and my shift-and-add based multiplication, using QueryPerformanceCounter as the metric, and currently Karatsuba is way slower.  Doing a test of 50 multiplies for each algorithm, even at the 25000-bit range (I was really looking for the point where Karatsuba would overtake the other), it was still twice as slow.

The shift-and-add version needs only two temporary vectors.  My current Karatsuba implementation has to allocate 4 temporary vectors for each level of recursion, so I think the memory allocations are killing it.  The two vectors in shift-and-add grow, so there are multiple allocations there too, but nowhere near as bad; plus the needed space can be pre-allocated.  I'm going to try allocating a large pool before starting the recursion and passing iterators through to see if that will help speed it up.




#5153618 Karatsuba algorithm

Posted by Rattrap on 14 May 2014 - 11:56 AM

There was a link at the bottom of the Wikipedia page to a C++ example that worked very similarly to mine.  After rigging up some debugging code, I found that the issue involved the splitting and the shifting.  I had added 1 before dividing the size by 2 in order to make the lower half bigger, assuming this was the place to split it and that trailing zeros in the upper half would "balance" it out.  Based on the other code, it looks like you want the higher-order half to be the larger of the two if it doesn't split perfectly.  This then caused me to use too large a shift for Z2, since it should have been shifting by twice the "half".  I had assumed that twice the half would always be the full length, which isn't true when the number is odd.




#5148277 Tips on FileLoader code

Posted by Rattrap on 19 April 2014 - 09:33 PM

//FileLoader defs
#define FileLoaderUnknown 0
#define FileLoaderComplete 1
#define FileLoaderInTransfer 2
#define FileLoaderErrorInvalidHandle 10
#define FileLoaderErrorInvalidSize 11
#define FileLoaderErrorX 12
#define FileLoaderErrorReadSize 13
#define FileLoaderError4 14
#define FileLoaderError5 15


First suggestion would be to convert these to an enum instead of preprocessor macros.

// C++11 scoped enum, doesn't pollute the enclosing namespace
enum class FileLoaderCodes
{
	Unknown,
	Complete,
	InTransfer,
	ErrorInvalidHandle,
	ErrorInvalidSize,
	ErrorX,
	ErrorReadSize,
	Error4,
	Error5
};
// The new GetStatus
FileLoaderCodes GetStatus();
// The member variable
FileLoaderCodes Status;
// In code example
Status = FileLoaderCodes::Unknown;

This version is type safe, and you can add more codes without needing to mess with what the numeric value actually is. There is also no chance of accidentally matching values.




#5146786 Paul Nettle's memory tracker mmgr and C++11

Posted by Rattrap on 13 April 2014 - 08:01 PM

Following up on what I mentioned earlier. This is from a draft of the C++11 standard, so I can't guarantee it matches the final wording, since I don't have a copy available.

17.6.4.3.1 Macro names [macro.names]
2 A translation unit shall not #define or #undef names lexically identical to keywords, to the identifiers listed in Table 3, or to the attribute-tokens described in 7.6.




#5146707 Paul Nettle's memory tracker mmgr and C++11

Posted by Rattrap on 13 April 2014 - 09:11 AM

Wasn't it pretty damn stupid to choose "delete" for deleting constructors when it is already a keyword for freeing memory?


I don't think it was really stupid. Relying on a macro override of a keyword isn't really safe to begin with; I believe the standard says you shouldn't do it. In this case, the keyword is now context sensitive, and the preprocessor is incapable of determining the context.


#5146293 Inline for clone() function

Posted by Rattrap on 11 April 2014 - 09:23 AM

There is nothing inherently wrong with it, but returning a raw pointer can lead to memory leaks.  It would be safer to return a unique_ptr or shared_ptr (assuming C++).




#5145982 What does the standard say about this?

Posted by Rattrap on 10 April 2014 - 09:58 AM

I had implemented several hash functions in C++ a while back, and I was going back through them to clean them up a little.  In the example below (MD5), I converted the input bytes (plus appropriate padding) to an array of 16 elements to be processed by the actual hash algorithm.  The FromLittleEndian function reads the number of bytes required to fill the first template value from the input (a pointer to an element in MessageBlock) and returns the result in the native endian format (in this case std::uint32_t).

 

typedef std::array<std::uint32_t, 16> messagechunk_type; 
const messagechunk_type ProcessBlock =

{

    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[0]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[4]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[8]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[12]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[16]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[20]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[24]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[28]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[32]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[36]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[40]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[44]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[48]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[52]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[56]),
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[60])

};

 

I haven't tested the code below to see if the result is the same, but I was more curious if the behavior is well defined by the standard.  I know that the use of commas like this can produce unexpected results in other cases.

 

typedef std::array<std::uint32_t, 16> messagechunk_type;
int Index = 0;
const messagechunk_type ProcessBlock =

{

    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index]), //0 - Thanks Alpha_ProgDes
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //4
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //8
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //12
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //16
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //20
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //24
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //28
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //32
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //36
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //40
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //44
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //48
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //52
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index += sizeof(std::uint32_t)]), //56
    FromLittleEndian<std::uint32_t, CharArray::value_type>(&MessageBlock[Index]) //60

};

 

Also, which would you consider more readable (assuming the result is correct in the first place)?  I'm conflicted, since I consider the original version more readable, but something about having the index numbers hard-coded bugs me.  Or should I just replace this with a for loop and not try to initialize the array like this?  I was trying to avoid the double initialization, since I believe std::array will initialize all of the values to 0 (please correct me if I'm wrong here).




#5138490 Industrial Strength Hash Table

Posted by Rattrap on 12 March 2014 - 01:19 PM

#include "Includes.h"

 

Got to love the all-encompassing include.


Method names begin with a capital letter

This one is excusable, since that is just personal preference.  I personally don't like camelCase.





