Just glanced over it:
0) Code C++, not C/C++
You are using #define macros, which should be avoided in C++ except for very rare cases. Most of what those macros were used for in C can now be done better with inheritance and templates.
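For illustration, here is a minimal sketch of how a CRTP base template could replace such a macro; the names (RefCounted, addRef, release) are made up here, not taken from your code:

    template <typename Derived>
    class RefCounted {
    public:
        void addRef() { ++refs_; }
        void release() {
            if (--refs_ == 0)
                delete static_cast<Derived *>(this);
        }
    protected:
        RefCounted() : refs_(0) {}
        ~RefCounted() {}
    private:
        unsigned refs_;
    };

    class B : public RefCounted<B> {
        // ordinary class body, nothing pasted in by the preprocessor
    };

This gives you the same intrusive counter the macro would paste in, but with real scoping and something a debugger can actually step through.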
1) Security
You are assuming that an implementing class places your macro in a public section. Depending on where it actually ends up, the macro may expose members that were meant to be protected or private, and users of IntelliSense (or any comparable tool, this is not Microsoft-specific) will happily use whatever shows up in the completion popup.
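To make that concrete, here is a sketch of the visibility problem; since I do not know your exact macro, its contents are guessed here:

    #define IMPLEMENT_REFCOUNT \
        void addRef();         \
        void release();        \
        unsigned refCount_;

    class C {
    public:
        IMPLEMENT_REFCOUNT  // refCount_ lands in the public section and
                            // now shows up as part of C's interface
    };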
2) Fight the optimiser
Your benchmark is bogus:
    // my method
    time = GetSystemTime();
    for (unsigned int i = 0; i < 10000; i++) {
        B *a = new B;  // << alloc
        delete a;      // << dealloc
    }
    std::cout << (GetSystemTime() - time) << '\n';
Compilers are perfectly allowed to remove the whole loop, as it has no side effects, unless the constructor or destructor does something whose effects reach beyond the loop. It is generally quite hard to write benchmarks. A good start is to make the constructor somehow dependent on unpredictable data (for example user input), and to ensure the output depends on the processed input (enforce that the measured data participates in all stages of the IPO model, i.e. input, process, output).
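A rough sketch of a loop the optimiser cannot simply drop could look like the following; GetSystemTime() is replaced by std::clock() here, and B is only a trivial stand-in for whatever class you are actually measuring:

    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    struct B {                        // stand-in for the class under test
        unsigned value;
        B() : value(1) {}
    };

    int main(int argc, char **argv)
    {
        // iteration count is unknown at compile time
        const unsigned count = (argc > 1) ? std::atoi(argv[1]) : 10000;

        unsigned checksum = 0;
        const std::clock_t start = std::clock();
        for (unsigned i = 0; i < count; ++i) {
            B *a = new B;             // << alloc
            checksum += a->value;     // measured work feeds the output below
            delete a;                 // << dealloc
        }
        const std::clock_t stop = std::clock();

        std::cout << "checksum: " << checksum << '\n'
                  << "seconds:  " << double(stop - start) / CLOCKS_PER_SEC << '\n';
    }

Because checksum is printed after the loop, the work cannot easily be proven dead; with a class as trivial as this stand-in a clever compiler might still fold it, so substitute your real class.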
3) Unreliable results
Point 3) is really an addendum to point 2). I see you have calls to std::cout << in B::B() and B::~B(). While that more or less guarantees the loop will not be optimised away, reading from or writing to standard I/O inside the measured code is not a good idea (a small sketch of an alternative follows below the list):
- output buffering occurs, and every now and then (e.g. every 8192 chars) the buffer is flushed, which costs processing time and makes the measurement more or less unreliable
- you interface with the operating system, which is always an unpredictable process with respect to time
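A minimal sketch of such an alternative, keeping the side effect but moving the I/O out of the measured path (the counter names are invented):

    struct B {
        static unsigned constructed;
        static unsigned destructed;
        B()  { ++constructed; }   // instead of std::cout << "B()\n";
        ~B() { ++destructed; }    // instead of std::cout << "~B()\n";
    };
    unsigned B::constructed = 0;
    unsigned B::destructed  = 0;

    // after the timed loop, print the counters once:
    //     std::cout << B::constructed << " / " << B::destructed << '\n';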
4) Compare to boost::intrusive_ptr
You are comparing against boost::shared_ptr; what you really should compare against is boost::intrusive_ptr.
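For reference, a minimal sketch of the intrusive_ptr side of such a comparison; boost::intrusive_ptr only requires the two free functions shown below:

    #include <boost/intrusive_ptr.hpp>

    class B {
    public:
        B() : refs_(0) {}
    private:
        unsigned refs_;
        friend void intrusive_ptr_add_ref(B *p) { ++p->refs_; }
        friend void intrusive_ptr_release(B *p) { if (--p->refs_ == 0) delete p; }
    };

    // in the benchmark loop:
    //     boost::intrusive_ptr<B> a(new B);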
Appendix A)
Hint: If you are on Linux, you can get more reliable benchmark results by running your benchmark as the only user process, by passing init=<path-to-your-benchmark> as an argument to the kernel. (Actually, if your benchmark is stable, i.e. without std::cout or similar in the measured loop, you can get results that vary only by fractions of a percent from run to run.)
Appendix B)
For serious measurements, always check what your compiler spits out, i.e. have a look at the assembly, e.g. to make sure certain optimisations do not influence the results.
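As a small experiment, compile something like the following with optimisations and inspect the generated assembly (e.g. g++ -O2 -S deadloop.cpp, then read deadloop.s); the loop has no observable effect, so it may vanish entirely, which is exactly the kind of thing such a check reveals:

    int main()
    {
        int x = 0;
        for (int i = 0; i < 10000; ++i)
            x += i;   // result never used, so the whole loop may be removed
        return 0;
    }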
Further Reading on Optimisation
No offense, but you asked for it ;)
edit: Appendix B) added
edit2: Added optimisation links