
#1 samoth

Posted 28 December 2012 - 01:01 PM

What changed is that C compilers got smarter and smarter, to the point where they can now write assembly as well as experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they matched C compilers.

I'd phrase it even more extremely:
What changed is that C compilers got smarter and smarter, to the point where they can now write assembly <strike>as well as</strike> better than experts can (when given sensible inputs). Likewise, C++ compilers got smarter and smarter, to the point where they <strike>matched</strike> outperform C compilers.

Neither assembler programmers nor C programmers will like this, I'm sure. But the stunning truth is that this is just what has observably happened. Take for example this statement from the Nedtrie home page:

... there are competing implementations of bitwise tries. TommyDS contains one, also an in-place implementation, which appears to perform well. Note that the benchmarks he has there compare the C version of nedtries which is about 5-15% slower than the C++-through-the-C-macros version

This sounds like total bollocks, but it is what it is; go ahead and try it for yourself.

For some obscure reason (stricter aliasing rules? RVO? move semantics? ...whatever?) a C++ wrapper around unmodified C code is sometimes not "just as fast" but faster than the original C code.
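A familiar analogue of the same effect (this is my own illustration, not the nedtries code): sorting through the C-style interface hides the comparator behind a function pointer, while the C++ wrapper makes it a compile-time type the optimizer can inline. Nothing about the comparison logic changes, yet the C++ path is routinely faster.

// Illustrative sketch only: the same comparison, once through a C-style
// interface and once through a C++ template wrapper.
#include <algorithm>
#include <cstdlib>
#include <vector>

// C-style: qsort() sees the comparator only as an opaque function pointer,
// so every comparison is an indirect call it generally cannot inline.
static int cmp_int(const void* a, const void* b)
{
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_c(std::vector<int>& v)
{
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);
}

// C++ wrapper: the comparator's type is known at compile time, so the
// compiler can inline it and optimize across the call boundary.
void sort_cpp(std::vector<int>& v)
{
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
}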

The same is true for C/C++ versus hand-written assembler code. I've been writing assembler since 1983, but I regularly find it hard to match (match, not outperform!) optimized C++ code nowadays. If anything, I use intrinsic functions now; writing "real" assembler code is a total waste of time and an anti-optimization as far as I can tell. The compiler does at least as well, and usually better. You may be able to work out an example where you gain a few cycles over the compiler, but on average you'll be totally annihilated.
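To make "intrinsics instead of real assembler" concrete, here is a minimal sketch of my own (the function name and the assumption that the element count is a multiple of 4 are mine, and it assumes SSE3). The intrinsics pin down the instructions, but register allocation, scheduling and inlining stay with the compiler, which is exactly where it tends to beat hand-written assembler.

// Hypothetical example: summing floats with SSE intrinsics rather than inline asm.
#include <immintrin.h>
#include <cstddef>

float sum_sse(const float* data, std::size_t n)   // n assumed to be a multiple of 4
{
    __m128 acc = _mm_setzero_ps();
    for (std::size_t i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));   // accumulate 4 lanes at a time

    // Horizontal add of the four lanes down to one scalar result.
    __m128 shuf = _mm_movehdup_ps(acc);        // duplicate odd lanes (SSE3)
    __m128 sums = _mm_add_ps(acc, shuf);       // pairwise sums
    shuf        = _mm_movehl_ps(shuf, sums);   // bring upper pair down
    sums        = _mm_add_ss(sums, shuf);      // final scalar sum in lane 0
    return _mm_cvtss_f32(sums);
}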

