GCC v VC++ and C++ v inline assembly.

Started by
22 comments, last by Hodgman 15 years, 5 months ago
Quote:Original post by Promit
(Just to clarify, it's awfully tempting to delete any further posts that do that.)
While you're at it, can we delete all of those annoying threads where people pronounce it Lin-ux instead of Li-nux?
Yeah, gotta strangle those Line-icks people!
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
Quote:Original post by Splinter of Chaos
I generally work with all optimizations on.

Hmm, apparently not. In my test above, I compiled a standard release build under Visual Studio, and the GCC version was compiled with -O3 and nothing else. In both cases, it gave me 0 seconds. If you didn't get that value, it would seem like you compiled without optimizations. (Or you used an ancient version of both compilers)

Quote:Original post by Hodgman
Quote:Original post by Promit
(Just to clarify, it's awfully tempting to delete any further posts that do that.)
While you're at it, can we delete all of those annoying threads where people pronounce it Lin-ux instead of Li-nux?


I pronounce those two exactly the same. What's the difference?
Oberon_Command, I believe he was pointing out how silly the issue is: spelling something differently is hardly grounds for deleting a post.

Spoonbender, as we're learning today, if you get zero seconds, it's because your compiler skipped it altogether. Try making the function add to a running sum the whole way through. I didn't find printing the sum made a hell of a difference, but I do it anyway.

I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)
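The "add to a sum" trick mentioned above can be sketched like this (the function name and the arithmetic are made up for illustration):

```cpp
#include <cstdint>

// Hypothetical benchmark kernel: the loop feeds a running sum that the
// caller is expected to actually use (e.g. print it), so the optimizer
// cannot prove the work is dead and delete the whole loop.
std::uint64_t accumulate_work(std::uint64_t iterations) {
    std::uint64_t sum = 0;
    for (std::uint64_t i = 0; i < iterations; ++i) {
        sum += i * 3 + 1; // arbitrary arithmetic standing in for the real work
    }
    return sum; // if this value is never used, the loop is fair game again
}
```

The key is that the result must escape the benchmark; returning it and then printing it in main is usually enough.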
Quote:Original post by Splinter of Chaos
Spoonbender, as we're learning today, if you get zero seconds, it's because your compiler skipped it altogether. Try making the function add to a running sum the whole way through. I didn't find printing the sum made a hell of a difference, but I do it anyway.
I believe he was pointing out that your code can't possibly have been run with optimisations on.

Quote:Original post by Splinter of Chaos
I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)
The compiler's job is to catch things like that without you having to tell the compiler it's OK. When you turn optimisations on, you're saying to the compiler "I want this code to run as fast as possible". The fastest code is the code that never executes, which is why the compiler will obliterate code that doesn't do anything. It's not disrespecting you any more than re-ordering lines of code to get better register and cache usage (which most compilers will also do).
Throwing up a warning would be one option I suppose, but the compiler is just doing its job; telling you about it would quickly become annoying.
Quote:
I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)


What is so obviously intentional about it?
If you want to delay the program, you call Sleep() or similar. Performing millions of arithmetic operations and then discarding the result does not signal an intent to delay the program.
It might just as well indicate 1) that the programmer left in some extra code to aid readability, 2) that the programmer just copy/pasted the code from a larger sample which *does* use the result, 3) that the programmer hadn't thought the code through properly, or 4) that it's the output of a code generator which simply glued together different snippets and relied on the compiler to make them fast. In all these cases, the correct course of action for the compiler is to eliminate the code. We don't need it.

All the compiler sees here is that you perform a lot of work which isn't strictly necessary.

Anyway, it's explicitly allowed in the language standard.
It specifically mentions the "as if" rule. The compiler just has to generate something that behaves *as if* the language spec had been followed. Performing a bunch of computations that you never use yields exactly the same results as if you'd never performed these computations at all, so it's legal to remove them.
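A small illustration of the rule (both functions are hypothetical): an optimizer is free to compile the first function down to the second, because no observable behaviour distinguishes them.

```cpp
// The loop's result never escapes the function and has no side effects,
// so under the "as if" rule a compiler may emit identical code for both.
int with_dead_work(int x) {
    int wasted = 0;
    for (int i = 0; i < 1000000; ++i) {
        wasted += i; // computed, then discarded
    }
    return x + 1;    // the only observable result
}

int without_dead_work(int x) {
    return x + 1;
}
```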

Every optimization the compiler can do involves changing the program somehow. Should it warn about everything? Reordering computations, inlining functions, constant propagation? A modern C++ compiler performs hundreds of program transformations to optimize the program. Every single one of them will speed the program up by deviating from the exact code you wrote. Every single one of them will skew the results of your clock() calls, because less code is executed between them. After all, that's the entire point of an optimization: it makes things go faster than they would otherwise. By enabling optimizations, you say "I know this may speed up my program, and I'm OK with that".
And that is precisely what the compiler did. Why should the compiler warn you that it does exactly what you asked it to do?
Quote:Original post by Splinter of Chaos
I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)
If you put them in your code on purpose, then drop in flags to tell the compiler not to optimize. In the general case, though, there is no such thing as a compiler optimization pass that does not change something. Also, GCC has -O0 for people who want what you [for some reason] want. So perhaps there is a bigger question here: why do you want it?

Also, define 'obviously intentional' in an absolute way that we compiler writers can use in our work. Compilers cannot read minds... And if you want a warning every time a compiler is going to change something, even primitive compilers would bury you in warnings on all but the simplest programs.

Also, compilers cannot disrespect you either...
Quote:Original post by Splinter of Chaos
I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)


It's not even remotely that easy.

The source code as you see it gets transformed into either straight assembly or some intermediate form (be it an AST, bytecode, native assembly, or something else). That form loses all the syntactic sugar and is just a matter of performing trivial operations on a set of registers and memory locations.

An optimizing compiler then looks at that and searches for patterns. One of them is idempotence.

The unique thing about a C++ compiler is the degree to which it may perform such optimizations. The easiest way to see this is to examine Boost. Between all the templates and specializations, any trivial use of Boost classes results in potentially millions of conceptual classes and functions. Yet, after a basic optimization pass, the compiler is capable of trivially collapsing all that into a few constant expressions.

And this type of transformation doesn't even include operation re-ordering, compile-time evaluation, (N)RVO and similar.
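The template-collapsing point can be seen in miniature without Boost (this toy example is mine, not from the thread): the compiler instantiates a chain of distinct classes, then folds the whole thing down to a single constant.

```cpp
// A recursive template: conceptually many classes (Factorial<5> through
// Factorial<0>), but after constant folding Factorial<5>::value is just
// the literal 120 in the generated code.
template <unsigned N>
struct Factorial {
    enum { value = N * Factorial<N - 1>::value };
};

template <>
struct Factorial<0> {
    enum { value = 1 }; // base case stops the recursive instantiation
};
```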


But pray tell: why benchmark the performance of assembly vs. a compiler, then complain that the optimizing compiler is capable of optimization?

This is why the claim of compilers getting better than humans has taken hold. Too many people look at code line by line and optimize it, when they should be looking at the big picture.

Ironically, the task supposedly ill-suited to machines often seems to cause more problems for humans than for a machine.

Side note: the 'M$' in this whole context is incredibly ironic.
Quote:Original post by Splinter of Chaos
I wonder why the compilers say "Hey, this line does nothing! I should disrespect the programmer by skipping it." I'd rather have a warning. I put inefficiencies into code for a reason, and I expect optimizing compilers to optimize what is obviously not intentional. (And this is so obviously intentional.)


Compilers have no concept of "obvious". They don't have intuition. The closest they can get to intuition is this fact: in normal programming, you never intentionally add inefficiencies (except as a necessary cost of other benefits, and even then, it's viewed as just that -- a cost, not a boon). In other words, your compiler thinks it's obviously unintentional -- and with good reason (especially given that you can turn the optimizer off).

Even when profiling in normal programming, you're largely interested in where you can help optimize that the compiler is unable to -- so you want the compiler to still be optimizing as much as it can.

Besides, benchmarks are largely useless. Given how wildly the performance of code can vary, depending on the context in which it's run, the only way to get a real, accurate representation of the performance of some code is to profile it in the context of a program using it.

If you're going to do benchmarks regardless, well, you're learning one of those things you just have to keep in mind: Without an optimizer your results are worthless (unrealistically slow), and with an optimizer you must take special care to, again, avoid your results being worthless (unrealistically fast).
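One common way to thread that needle (a sketch; the volatile sink is my addition, not something from this thread): keep the optimizer on, but route the result somewhere it is forced to survive.

```cpp
#include <cstdint>

// Writes to a volatile object count as observable behaviour, so the
// optimizer must actually perform the computation that feeds this sink.
volatile std::uint64_t g_sink = 0;

std::uint64_t timed_kernel(std::uint64_t n) {
    std::uint64_t sum = 0;
    for (std::uint64_t i = 0; i < n; ++i) {
        sum += i;     // the work being benchmarked
    }
    g_sink = sum;     // prevents dead-code elimination of the loop
    return sum;
}
```

Wrap the call in your clock() measurements and the loop stays measurable at -O3, while everything else still optimizes normally.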

Profile instead ;-)

This topic is closed to new replies.
