Code Optimization: Why?

Started by Lubb
6 comments, last by Lubb 23 years, 10 months ago
I'm not asking why you'd optimize code for space or speed requirements, and I'm not questioning that it really does what it says (as far as VC++ goes, anyway). I also understand that the debug version scatters features through the EXE to help track errors, so I can see why that would take some extra space even though I don't know how it's done.

My question is: why can you optimize it in different ways? If I have a simple DOS console program that prints the text "Whatever" to the screen, and I compile it normally and get a 220 KB EXE, compile it for speed and get a 240 KB EXE, and compile it for space and get a 180 KB EXE, what is the smaller file missing? What extra does the larger file have? It would seem to me that you'd (we'd all!) be better off if there were only one correct way to compile any source file: one that includes all the information given in the source file and nothing extra. That would seem to be both the fastest and the most compact code. How can the compiler derive different EXE sizes from the same source file?
RPD=Role-Playing-Dialogue. It's not a game, it never was. Deal with it.
...
Because.

Know what a function call is? It is possible to remove some function calls by inlining the code, which makes the final EXE larger because there are actually more instructions stored in it. The smaller one stores a single copy of the repeated code and hence makes more function calls. In this case, bigger ≈ faster and smaller ≈ slower.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
Each C/C++ (or any high-level language) command, statement, or expression generates multiple assembly language instructions (except maybe for simple mathematical operations like increments). There are also function calls, inline functions, and a bunch of different shortcuts that can be taken depending on how your code is organized.

If the code is optimized for speed, the compiler will inline more calls and produce larger code so that fewer function calls happen at run time. If it's compiled for size, it will make more function calls (hence it will run slower), but there will be more code reuse.

However you compile it, all of your code (that is actually called) will get put into the exe file.
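A rough sketch of the kind of code where this shows up (the helper function and the comments about what each build might do are illustrative assumptions, not guaranteed compiler output):

#include <cstdio>

// A tiny helper the optimizer can either inline or keep as a shared routine.
static int scale(int x) { return x * 3 + 1; }

int main()
{
    int total = 0;
    for (int i = 0; i < 100; ++i)
    {
        // Speed build (e.g. /O2 in VC++): the body of scale() is likely copied
        // in here and the loop may be unrolled (more bytes, fewer jumps).
        // Size build (e.g. /O1): scale() likely stays one shared routine that
        // gets called each iteration (fewer bytes, more jumps).
        total += scale(i);
    }
    std::printf("%d\n", total);
    return 0;
}

Either way it's exactly the same source; the project's optimization setting just tells the compiler which shape of EXE to generate.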
Lubb: I think the basic idea behind having all of these separate compiler optimization types is to eliminate some of the tedious hand optimization that many programmers would otherwise need to do. All of the simple stuff (inline functions, struct packing, etc.) comes down to a few calculations weighing memory usage against cycle usage, so it is a perfect thing for the computer to do for you. You just select an option in the project settings rather than going through and changing function definitions, #pragma(s) and what-have-you.

In other words, you create one set of source files and give the compiler hints (like the inline keyword, the throw() exception specification, etc.) about how the code can be optimized. From that single set of source files, the compiler can then generate the EXE in many different ways, just by choosing a different option in the project settings.
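For example, a minimal sketch of the kinds of hints mentioned above (the struct and function here are made up for illustration):

#include <cstdio>

// Hint: suggest that calls to this be expanded in place.
inline int lengthSquared(int x, int y) { return x * x + y * y; }

// Hint: pack this struct tightly. It saves memory per instance, but unaligned
// access can be slower on some CPUs, so this is a size/speed trade-off too.
#pragma pack(push, 1)
struct PackedVertex
{
    float x, y, z;        // 12 bytes
    unsigned char color;  // 1 byte, and no trailing padding is added
};
#pragma pack(pop)

int main()
{
    // Typically 13 bytes packed; usually 16 without the pragma.
    std::printf("sizeof(PackedVertex) = %u\n", (unsigned)sizeof(PackedVertex));
    std::printf("lengthSquared(3, 4) = %d\n", lengthSquared(3, 4));
    return 0;
}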




- null_pointer
Sabre Multimedia
People, you forgot to explain to Lubb what inlining actually is, or at least not in enough detail.

OK, what inlining basically does is act like a macro. Let's say you have a function called X(); each time you call that function, the call itself takes CPU time. What inlining does is take the contents of X() and shove it right where you're calling it, and it does this every time you call the inlined function. It saves time by avoiding a jump to your function, thus speeding up your code at the price of a bigger file.
Just checking over my previous post: that wasn't very clear either. It's probably better with an example, so if you want one, Lubb, just say so.
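For what it's worth, here is a minimal sketch of the idea (the function X() is hypothetical, and the "expanded" lines just show what the compiler conceptually does when it inlines):

#include <cstdio>

// The compiler may inline this: copy its body to each call site.
inline int X(int n) { return n + 5; }

int main()
{
    // What you write:
    int a = X(10);
    int b = X(20);

    // Roughly what the compiler generates once X() is inlined:
    int c = 10 + 5;   // no call, no jump; the body is pasted in place
    int d = 20 + 5;

    std::printf("%d %d %d %d\n", a, b, c, d);
    return 0;
}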
It isn't so much that 'normal' compilation leaves a lot of useless stuff in there (although there is some redundancy); it's just that optimization makes some decisions about how best to cut corners.

As a contrived example, on many machines this code:

i=-1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
array[++i] = 1;
... repeat another 90 times or so ...

will be faster than:

for (int i=0; i < 100; ++i) array[i] = 1;

The first example would give you a larger yet faster executable; the second would give you a slower yet smaller executable. It's a tradeoff.

Of course, it's not always that simple, as sometimes larger code will cause cache flushing or whatever, so always profile before you optimize.

Edited by - Kylotan on June 26, 2000 11:18:54 AM
Many libraries do things like this:

//...
#ifdef _DEBUG    // Defined if compiling in debug mode
    DoDebugStuff();
#endif
//...


So if you're not compiling in debug mode some stuff just gets totally ignored and doesn't even get into the program. That saves a lot of space, I would imagine.

lntakitopi@aol.com / http://geocities.com/guanajam/

Edited by - SHilbert on June 26, 2000 9:16:22 PM
