Archived

This topic is now archived and is closed to further replies.

Galileo430

C/C++ Optimization

Recommended Posts

Well, if you go to:
http://www.ddj.com/articles/2001/0165/0165f/0165f.htm
you can find the complete text of Michael Abrash's Graphics Programming Black Book, which is a good general reference on optimization techniques. There's some material on C/C++ as well.

Good luck!

Adiss
a.k.a. Magic Card

This is the wrong forum for this, but it's relevant to the thread. I wrote myself a little lesson to help in studying for one of my exams on algorithm analysis. I think there's a tutorial in there struggling to come out, but I'm not sure whether anyone would find it useful. It's not much in the way of nuts-and-bolts optimization (unroll a loop here, inline there, etc.) but more about deciding:

1) Does the algorithm I'm using suck?
2) Is there something better?
3) Will this algorithm perform the way I need it to?

Things like that. It's pretty theory-heavy, but that goes with the territory. It *is* relevant to actual programming practice, however.

Hi,

it is hard to give general tips for that, since some behaviour depends heavily on your compiler implementation. If you run tests on a Windows box using MSVC, for example, your algorithm may be fine, but on another architecture like IBM AIX it may perform badly because that compiler works differently. There are some major rules, like don't use too-deep inheritance trees and avoid creating too many temporary objects. But there are also some good books out there that give you a general idea of what to do and what to avoid. I think there is a book by a guy called Scott Meyers which is a really good one, but I don't remember the correct spelling of his last name or the title right now.


cu

Peter

Guest Anonymous Poster
That's Scott Meyers: "Effective C++" (1st & 2nd editions) and "More Effective C++".
Very good books. Simple. Efficient.

quote:
Original post by phemmer

Hi,

it is hard to give general tips for that, since some behaviour depends heavily on your compiler implementation. If you run tests on a Windows box using MSVC, for example, your algorithm may be fine, but on another architecture like IBM AIX it may perform badly because that compiler works differently. There are some major rules, like don't use too-deep inheritance trees and avoid creating too many temporary objects. But there are also some good books out there that give you a general idea of what to do and what to avoid. I think there is a book by a guy called Scott Meyers which is a really good one, but I don't remember the correct spelling of his last name or the title right now.

cu

Peter

Those things matter, but not in choosing an algorithm. The whole point of algorithm analysis is to describe performance in a manner that is independent of those details. Don't believe me? We'll each sort a million numbers. You use insertion sort; you can write it in assembly and run it on the fastest PC around. I'll use mergesort and run it in BASIC on a 386. I'll beat you every time. In fact, I'll run circles around you. I'll let you run it on a powerful mainframe and I'll still win. Knowing your compiler and avoiding temporaries and things like that is important, don't get me wrong, but that stuff is amplified by the algorithms you use. Someone above posted a link to Michael Abrash's book, and to paraphrase him: if your algorithm is inherently flawed, you can optimize it all you want and you'll still end up with "really fast slow code".


Effective C++ and More Effective C++ are two great books, and this site is great:

http://www.gotw.ca/gotw/index.htm

Don't forget to read the conversations.

Blit,

I would have to argue that algorithm analysis is not independent of "these" things. You are not trying to minimize the number of operations performed, but rather to minimize the consumption of some resource. For example, floating-point operations cost more than integer operations, and multiplication/division costs more than addition/subtraction. An algorithm like Bresenham's line-drawing algorithm is based entirely on that concept. The relative costs of those operations depend entirely upon "these" things. If you ignore "these" things, then two algorithms can seem equal when they are not, or one can seem better when it is actually worse.

LilBudWizer,

But you *are* trying to minimize the operations performed, as well as minimize the cost of each operation. Why use a routine that checks each element in a list until I find the one I want, when I can use one that lets me throw away half the list as irrelevant each time I do a test? You can optimize that linear search until you're blue in the face and you're still wasting much, much more time than you should be.

I said that those optimizations are, in fact, important, but in many cases they're not as important as using the correct algorithm in the first place. In the case of Bresenham's, since I find it doubtful that I can plot a line of 300 pixels without drawing 300 pixels, I think it's pretty much as close to optimal as you can get, so the conversion from floating-point to integer operations is a big deal.

With a standard line-plotting routine, you're doing several floating-point operations per pixel. You have only two options to make it faster: do less work on each pixel (i.e. move to integer math) or plot fewer pixels, which isn't really an option in this particular case. It *is* an option in many other cases. So, for line plotting, you are correct: you need to speed up the work you're doing *per operation*. Why do so many computer games use the A* path-searching algorithm? Because it lets the computer predict which paths are likely to be longer than the alternatives and throw them away without even computing them. For applications like that, it's absolutely crucial that you minimize the number of operations you're performing; simply making each one faster just won't cut the mustard.

Bresenham's algorithm is a linear-complexity algorithm, and my point is that if I could discover a logarithmic-complexity algorithm for drawing lines, I could use floating-point operations and still draw lines faster than Bresenham's integer routine. (And of course, moving *that* algorithm over to integer operations would speed it up even more.)

Blit,

My point exactly. It wasn't that I didn't think you understood it, but rather that I thought you glossed over it. It is one of those things people tend to take for granted once they have been doing it for a while, but it's a common mistake when starting out. I have worked in performance tuning professionally for a long time. Many people understand the general idea of algorithmic analysis, but they miss one little detail: all operations are not equal. With the CPU it isn't nearly as dramatic as it is with I/O. Executing a binary search against a huge file can be murder as you thrash the heads all over the disk drive.

Agreed, but thrashing the drive is going to be even worse with a linear search on large data sets. DB records are often stored in B+ trees for a reason...

All I'm saying is that people (and especially new programmers) need to pay attention to asymptotic running time as well, instead of just focusing on other optimization techniques. Plenty of people get frustrated when code they've optimized heavily still performs badly, because they're doing something that's inherently slow algorithm-wise. Not to mention that code that's been hacked to take all sorts of shortcuts that aren't entirely obvious gets excessively cryptic, making it impossible to maintain.
