Suggestions for optimization level settings in library makefiles

I've been considering my options for C++ encryption libraries lately. One that I've looked at uses -O3 and another -O2, both on Linux, and the bzip2 compression library on Linux uses -O2. I use -O2 on both Linux and Windows in my library. I think -O2 on Linux (gcc) is not full-tilt optimization, but on Windows (MSVC's /O2) it is the highest standard level. What suggestions do you have regarding optimization settings for library code? Should I turn down the optimization on the Windows side in my lib? Thanks in advance.
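For what it's worth, one way to sidestep the "which level is right" question is to put the level in a single makefile variable instead of hard-coding it into each rule, so anyone building the library can override it. A minimal GNU make sketch (the variable and file names here are my own invention, not from any of the libraries mentioned):

```make
# Default optimization level; override on the command line, e.g.
#   make OPT=-O3
OPT ?= -O2
CXXFLAGS = -Wall $(OPT)

libcrypt.a: crypt.o keys.o
	$(AR) rcs $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
```

A user who wants a more (or less) conservative build can then change the level without editing the makefile itself.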


Brian Wood
Ebenezer Enterprises
http://webEbenezer.net

I'd suggest optimizing for size by default. Any performance-critical code (which is probably only a handful of files) should use the best settings determined by profiling with various options.

Outside of tight loops, small code usually performs well because it tends to reduce cache misses. You may not notice this benefit with a small test program, though.
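That "size by default, speed where profiling justifies it" policy maps directly onto GNU make's target-specific variables. A sketch assuming gcc; the object file names are invented stand-ins for whatever profiling identifies as hot:

```make
# Optimize for size by default.
CXXFLAGS = -Os

# Profiling showed these two files dominate runtime, so build
# just them at a higher level (GNU make target-specific variable).
hot_loop.o inner_math.o: CXXFLAGS = -O3

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
```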

Quote:
Original post by ApochPiQ
Why would you change your optimizations on Windows based on a lack of convention for optimization levels on Linux?

I don't follow your reasoning...


I'm thinking that using -O3 on Linux is overly aggressive (especially in an encryption library with its heavy math component) and that, to be more consistent, I should maybe use -O1 on Windows. Bugs caused by compiler optimizations are often difficult to trace, so I may do myself and my users a favor by being conservative with the optimization. I suspect some would even suggest going below -O2 on Linux, although I haven't found any examples of that yet.

Quote:
Original post by Steve132
Optimization levels do not have to be homogeneous across the binary. You can compile different parts of your project with different optimization levels just fine.


Right, but in another forum I saw a suggestion to turn optimization off by default and enable it only where needed, the idea being that this saves you some grief. After reading that I only partly took it to heart and turned my optimization down from -O3 to -O2 on Linux. I didn't change anything on the Windows side, so for consistency it now seems I should turn the Windows optimization down to -O1.

The encryption library with the -O3 setting is undergoing frequent changes, and it seems to me the author would do well to turn it down to at least -O2. I'm still considering using that library, but I've asked the author if he would be willing to lower the setting. If he doesn't, I worry users may run into problems with my software, and I could spend long weeks tracking a failure down to a compiler optimization bug; what would I gain from that? Telling my users to edit the encryption library's makefile(s) and turn it down themselves isn't very appealing to me, but it is an option.

This depends a lot on the compiler you're using. In some situations, O3 is considered the "experimental" optimization level; anything below that is considered perfectly safe and compiler bugs are exceedingly unlikely. I would be shocked if your code is sophisticated enough to trigger an as-yet-undetected compiler bug in the non-experimental optimization levels.

In other compilers, all provided optimization levels are considered safe.



Bottom line: read your compiler's documentation first, before making knee-jerk changes based on poorly informed intuition.

Quote:
Original post by ApochPiQ
This depends a lot on the compiler you're using. In some situations, O3 is considered the "experimental" optimization level; anything below that is considered perfectly safe and compiler bugs are exceedingly unlikely. I would be shocked if your code is sophisticated enough to trigger an as-yet-undetected compiler bug in the non-experimental optimization levels.


It isn't just my library that I'm thinking about. It is the encryption library as well. I see the non-experimental optimization levels as a continuum with the highest such level being the most likely to still have problems.

My point stands: compiler bugs in the well-known and stable compilers are so rare as to not be worth worrying about, even in the presence of aggressive optimizations (gcc's experimental mode notwithstanding).

You're far better off developing a set of simple automated tests to ensure bugs don't creep in from your own code, than worrying about the minute chance of something going wrong in the compiler itself. Remember, you're hardly the only person using the compiler, and there are literally thousands of people out there who have covered the same use cases and tested the code paths you're likely to invoke in the compiler itself.

Just unit test your libraries and be done with it. That way you have protection from any potential faults, and you can still take advantage of modern technology and use compiler optimizations.
