Is optimization for performance bad, or is optimizing too early bad?


We are getting to a world where there's no such thing as "fast enough."

Sure, unless you have an actually optimal solution that simply cannot be improved upon, there's always some benefit to optimizing further.

That's great in theory, but in practice we have other considerations. It takes time and effort to optimize software and at some point we have to accept that what we have is "good enough" to ship. It would be nice to optimize further so that start-up times are just a bit quicker, more resources are left free for other programs, and we can be more environmentally friendly, but if the start-up time isn't overly long and customers consider the performance to be acceptable it's often hard to justify continued work - especially if someone else is paying the bills.

Other than that, I agree with the overwhelming majority of excellent comments above as well as those in the topic "optimization philosophy and what to do when performance doesn't cut it" (also linked above by Eck).

This is computer science, not computer voodoo -- you should always use your tools to make proper measurements so that optimization can be an intelligent and properly informed process, but optimizations are a good thing and are often necessary. We also shouldn't use the existence of these tools or some misguided philosophy as an excuse to write bad code or avoid obvious well-known improvements in the first place.

- Jason Astle-Adams


I like to look at it this way...

Is time costing someone else money?

1. Plan (find the best method).
2. Implement.
3. Test.
4. Continue.

Time costing no one money?

1. Implement.
2. Test.
3. Continue.
4. Finish.
5. Optimize.

Check out my open source code projects/libraries on My Homepage! You may learn something.

I am a total, radical fan of premature optimization. This stems from the fact that I am a very "make it faster" hobby programmer. I realized that writing my projects optimized from the very beginning saves me from refactoring and re-debugging them if I decide to optimize later on. It actually keeps me bound to a very good policy when designing a project (OOP only for rare objects, critical data being data-oriented (DOP)). But I do leave isolated parts of the project unoptimized, in a way that lets them be optimized later without refactoring the outer parts. In the end, I am very satisfied with how the project is designed: scalable, fast, and still with the potential to optimize safely.
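To make that "OOP only for rare objects, critical data being DOP" split concrete, here is a minimal sketch (my own illustration, not code from this thread; the ParticleSystem and Boss names are invented): performance-critical per-frame data lives in flat, contiguous arrays so the hot loop streams through memory, while rare, complex objects stay comfortably object-oriented.

```cpp
#include <cstddef>
#include <vector>

// Hot path: data-oriented (struct-of-arrays) layout for the things updated
// thousands of times per frame. Each field is a contiguous array.
struct ParticleSystem {
    std::vector<float> posX, posY;
    std::vector<float> velX, velY;

    void update(float dt) {
        for (std::size_t i = 0; i < posX.size(); ++i) {
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
        }
    }
};

// Rare objects: only a handful of these exist per level, so virtual dispatch
// and pointer-chasing cost nothing that matters. Plain OOP keeps them readable.
class Boss {
public:
    virtual ~Boss() = default;
    virtual void think(float dt) = 0;  // expensive per-object AI
};
```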

Big systems need to be optimized through their architecture and design. If you wait to optimize foundational systems, it's often too late to change them later.

The full quote from Knuth adds more context. Note that he is talking about small efficiencies.

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

You can only benefit from optimizing your code early when you have had the experience of writing similar code. Otherwise, how would you know that your new-and-improved code actually performs better? You see, to be absolutely sure that the new code performs faster, one has to profile both the old and the new code and compare the results. Meaning, the old code has to exist first and get profiled, then the new code comes along and gets profiled, and only then do you have comparable results. Otherwise, it's all just a wild guess!
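As a crude (and entirely hypothetical) illustration of "profile the old and the new, then compare": a small timing harness with std::chrono. A real profiler (perf, VTune, or your platform's own tools) gives far better data, but the principle is the same: run both versions on the same workload and let the numbers decide. The sumOld/sumNew functions are placeholders.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <vector>

// Placeholder "old" and "new" implementations of the same task.
static long long sumOld(const std::vector<int>& v) {
    long long s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
    return s;
}
static long long sumNew(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// Time a callable over many iterations and report milliseconds.
template <typename F>
static double timeMs(F&& f, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    std::vector<int> data(1000000, 1);
    volatile long long sink = 0;  // keeps the optimizer from discarding the work

    double oldMs = timeMs([&] { sink = sink + sumOld(data); }, 100);
    double newMs = timeMs([&] { sink = sink + sumNew(data); }, 100);

    std::printf("old: %.2f ms  new: %.2f ms\n", oldMs, newMs);
}
```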

In my experience, it's much better to write something that's readable first -- a piece of code with clear variable, class, and function names, and steps neatly laid out with comments explaining each one. When you're working on a large project, it's very likely that you'll forget what you wrote a week ago, since you're constantly moving from one module to another. Writing readable code allows you to revisit that code, reanalyze it, and perhaps optimize it when the situation calls for it.

If you don't know what you are doing, or this is the first time you are implementing the feature, don't worry about optimization -- don't worry at all.


Consider mobile (including PC laptops). If your game is 12% more efficient, that's 12% longer the player can play your game before the battery dies.

What metric are you using to define "efficient"? I would have thought that the vast majority of a device's power consumption isn't really affected by whether the code it's running is efficient or not.

if you think programming is like sex, you probably haven't done much of either. - capn_midnight

Consider mobile (including PC laptops). If your game is 12% more efficient, that's 12% longer the player can play your game before the battery dies.


What metric are you using to define "efficient"? I would have thought that the vast majority of a device's power consumption isn't really affected by whether the code it's running is efficient or not.


On mobile or laptops it's better to peg all cores for less time than to use only 50% of the CPU for longer. You want to get in, do your work, and get out so the CPU can go back to sleep and conserve power again.

On mobile or laptops it's better to peg all cores for less time than to use only 50% of the CPU for longer. You want to get in, do your work, and get out so the CPU can go back to sleep and conserve power again.

This. Although not just on mobile, it applies on desktop as well.

Power, or electricity, costs money (yes, people in some countries such as Australia, where electricity reportedly costs about 10% of the price in Europe, may be raising an eyebrow now, but electricity truly is not free even if you pay less for it), and component lifetime depends on temperature.

EDIT:

To give a figure, my Windows 7 desktop costs me about 4.23€ (~ $5.39) per month in electricity. Until about a year ago the same desktop was running Windows XP Professional, which is just slightly less energy efficient overall.

Back then the same desktop cost me about 5.14€ (~ $6.54) per month in electricity; that is a difference of about 0.91€ per month, or roughly 11 Euros per year.

Now imagine you write some software, 500 million people use it, and every one of them is paying roughly 11 Euros per year for nothing... you might as well have harvested another five and a half billion in currency instead of having them waste it on electricity.

To give a figure, my Windows 7 desktop costs me about 4.23€ (~ $5.39) per month in electricity. Until about a year ago the same desktop was running Windows XP Professional, which is just slightly less energy efficient overall.
Back then the same desktop cost me about 5.14€ (~ $6.54) per month in electricity; that is a difference of about 0.91€ per month, or roughly 11 Euros per year.

Now imagine you write some software, 500 million people use it, and every one of them is paying roughly 11 Euros per year for nothing... you might as well have harvested another five and a half billion in currency instead of having them waste it on electricity.


If you are writing the most widely used desktop OS on the planet, I 100% agree that you should work as hard as you can to make it as efficient as humanly possible.

On the other hand, if you're writing a game/app that will be used by 100000 people (if you're lucky), you are almost certainly going to waste more energy spending the time to optimise it (development time costs energy) than you'll ultimately save.

Besides the environmental question, there is also the question of your responsibility to your employer. If the game/app/whatever works on the target platform and is good enough, then unless you can demonstrate that what you are doing will increase revenue, you're wasting your employer's time.

All of this is a matter of judgement in individual cases of course, but there are plenty of instances where "fast enough" is fine.

if you think programming is like sex, you probably haven't done much of either. - capn_midnight
You can optimize till the cows come home, and you do reach a point of diminishing returns as time goes on. But that doesn't mean you shouldn't optimize, or that you should pick a slow algorithm over a fast one when there is no reason not to use the fast one.

However, there are still some nuances. If you're making something for a PC or console, you're going to want the highest sustainable framerate you can get, so your optimizations will focus on reducing frame time. If you're making a game for a mobile device, you may want to optimize and then cap your framerate so the CPU has time to rest and doesn't chew up the battery.
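A minimal sketch of that frame-capping idea (my own illustration, not engine code; the ~30 FPS budget and updateAndRender are arbitrary placeholders): finish the frame's work as fast as possible, then sleep out the rest of the frame budget so the CPU can drop into a low-power state instead of spinning.

```cpp
#include <chrono>
#include <thread>

// Hypothetical per-frame work; in a real game this would be update + render.
void updateAndRender() { /* ... */ }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto frameBudget = std::chrono::microseconds(33333);   // ~30 FPS cap

    for (int frame = 0; frame < 3; ++frame) {            // stand-in for the real game loop
        auto frameStart = clock::now();

        updateAndRender();                                // race to finish the work...

        auto elapsed = clock::now() - frameStart;
        if (elapsed < frameBudget) {
            std::this_thread::sleep_for(frameBudget - elapsed);   // ...then let the CPU idle
        }
    }
}
```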

And before anyone asks: no, 30 fps is not "more cinematic". (At least get your facts straight... cinemas run at 24 fps!) Developers may be willing to sacrifice framerate, responsiveness, and resolution to make things "more shiny", but that certainly isn't "cinematic".

In the end:
1. Write readable, maintainable, and correct code.
2. Don't pessimise (see the sketch after this list).
3. Measure.
4. Optimize what your measurements tell you to optimize.
5. Repeat until you meet your performance requirements.
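To make "don't pessimise" concrete (again my own example, not from the thread): avoid gratuitous waste that costs nothing to avoid up front, such as copying large containers by value just to read them, even before any measuring happens.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Pessimised: the vector and every string get copied just to be read.
std::size_t totalLengthSlow(std::vector<std::string> names) {          // pass by value
    std::size_t total = 0;
    for (std::string n : names) total += n.size();                      // copies each element
    return total;
}

// Not pessimised: same readability, no needless copies.
std::size_t totalLengthFast(const std::vector<std::string>& names) {   // pass by const reference
    std::size_t total = 0;
    for (const std::string& n : names) total += n.size();
    return total;
}
```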

