Is optimization for performance bad, or is optimizing too early bad?

18 comments, last by iMalc 9 years, 5 months ago
I'm not sure if the two are the same or if there is just a correlation. Is optimization all about taking into consideration the space and time complexity of an algorithm (how much RAM the algorithm uses and how long it takes to execute on the data passed to it)?

Optimizing is bad if your "suboptimal" code is actually running perfectly fine on the target system and you have more important things to do.

Optimizing code that you haven't tested is bad because you are most probably fishing in the dark.

You'll see this phrase every so often "Premature optimization is bad". It's true to some degree, but some people misinterpret this as an excuse to write crappy code until the CPU cries for help. Only THEN will they even think about performance optimizations. Don't do that. :)

L. Spiro does an excellent job of explaining this here:

http://www.gamedev.net/topic/661044-optimization-philosophy-and-what-to-do-when-performance-doesnt-cut-it/?view=findpost&p=5180845

- Eck

EckTech Games - Games and Unity Assets I'm working on
Still Flying - My GameDev journal
The Shilwulf Dynasty - Campaign notes for my Rogue Trader RPG

We are getting to a world where there's no such thing as "fast enough."

Consider mobile (including PC laptops). If your game is 12% more efficient, that's 12% longer the player can play your game before the battery dies.

Consider servers. If your app is 5% more efficient, that could add up to many times your salary just in electricity savings for the data center running the app.

On desktops it matters less, but only because we fail kinda hard at being environmentally conscious. For almost every other application domain (and those platforms are increasingly used for gaming) there is a measurable benefit to even a 1% improvement.

PC and console gaming is still able to operate with a "only optimize when you need it" mantra, but only barely so. To quote more than a few different tech leaders, the mentality that "premature optimization is evil" is why Word takes 10x longer to open today than it did on significantly weaker hardware over a decade ago. (I'm not in 100% agreement with that, but not in complete disagreement either.)

This is the difference between novice/junior developers and senior/lead engineers. The experience helps them identify more clearly up front what is actually premature optimization and what is just good sense for a first-cut design.

Sean Middleditch – Game Systems Engineer – Join my team!

This is a confrontational topic at best! Performance comes from data optimisation (for the most part).

Let's say function A has to handle badly organised data. You could spend an eternity maximising its throughput. But functions B, C & D rely on that data being in a certain format, so by reorganising the data just to make function A faster you can make functions B, C & D slower. Takeaway: know your data! (I can't stress this enough!)
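To make that concrete, here is a minimal sketch, assuming a hypothetical particle system (the names and fields are made up for illustration): the same data can be laid out as an array-of-structs or a struct-of-arrays, and reorganising it to speed up one consumer can easily hurt the others.

#include <cstddef>
#include <vector>

// If the hot loop only touches positions, packing every field into one struct
// (array-of-structs) drags unused data through the cache on every iteration.
struct ParticleAoS {
    float x, y, z;
    float vx, vy, vz;
    float age, size;   // cold data the integration loop never reads
};

// Reorganising the same data as struct-of-arrays keeps the integration loop
// reading contiguous, cache-friendly memory...
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<float> age, size;
};

void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}

// ...but a function that needs a whole particle at once (serialisation, a
// per-particle editor) now has to gather fields from eight separate arrays.
// Speeding up one consumer by changing the layout can slow down the others.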

First you measure. Then you look at your measurements and figure out what, if anything, is wrong.

That usually starts about halfway through the project. Just measuring. No changing code. Not yet. Measure early. Measure the size of static space requirements in a log that gets updated daily or in every build. Measure the size of buffers and high-water marks, automatically updated in smoke tests or automated tests. Measure performance values. Have all of them automatically generate log entries so you can track performance over weeks and months.
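As a rough illustration of that kind of automatic measurement (the log format and names here are made up, not any particular tool), a scoped timer can append one line per run to a log file so the numbers accumulate build after build:

#include <chrono>
#include <cstdio>

// Hypothetical helper: times the enclosing scope and appends one line per run
// to a log file, so performance can be tracked across builds over weeks.
class ScopedLogTimer {
public:
    explicit ScopedLogTimer(const char* label)
        : label_(label), start_(std::chrono::steady_clock::now()) {}

    ~ScopedLogTimer() {
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start_).count();
        if (std::FILE* f = std::fopen("perf_log.txt", "a")) {
            std::fprintf(f, "%s: %.3f ms\n", label_, ms);
            std::fclose(f);
        }
    }

private:
    const char* label_;
    std::chrono::steady_clock::time_point start_;
};

void loadLevel() {
    ScopedLogTimer timer("loadLevel");   // logged automatically on scope exit
    // ... actual work ...
}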

After you've measured, and after you have identified a few things that are wrong, you make changes to make that specific thing better.

Then you repeat, periodically, through the end of the project.

The details of exactly what you change are very specific to what you identified.

Usually at first there are some blatantly obvious things. You'll see functions like strlen() inside deeply nested loops. You'll find items added to containers that are too small, causing a large number of reallocations and copying. You'll find lots of calls to empty virtual functions. You'll find searching implementations that take way more time than they are budgeted.
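For illustration, here are minimal before/after sketches of two of those obvious cases (the functions are hypothetical, not from any particular codebase): a strlen() call re-evaluated inside a loop, and a container grown without reserving capacity.

#include <cstring>
#include <vector>

// strlen() is O(n), so calling it in the loop condition re-scans the whole
// string on every iteration - hoist it out.
void toUpperSlow(char* s) {
    for (std::size_t i = 0; i < std::strlen(s); ++i)   // strlen re-evaluated every pass
        if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
}

void toUpperFast(char* s) {
    const std::size_t len = std::strlen(s);            // measured once
    for (std::size_t i = 0; i < len; ++i)
        if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
}

// Growing a vector one push_back at a time can trigger many reallocations and
// copies; reserving the known size up front avoids them.
std::vector<int> makeSquares(int count) {
    std::vector<int> v;
    v.reserve(count);                                   // one allocation instead of many
    for (int i = 0; i < count; ++i)
        v.push_back(i * i);
    return v;
}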

Other times you will notice things by watching the logs. Suddenly the high water mark will be 5x or even 500x higher than before, and you need to track back to where it was introduced, and why. Or you'll notice the data segment is suddenly huge, and you'll want to find out why. Or you'll see that you were following a certain growth rate and suddenly changed slope to a very rapid growth rate, and you'll want to track it back to the source. Having comprehensive metrics in regularly updated logs is very valuable.

When it is time to change things, use your profiler and other measurement tools. Measure carefully up front, then change one thing, then measure again. It is a slow process, but take the time to do it right. Depending on the difference in the results, either discard the change, submit it, or make additional changes and repeat. ALWAYS MEASURE, because sometimes you may think your change is faster, only to discover later that it makes things worse or has other negative performance side effects.

Over time the number of things you can find and fix starts to dwindle. As the clock ticks on you'll find big structural things that could be replaced but, given the time left in the project, decide not to replace them because of the risk.


As for the correlation, yes, you can very often exchange execution time for data space. Lookup tables are an example: you can pre-compute 10,000 answers, which means you pay the cost of storing and looking up the data, but it can be faster to load a 160KB data table than to run big computations very frequently. Other times it is about picking a different algorithm, or looking at bugs in the implementation, or just changing the access patterns into cache-friendly formats (currently that means mostly linear, sequentially accessed, in 64-byte units).
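As a small illustrative sketch of that time-for-space trade (the table size here is arbitrary, not the 160KB figure above), a precomputed sine table answers queries with an array lookup instead of a math-library call:

#include <cmath>
#include <cstddef>
#include <vector>

// Trade memory for speed: compute sin() once for a fixed number of angles,
// then answer later queries with an index into the table.
class SineTable {
public:
    explicit SineTable(std::size_t entries = 4096) : table_(entries) {
        const double kTwoPi = 6.283185307179586;
        for (std::size_t i = 0; i < entries; ++i)
            table_[i] = std::sin(kTwoPi * static_cast<double>(i) / static_cast<double>(entries));
    }

    // Approximate sin(radians), quantised to the table resolution.
    double operator()(double radians) const {
        const double kTwoPi = 6.283185307179586;
        double t = radians / kTwoPi;
        t -= std::floor(t);                                   // wrap into [0, 1)
        std::size_t i = static_cast<std::size_t>(t * static_cast<double>(table_.size())) % table_.size();
        return table_[i];
    }

private:
    std::vector<double> table_;
};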

i use the following approach:

one of the first things i put in a new game is a fps meter so i can track overall basic performance as features are added.
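a minimal sketch of such a meter (hypothetical names, assuming it's called once per frame from the main loop):

#include <chrono>
#include <cstdio>

// call once per frame; prints the average frame rate once a second so overall
// performance can be watched as features are added.
void updateFpsMeter() {
    using clock = std::chrono::steady_clock;
    static clock::time_point windowStart = clock::now();
    static int frames = 0;

    ++frames;
    auto now = clock::now();
    double elapsed = std::chrono::duration<double>(now - windowStart).count();
    if (elapsed >= 1.0) {
        std::printf("FPS: %.1f\n", frames / elapsed);
        frames = 0;
        windowStart = now;
    }
}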

when adding a feature, i consider 3 things:

1. existing algos - what's fast and what's not.

2. brute force - often fast AND simple

3. what code (syntax) is fast and what code isn't

then i select the simplest to implement algo that seems fast enough and implement it.

if the implementation performs as expected, i move on to the next feature.

if there's an unacceptable performance hit, then i profile and optimize.

this keeps things more or less optimized as i go along.

occasionally small performance hits from many new features can start to add up and slow things down. then i profile and optimize.

since i select algos and syntax with speed in mind upfront, and sort of optimize as i go along, i find i have little or no optimization required at the end.

i've had to change development languages twice (basic to pascal to c/c++) and compilers three times (basic to pascal to watcom, to MS) all in the quest for speed. so i've learned to code with speed in mind from the get go.

note that this doesn't mean i engage in premature optimization. i will only select a more complex yet faster algo if i know from experience or research that a simpler, slower algo won't cut it. so i favor stuff like AABB vs more complex intersect tests etc. but i DO make an effort to not use slower code when faster code is just as easy. so i'll tend to favor static arrays over things like template-based linked lists etc. mostly this is in the form of using older syntaxes that have less built-in error checking, stuff like #define vs enum. not exactly recommended for the inexperienced programmer or for group projects. i can get away with it because that was all there was to work with when i started in game development, so i'm well aware of the "gotchas" you have to look out for.
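a minimal sketch of that static-array style (illustrative names only; whether it actually beats a node-based container depends on the data and should be measured):

#include <cstddef>

// fixed-capacity pool in a static array: no runtime allocation, entities stay
// contiguous in memory, and iteration is a plain linear scan.
#define MAX_ENTITIES 1000   // older #define style; an enum or constexpr works too

struct Entity {
    int   active;           // 0 = free slot
    float x, y;
};

static Entity entities[MAX_ENTITIES];

void updateEntities(float dt) {
    for (std::size_t i = 0; i < MAX_ENTITIES; ++i) {
        if (!entities[i].active)
            continue;
        entities[i].x += dt;   // placeholder "work"
    }
}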

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

To quote more than a few different tech leaders, the mentality that "premature optimization is evil" is why Word takes 10x longer to open today than it did on significantly weaker hardware over a decade ago. (I'm not in 100% agreement with that, but not in complete disagreement either.)


Honestly, I would disagree with it completely if that's the full extent of their usage of the quote; the whole point of the 'premature optimisation' thing isn't "don't optimise until you need to" but "don't try to optimise until you have profiled" with the rider "but don't write dumb code to start with either..".

People who throw quotes around in a lazy manner annoy the fuck out of me, because they mangle the message to serve their own agenda...

It's worth looking at the full quote, since contrary to the usually misquoted fragment of a sentence, there is actually a lot of wisdom in it.

There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.

I've underlined a few words which I find particularly important.

What is Hoare saying here?

  1. Don't waste your time worrying and thinking about noncritical parts that won't really matter in the end.
  2. Identify what's critical first.
  3. Then spend your time on optimizing that, and don't miss that opportunity.
  4. Screw the rest.

Nowhere does it say that you shouldn't optimize early, or even that you shouldn't optimize at all. Optimizing early is good; it's only bad when you waste your time too early, that is, before you actually know what matters.

We are getting to a world where there's no such thing as "fast enough."
Consider mobile (including PC laptops). If your game is 12% more efficient, that's 12% longer the player can play your game before the battery dies.


This is very true. But again, what's critical is what really matters. Many people optimize in the manner of "as long as I get 60fps, it's good, it doesn't need to be faster". This is, of course, still an opportunity worth optimizing. Even if you don't see a difference between 60fps at 100% CPU and 60fps where the CPU is sleeping 50% of the time waiting for vertical sync, you will certainly see the difference in battery drain.
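As a minimal sketch of that idea (a hypothetical fixed 60 Hz limiter; a real game would normally rely on vsync or the platform's frame pacing), sleeping until the next frame deadline keeps the same frame rate while letting the CPU idle instead of spinning:

#include <chrono>
#include <thread>

// If a frame finishes early, sleep until the next 60 Hz deadline instead of
// burning the rest of the frame at 100% CPU - same frame rate, less battery drain.
void runFrameLimited(void (*updateAndRender)()) {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667);   // ~1/60 s
    auto nextDeadline = clock::now() + frameBudget;

    for (;;) {
        updateAndRender();
        std::this_thread::sleep_until(nextDeadline);   // CPU idles here
        nextDeadline += frameBudget;
    }
}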

Sadly, a lot of development teams still go by "if we meet the minimum frame rate on the minimum spec, it's good enough". Well, it's a valid point of view, in some way... since you're not getting paid more for a better program. But as mobile hardware evolves, this will surely have to change eventually.

On the other hand, whether it takes 1.5 seconds or 2.0 seconds for your game to start up makes no difference. Naively, that's 33% more energy consumed, but it doesn't matter since startup happens just once, not all the time. The user likely won't notice either. Optimizing here is kind of pointless.

Sometimes I come across code full of inefficient habits (and superfluous methods and fields). I find that getting into the habit of designing code to be efficient early on is very helpful. The posters above give great advice.

