Nicholas Kong

Is squeezing performance out of a console the same as premature optimization?


How does a programmer squeeze performance out of a console, and is it a collaborative effort?

 

Is squeezing performance out of a console the same as premature optimization?

 

Is squeezing performance out of a console done early on when the game is being developed for the console, when the game is halfway done, or when the game is close to the deadline?

 

Are there any similarities between squeezing performance out of a console and optimizing an application?

 

This thought dawned on me when I realized I could optimize the collision method in my 2D game by adding an extra condition that forces the logic to execute only at a certain point rather than running unnecessarily.
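Something like the following sketch, which is my own hedged illustration of the kind of early-out the poster describes (the Entity fields and function names are hypothetical): a cheap broad-phase check guards the expensive collision logic so it only runs when it can actually matter.

```cpp
// Hypothetical sketch of an early-out guard on a collision test.
struct Entity {
    float x, y;    // position
    float radius;  // bounding radius
};

// Cheap broad-phase check: compare squared distance against the
// combined radii, avoiding a sqrt call entirely.
bool isNear(const Entity& a, const Entity& b) {
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    float r  = a.radius + b.radius;
    return dx * dx + dy * dy <= r * r;
}

void checkCollision(Entity& a, Entity& b) {
    if (!isNear(a, b))
        return;  // the extra condition: bail out before the costly logic
    // ... expensive narrow-phase collision logic runs only here ...
}
```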

 

Does the premature optimization rule still hold today? The phrase was coined in 1974, and a lot of time has passed since then. But I figure optimizing something like the collision logic in my game will help its performance. When I ran the game with and without the optimization I saw no noticeable difference, but the change only took about two minutes, so no harm was done, and I feel the optimization will help in the long run when the game has more enemies.

 

I hope asking this many questions is allowed. I figured I'd post them all here since all of my later questions relate to the first one.

 


Squeezing performance and premature optimization are not the same. There is no single definition of premature optimization, really. You want to think about performance early on and make correct architectural decisions that allow you to optimize and run your code efficiently. Good architecture that was built with the platform's performance characteristics in mind is vital.

When I think of premature optimization I typically think of shortcuts that were taken before the problem space was fully realized. Sometimes you can "optimize" something in a way that doesn't benefit you at all (in terms of design OR performance) later on because you applied the optimization before you really knew what kinds of problems your code was going to be asked to solve.

EDIT: Another similar optimization pitfall is performing a bunch of "optimizations" without really knowing why you're performing them. Profile! Make optimizations where necessary. Time is limited; you don't want to spend ages rewriting code just for a 0.01 ms speedup. It's tempting to try to fix things you "know" (read: feel) are slow. Don't. Fix things you KNOW (read: have evidence for) are slow. That means profiling and experimenting and seeing what works and what doesn't in terms of performance.
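In that spirit, here is a minimal measurement sketch of my own (updateCollisions is a hypothetical hotspot, not anything from this thread); a real profiler gives far more detail, but even crude timing beats guessing.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical hotspot we *feel* is slow; the point is to measure it.
void updateCollisions() {
    // ... collision logic under test ...
}

int main() {
    using Clock = std::chrono::steady_clock;

    auto start = Clock::now();
    updateCollisions();
    auto end = Clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    // Only optimize if the numbers say it matters.
    std::printf("updateCollisions took %lld us\n", static_cast<long long>(us));
    return 0;
}
```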

Ah okay. Thanks. Great answer.
 


+1 for L. Spiro. Premature optimization is the root of all evil: good programmers are lazy, and wasting your time optimising things that don't matter is a bad hangover I see from ASM programmers of the past (I started just after that, around PS1 time). Use a profiler and optimise what actually takes the most time to run.

 

Picking a better algorithm normally beats low level optimisations anyway.
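As a hedged illustration of that point (this example is mine, not from the thread): no amount of low-level tuning changes the complexity of a linear scan, while swapping the data structure does.

```cpp
#include <unordered_set>
#include <vector>

// Micro-optimising this scan (unrolling, branch hints, ...) still
// leaves every lookup O(n).
bool containsLinear(const std::vector<int>& ids, int id) {
    for (int v : ids)
        if (v == id) return true;
    return false;
}

// Switching to a hash set makes each lookup O(1) on average --
// an algorithmic win no low-level tweak can match.
bool containsHashed(const std::unordered_set<int>& ids, int id) {
    return ids.count(id) != 0;
}
```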



A “premature” optimization is... an optimization that either makes things slower or causes bugs.

It's also pretty much any optimisation that influences architectural decisions early in your project.

 

The cost of large refactors increases dramatically as a project progresses, which makes it imperative that your architecture be sound from the get-go. If you compromise early on architecture because of beliefs about performance, you risk paying the refactor tax many times over. Performance hotspots tend to change over the course of development, whereas architectural challenges don't.


For me, a premature optimization is an optimization that adds complexity to the code, and possibly makes it harder to read, before it has been determined that such an optimization is even needed. There's nothing wrong with writing fast code and optimizing early. I always try to do this, but after examining my project's needs, I sometimes choose the easier albeit slower implementation/algorithm over the faster, more complicated (and potentially buggier) one. If I later find that it's a bottleneck, I'll change it then.

 

When working on hobby projects as the lone coder, this is important. Take this example: a programmer is working on a simple 3D game and uses a 2D array to keep track of his objects' locations in 3D space. It's simple and it works well, but he hears that 2D arrays are slow and quad-trees are faster, so he changes his code. He's new to quad-trees and introduces a bunch of bugs, and he's not sure whether the bugs come from the quad-tree or from other parts of the code. On top of that, he didn't even have that many objects, so the 2D array was actually more than fast enough for his project.
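For concreteness, a minimal sketch of the "simple 2D array" from that story (grid dimensions, cell size, and function name are my own hypothetical choices): it is short enough to get right on the first try, which is exactly its virtue at a small object count.

```cpp
#include <vector>

// A fixed grid of cells over the ground plane; cells[z][x] holds the
// ids of the objects currently inside that cell.
constexpr int   GRID_W    = 64;
constexpr int   GRID_H    = 64;
constexpr float CELL_SIZE = 10.0f;

std::vector<int> cells[GRID_H][GRID_W];

void insertObject(int id, float x, float z) {
    int cx = static_cast<int>(x / CELL_SIZE);
    int cz = static_cast<int>(z / CELL_SIZE);
    if (cx >= 0 && cx < GRID_W && cz >= 0 && cz < GRID_H)
        cells[cz][cx].push_back(id);
}

// Neighbour queries only need to scan a handful of nearby cells --
// trivial to write correctly, and plenty fast for a few hundred objects.
```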

The "premature optimization" advice is one of the most misquoted rules of thumb in programming. It's still good advice, but we interpret it completely differently to Knuths original meaning.
He wrote that in a 1974 article about proper use of the goto statement -- a move towards the function/procedure-based programming that we use today! He was arguing that using goto isn't bad as long as you're following certain structures (like calling a function, then returning to the same place, as we now do with the 'call stack').
To him, a premature optimization was one that made the code unreadable or harder to reason about (e.g. spaghetti code) for the sake of saving one clock cycle...

Modern commentators will use Knuth's phrase, but with entirely new, modern advice implied.

(1)
It's also pretty much any optimisation that influences architectural decisions early in your project.
 
(2)
The cost of large refactors increases dramatically as a project progresses, which makes it imperative that your architecture be sound from the get-go. If you compromise early on architecture ..., you risk paying the refactor tax many times over.

I would disagree with (1), because (2)! ;-)

If early architecture choices do lead to performance problems at the end of a project, then you're screwed.
Wide-reaching architectural choices, which underpin the rest of the code-base, are one time where you have to use a lot of care.
