Simple & Robust OOP vs Performance

Good day peeps,

I'd like to talk about a project I started: a 2D game targeting Android 2.2 and above, using OpenGL ES 2.0, and coded with the standard (up-to-date) Android Java SDK rather than the NDK.

Ever since I started developing this platform RPG, I have taken special care over micro-optimizing the code, because I felt it would have a big impact in the long term, especially because it's a serious project involving help from other designers and creative people.
In terms of performance, the experts say that you should profile your game and keep measuring it on real devices in order to see where your bottleneck really is.

In my experience (and please flame me if you feel I'm wrong), for a time-consuming, serious project this is not 100% true. You spend time developing tools for the other guys to work within your engine framework, and laying out how bitmaps should be drawn so they can be used as textures (standardizing them), and most of the time you need to save time by optimizing parts of the code from the very beginning (assuming you have a good code design), in the parts where you already know you will get a performance boost
(e.g. reducing the chances for the garbage collector to run).

As a project leader and coder, I have realized that in the end it is better to have a robust, well-designed class rather than an "optimized" all-in-one class.

For instance, I had a class called:

Monster
That had 100 methods



Sure, it had a lot of micro-optimization inside those methods (goodbye readability, I know; good practices, still!).

But after some time, I figured out that it was better to have a class:
Monster
with 5 methods and 3 member objects inside:
MonsterAttackModule
MonsterAIModule
MonsterBodyModule
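
Roughly, something like this (just a sketch; the module methods shown are purely illustrative):

class MonsterAttackModule { void attack(Monster m) { /* attack logic */ } }
class MonsterAIModule     { void think(Monster m)  { /* AI logic */ } }
class MonsterBodyModule   { void move(Monster m)   { /* movement/physics */ } }

class Monster {
    // Composition: each monster owns its three modules.
    private final MonsterAttackModule attackModule = new MonsterAttackModule();
    private final MonsterAIModule     aiModule     = new MonsterAIModule();
    private final MonsterBodyModule   bodyModule   = new MonsterBodyModule();

    // One of the handful of public methods left on Monster itself.
    public void update() {
        aiModule.think(this);
        bodyModule.move(this);
    }

    public void attack() {
        attackModule.attack(this);
    }
}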

With the second approach I still have some micro-optimization and more readability, but more objects.
That's more memory, isn't it?

So my point is that in the end it is better to strike a balance: readable, robust OOP code, micro-optimized here and there, and profiled as the experts say, which makes the original advice not completely true, right? :)
1 class with 100 methods VS multiple classes with a few methods is not really about performance but more about sensible design.

Why do you think one "mega class" would perform better?
How can you optimize code without profiling? How can an optimization be effective if you don't know whether you're dealing with a bottleneck or not?

I've worked on some larger projects myself, and I can tell you for sure that premature optimization is something you definitely want to avoid.

Write your code according to the proper standards. If you're working with OOP then make sure you properly follow the rules of an object oriented design. A class with 100 methods is definitely a serious violator of the single responsibility principle, which is an enormously important concept within OOP.

If you find that you're having performance issues after following the proper standards you should not resort to micro-optimizations, but you should look at optimizations at a larger scale. Rewrite algorithms which pose a bottleneck, go over some larger systems and see which components could be altered to run faster, etc.
If this still shows performance problems you should probably consider whether it's the design itself that just isn't optimal for the problem you're trying to solve; micro-optimizations should come at the very end of the optimization process, when every other step has failed to provide a proper solution, IMO.

I gets all your texture budgets!


1 class with 100 methods VS multiple classes with a few methods is not really about performance but more about sensible design.

Why do you think one "mega class" would perform better?


From developer.android.com best practices:
"Object creation is never free. A generational GC with per-thread allocation pools for temporary objects can make allocation cheaper, but allocating memory is always more expensive than not allocating memory."

Thus, you should avoid creating object instances you don't need to.
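
For example (just a sketch; the Particle class and its reset() method are made up for illustration), a tiny pool lets you reuse instances instead of allocating and throwing them away:

import java.util.ArrayList;

class Particle {
    float x, y, z;
    void reset() { x = y = z = 0f; }   // clear per-use state
}

class ParticlePool {
    private final ArrayList<Particle> free = new ArrayList<Particle>();

    Particle obtain() {
        int n = free.size();
        // Reuse a pooled instance when available; allocate only when the pool is empty.
        return (n > 0) ? free.remove(n - 1) : new Particle();
    }

    void recycle(Particle p) {
        p.reset();
        free.add(p);
    }
}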

Unless I'm wrong, having 1 class with 100 methods, versus 1 class plus 10 helper classes each with 10 methods, will result in
1 object vs. 11 objects.

What do you mean by sensible design?





How can you optimize code without profiling? How can an optimization be effective if you don't know whether you're dealing with a bottleneck or not?

I've worked on some larger projects myself, and I can tell you for sure that premature optimization is something you definitely want to avoid.

Write your code according to the proper standards. If you're working with OOP then make sure you properly follow the rules of an object oriented design. A class with 100 methods is definitely a serious violator of the single responsibility principle, which is an enormously important concept within OOP.

If you find that you're having performance issues after following the proper standards you should not resort to micro-optimizations, but you should look at optimizations at a larger scale. Rewrite algorithms which pose a bottleneck, go over some larger systems and see which components could be altered to run faster, etc.
If this still shows performance problems you should probably consider whether it's the design itself that just isn't optimal for the problem you're trying to solve; micro-optimizations should come at the very end of the optimization process, when every other step has failed to provide a proper solution, IMO.


My point was to explain my realization that you can't follow a guideline or set of best practices all the way, every time;
you need to strike a balance: you should profile your application, but also not be naive about applying some optimization up front.

An example of micro-optimization, done without profiling first:


code not micro-optimized (java):


...


//A random ArrayList of objects
ArrayList<AnotherObject> arrayList = new ArrayList<AnotherObject>(100);

//A vector we will be using
Vector3D utilityVector3D = new Vector3D(0, 0, 0);

for (int i = 0; i < arrayList.size(); i++)
{
    AnotherObject obj = arrayList.get(i);

    utilityVector3D.x = obj.x;
    utilityVector3D.y = obj.y;
    utilityVector3D.z = obj.z;
}


...

code micro-optimized:


...

//utilityVector3D is now a field of the enclosing class,
//instantiated once in the constructor:
//Vector3D utilityVector3D = new Vector3D(0, 0, 0);

//The ArrayList is also a field, instantiated once in the constructor:
//ArrayList<AnotherObject> arrayList = new ArrayList<AnotherObject>(100);
arrayList.clear(); // reuse the existing list instead of allocating a new one

final int sizeList = arrayList.size();

for (int i = 0; i < sizeList; i++)
{
    final AnotherObject obj = arrayList.get(i);
    utilityVector3D.x = obj.x;
    utilityVector3D.y = obj.y;
    utilityVector3D.z = obj.z;
}


...



Here you avoid giving the garbage collector anything to collect, at the expense of holding on to some memory.

My point was that it is good to do some micro-optimization now and then, but at the same time to balance it with simple, robust objects, as in my 1 object vs. 11 objects example above.


Am I wrong?
Thanks!
Moving you to General Programming -- Game Design is for discussion of game mechanics and balance rather than code design.


A few thoughts:

If you know a certain technique will yield better performance on your target platform and that implementing that technique will not impose undue difficulty when compared to a simpler approach then use of that technique is not really optimisation (micro or otherwise), it's just being sensible. However, this only applies if you know -- from extensive prior experience, because it is well established best practice, or because you have profiling data -- the technique to be better, not just because you think it will be better, a few unsubstantiated online sources claim it might be better, or some line in the documentation can be interpreted as suggesting -- as opposed to clearly stating that -- it is better.


It's true that garbage collection running too often or at undesirable times can negatively impact performance -- particularly on hardware limited devices such as mobile platforms -- and is an area you should pay careful attention to, but using excessive amounts of memory can also be problematic, and sometimes the garbage collector will run without negatively impacting your performance. Again -- unless you know that garbage collection is going to cause a problem, unless it's an idiomatic usage to avoid it in a specific situation on the target platform, or unless you have prior experience of it causing problems on the platform -- you're probably better going with whichever method is simpler and more natural to implement until you have profiling data that shows it's actually causing a problem.

Remember that compilers and other tools are generally designed to provide the most benefit to the most common usage patterns; doing something differently than the "normal" way without a clear benefit may incur other unexpected optimization costs.


"Simple and robust OOP" -- if properly implemented -- should not be something you necessarily need to trade off against performance in the majority of cases. As with everything, don't use OOP where it isn't appropriate, but when you do use it make an effort to do so properly and you'll find that in most cases it doesn't introduce any undue performance overhead.

- Jason Astle-Adams

Oh, and one more:

When developing in a garbage collected language -- especially on a platform with limited capabilities -- be aware of how calls to standard library or third party code may be allocating memory. You can optimize your own code as much as you like, but if the problem is coming from how you're using -- or in some cases even the fact that you've decided to use -- some standard or third party code then it won't do you any good. Don't take this to the extreme of not trusting any other code or following silly blanket-rules of not using anything that allocates memory however, or you'll kill your productivity and risk introducing additional bugs into your less-well-tested alternative code.
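
To give one concrete Java example of this kind of hidden allocation (a sketch only, and newer VMs can sometimes optimize it away): iterating an ArrayList with the enhanced for loop asks the list for a new Iterator object every time it runs, while an indexed loop does not.

import java.util.ArrayList;

class IterationCost {
    // Allocates an Iterator behind the scenes on every call.
    static float sumWithForEach(ArrayList<Float> values) {
        float sum = 0f;
        for (Float v : values) {
            sum += v;
        }
        return sum;
    }

    // No Iterator is created; only existing elements are read.
    static float sumWithIndex(ArrayList<Float> values) {
        float sum = 0f;
        for (int i = 0, n = values.size(); i < n; i++) {
            sum += values.get(i);
        }
        return sum;
    }
}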

This is also yet another example where profiling to find the real source of problems can avoid unnecessary work and allow you to focus your efforts on genuine problems. Use standard libraries and useful third-party code where appropriate, and replace things or change your usage patterns if profiling data shows them to be the cause of problems.

- Jason Astle-Adams


Unless I'm wrong, having 1 class with 100 methods, versus 1 class plus 10 helper classes each with 10 methods, will result in
1 object vs. 11 objects.

What do you mean by sensible design?


1) Unless you're spawning many monsters every second, the overhead of spawning 11 objects per monster instead of just 1 is negligible and typically not worth compromising your design. This is in fact a perfect example of why you shouldn't engage in these types of "optimizations" without analyzing your bottlenecks first. If it has no impact on the overall performance of your application, it is a useless optimization.

2) When you have something like MonsterAttackModule, if it is primarily just a collection of methods and doesn't manage state, you can usually get away with only having a single MonsterAttackModule instance per monster type, thus incurring no additional allocation overhead per monster. Or perhaps just a collection of static methods, and thus incurring no allocation overhead at all.
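
A rough sketch of that second point, reusing the class names from this thread (the damage math is invented purely for illustration):

// Stateless module: all per-monster data is passed in,
// so one instance can serve every monster of a given type.
class MonsterAttackModule {
    private final int baseDamage;

    MonsterAttackModule(int baseDamage) {
        this.baseDamage = baseDamage;
    }

    int computeDamage(int monsterStrength) {
        return baseDamage + monsterStrength;
    }
}

class Monster {
    // Shared once per monster *type*, not allocated per monster instance.
    private static final MonsterAttackModule SHARED_ATTACK = new MonsterAttackModule(5);

    private int strength = 3;

    int attack() {
        return SHARED_ATTACK.computeDamage(strength);
    }
}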

3) Paying more attention to your higher level design can produce performance benefits many orders of magnitude greater than your micro-optimizations. Compromising such a design in any way in favor of micro-optimizations without profiling will almost always hurt you in the long run.


This doesn't mean that you should never think about low-level performance issues as you write your code; you should, but only where you know it will make a significant, noticeable difference and won't compromise your design in any meaningful way.

E.g. block-copying memory instead of copying it one byte at a time is always a good idea, because it has a significant performance impact and doesn't usually result in any significant design compromise. Doing string1.append(string2) instead of string1 += string2 is also usually a good idea, as it usually eliminates the creation of temporary objects and has no impact on your design.
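
In Java those two cases might look something like the following sketch (using StringBuilder as the concrete stand-in for string1.append(string2); names and sizes are arbitrary):

class SmallWins {
    // Bulk copy in one call instead of copying one element at a time.
    static int[] copyScores(int[] src) {
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    // Appending into one builder instead of += on Strings, which would
    // create a brand new String object on every iteration.
    static String joinNames(String[] names) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < names.length; i++) {
            sb.append(names[i]);
        }
        return sb.toString();
    }
}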

Creating a monolithic class with 100 methods because you think it's faster to allocate is definitely not a good choice for a premature optimization.
You're using Java, which immediately destroys your goal of maximum performance. Java simply does not give you enough low-level control to achieve some important optimizations (such as cache coherency, branch prediction hints, etc...), so your micro-optimization efforts are futile at best. Besides, I'll be surprised if you manage to measure any difference in runtime between your two classes with a half-modern system, so stop worrying about performance.

Good design naturally yields good performance, but the reverse is not true. Therefore, build your project with design in mind, and add any optimizations at the end if they are readable (for instance, inserting three hundred lines of assembly into a twenty-line method to optimize some algorithm is probably not the way to go - if you really need the speed, write the optimized code in a different routine and call it from your method) and meaningful.

Also, interpreters and compilers are very good at noticing common programming patterns (such as copying memory byte by byte) and optimizing them using the available hardware (such as SIMD instructions). If you try to optimize trivial things yourself, the compiler/interpreter might not recognize the pattern and not optimize anything, resulting in slower performance overall.

Now there *are* things worth optimizing to death - for instance, a general-purpose pointer-based binary search is a well-established algorithm, which can be (provably) correctly implemented in the most efficient way possible, and stored in a code library for easy reuse in any language and project - nobody will ever need to change, or even look at this code, and it can be considered an optimally efficient black-box binary search. However, this is not the case for 99% of code, which needs to constantly be checked for bugs and upgraded in various ways: if this is the case, maintainability beats performance by a large margin.
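
In Java, for instance, that black box already exists in the standard library, so there is nothing of your own to maintain or re-optimize:

import java.util.Arrays;

class SearchExample {
    // sortedIds must already be sorted in ascending order for binarySearch to be valid.
    static boolean containsId(int[] sortedIds, int id) {
        return Arrays.binarySearch(sortedIds, id) >= 0;
    }
}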

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”


1 class with 100 methods VS multiple classes with a few methods is not really about performance but more about sensible design.

Why do you think one "mega class" would perform better?

Because he is using Java on a mobile device.

Back when dinosaurs roamed the earth my job was mobile development with Java.
In-house there was a running gag related to the fact that mobile devices supported only Java, yet Java was the worst choice due to how much overhead there was in classes etc.

We were basically limited to a maximum of 3 classes per game, but urged to use only one.


That being said, we no longer live in a world of Nokia. Android devices are much more powerful and have much more memory than those of old times, and we really don’t need to care so much about this type of overhead. On any modern device, you are wasting your time if you are worrying about this kind of thing.


But there are still optimizations that you should do whenever possible.
For one, count down to 0 in for-loops whenever possible.
In Java, it looks like this:
for ( int i = iTotal; --i >= 0; ) {
    // loop body; i runs from iTotal - 1 down to 0
}


This is also faster in C/C++ (there are multiple ways to compare against 0, and those instructions are smaller and faster), but in Java it is a major help: it reduces the size of the code itself significantly while also providing faster iteration.
If the order of the loop does not matter, you should always always always do it this way.
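
For example, applied to summing a plain array where iteration order is irrelevant (the array and method names are made up):

class LoopExample {
    static float sumDamage(float[] damageValues) {
        float total = 0f;
        // The loop test compares against zero (the cheap case) and the
        // length field is read only once, when the loop starts.
        for (int i = damageValues.length; --i >= 0; ) {
            total += damageValues[i];
        }
        return total;
    }
}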


There are a lot of little things such as this that can help, but I can’t pull them off the top of my head.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

The false conclusion - fewer objects equals better performance - is in itself an excellent example of the need for profiling to identify areas to optimize.

In most languages, methods do not form any part of the memory footprint of an instance of a class. There is some overhead to an allocation, and there can be some minimal overhead for a class instance, but these are so unlikely to be significant in anything other than highly specialized corner cases that your effort is wasted.

Even if you do prove via profiling that these issues are relevant, there are ways to address them without sacrificing good design, using alternative allocation strategies and so on, provided the language you use is sufficiently low level to support those options.

But you haven't proven it yet, so assume it isn't relevant until you do.

This topic is closed to new replies.
