I have some new habits.

79 comments, last by Tutorial Doctor 10 years ago
If gravity is in a class, then all you would have to do is change parameters (variable values) to get real-world gravity and non-real-world gravity or something else entirely. If gravity is a function, then it's coupled to whatever class or game you're using. So you'll always have to change the function to suit whatever class or game you're making. So reuse is out the window.
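A minimal sketch of that distinction (hypothetical names, C++): gravity as a class is tuned through a parameter, while a free function with the constant baked in has to be edited for every new game.

```cpp
// Hypothetical: gravity as a class. Reuse it anywhere by changing the
// parameter; no code edits needed for Earth vs. Moon vs. fantasy gravity.
class Gravity {
public:
    explicit Gravity(float acceleration) : accel(acceleration) {}

    // v = v0 + a*t, using whatever acceleration this instance was given.
    float velocityAfter(float initialVelocity, float seconds) const {
        return initialVelocity + accel * seconds;
    }

private:
    float accel; // m/s^2, e.g. 9.81f for Earth, 1.62f for the Moon
};

// By contrast, a function with the constant baked in is coupled to one game;
// changing gravity means changing the function itself.
float earthVelocityAfter(float initialVelocity, float seconds) {
    return initialVelocity + 9.81f * seconds;
}
```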

Please someone correct me if I'm wrong.


 


Object orientation is about more than reusing code across projects. The point about encapsulation has nothing to do with re-use; it is about hiding implementation details and coupling disparate parts of a system together via well-defined interfaces.

As I said before, once we get into discussions about data layout and performance, we are into a different ball game. There the easiest design is compromised in order to better exploit hardware setups. It is not easier to design so that all of your particles' positions are in a separate contiguous array from their texture indices. It just happens that you'll get better performance like that. Unless performance is an issue (which it almost always isn't outside of specific domains), prefer the easiest-to-maintain design over premature optimisation.

Again, badly used OOP can be a huge hindrance - novice users tend to fall into the trap of thinking "I need a monster class - how can I design this so generically that I can use it in any application that has monsters?" and of course end up in a mess. Re-use is not the advantage of encapsulating data behind well-defined interfaces, and I don't see how anyone with real-world experience could think it was.

Most of the (strawman) objections to OOP seem to be based around ugly, deep inheritance hierarchies, which I mentioned in my first post on this subject. Good OOP is not about deep inheritance trees any more than it is about re-use.

[Although deep inheritance can be useful in some circumstances. Most of the popular GUI libraries are based on this concept, and in that particular domain, it is appropriate.]


http://www.insomniacgames.com/three-big-lies-typical-design-failures-in-game-programming-gdc10/

I actually took the time to read the PDF out of curiosity (though, it is kind of misleading to post two links, when one is just a couple paragraphs and a link to another).

What I got from it is completely different from what you are saying. To me, the code design parts were about better cache usage, programming with concurrency in mind by using (hopefully) non-blocking calls and different control flow, and not loading data that isn't needed, etc. It did highlight potential problems arising from approaching a problem from what I'll call an "academic" standpoint (useful for understanding, poor for implementing), which would be attempting to build code as an analogy to real life and our overly-abstract thinking, when the hardware works very differently than our picture of the world around us.

A good example that he had is array of structures vs. structure of arrays: instead of an inefficient array of key => value pairs, use an array of keys that resolve to indices into an array of values, so that values aren't loaded into cache when you're most likely to discard them. If you have to iterate over the keys, you'll stop after finding the value that you want to read, so if you touch n keys and 1 value, n - 1 values are also loaded into cache between the keys and discarded. Separating them is better for cache reasons, among others (lookup by value is easier).
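The key/value layout trade-off described above might look like this (a hypothetical sketch, not the article's code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Array of structures: scanning the keys drags every value into cache too,
// even though at most one value will actually be read.
struct Pair { std::uint32_t key; std::uint64_t value; };

std::uint64_t findAoS(const std::vector<Pair>& pairs, std::uint32_t key) {
    for (const Pair& p : pairs)        // each cache line holds unwanted values
        if (p.key == key) return p.value;
    return 0;
}

// Structure of arrays: the key scan touches only the dense key array;
// exactly one value is loaded at the end.
struct Dict {
    std::vector<std::uint32_t> keys;
    std::vector<std::uint64_t> values; // values[i] belongs to keys[i]
};

std::uint64_t findSoA(const Dict& d, std::uint32_t key) {
    for (std::size_t i = 0; i < d.keys.size(); ++i)
        if (d.keys[i] == key) return d.values[i];
    return 0;
}
```

Both functions return the same answers; only the memory traffic differs, which is exactly the point the article makes.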

All very good advice, including not providing interfaces that are so abstract that they are either misleading or inefficient (update() loads data and transforms it, and it gets dumped from cache as execution moves on; draw() then loads the data into cache again to render output [potential misnomer in the method name], making the abstraction inefficient due to poor design).

None of what he was saying is bashing OOP, in my opinion. Referring back to the aforementioned example of a key => value dictionary, he didn't advocate getting rid of the object entirely, just redesigning how it works on the inside; the interface may even go unchanged!

So, to the original poster, please don't mistake the photos of the presenter's notes (when taken out of context) to be bashing OOP. OOP is valuable and useful, you just need to use your head when designing your codebase.

After all, the vector object that we all likely use in one flavor or another is just that: an object. Would one really like to go back to manually allocating memory and performing all of the actions that the object does (where they don't have special needs), just to be rid of OOP?

To go back on topic, some comments:

  1. We have true and false for a reason. Defining a few synonyms will only lead to confusion and inconsistency.
  2. Having two properties that are supposed to be inverses of each other, yet are stored separately and can therefore fall out of sync, is bad. If you absolutely must, have one property and use the NOT operator, e.g. if(!light.on). This leads me to the next point:
  3. Attempting to make a programming language and interface follow English grammar may be humorous in the short run, but it guarantees that you'll wind up misnaming something, or misrepresenting what something does, in order to maintain the fuzzy grammar.
  4. 
    Begin()
    Start()
    Commence()
    Play()
     
    End()
    Finish()
    Conclude()
    Stop()
    

    I can't tell them apart, and in a few months, neither will you. I will concede that at least you followed the begin/end, start/finish convention (another one is create/destroy. Pair it with something else, and there may be head-scratching as people try to guess the inverse operation).

  5. speed does not stand out to me as a boolean value. As noted, this should be an enumeration, or more likely, a scalar value actually representing the speed. If it is a boolean, try naming it like one, i.e. isAlive, wasInitialized, isOpen, etc. Something that clearly must be either true or false. I would never look at "isOpen" and think "Eh... it's kind of open... let's make it 0.5". Now, it sounding cute to say if(isOpen) is just a side effect, not a design consideration. Looking at the variable name should give you an idea of what it means and what possible values it has (especially in a dynamically typed language). I have no idea what slowly is by looking at it; if what it is only becomes obvious when put in a function call, then it's a novelty, and confusion fodder. Heck, even given the fact that it is a boolean, I have a hard time figuring out from its name why or where it would be set to either state, so I'd have a hard time interpreting its significance without looking it up every time I use it.
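Points 2 and 5 can be sketched quickly (hypothetical Light/Speed names, C++): store one boolean and derive its inverse, and use an enumeration where the state isn't truly binary.

```cpp
// Hypothetical Light: ONE stored boolean; the inverse is derived with !,
// so the two views can never disagree (point 2 above).
struct Light {
    bool isOn = false;
};

bool isOff(const Light& light) { return !light.isOn; } // derived, never stored

// A "speed" that isn't truly binary belongs in an enumeration, not a bool
// named "slowly" (point 5 above).
enum class Speed { Slow, Normal, Fast };
```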

The others had excellent points (aside from the potential flame war), as well.

The only good thing about OOP is data encapsulation and the best case scenario for OOP is binary interface classes where the user has no access to the inside code.

When it comes to inheritance, I am basically against both shallow and deep hierarchies, as both create a co-dependence on some pre-set design and consequently create problems with code re-use, since you are stuck with a certain pattern.

For example, the gravity function would be re-usable, but not a gravity class, given that the input parameters are raw data such as arrays of floats or similar. What is even more interesting is performance, where OOP basically has to pack it in and go home.
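A sketch of the kind of free function meant here (hypothetical signature): it takes raw arrays of floats and can be dropped into any project, with no class design attached.

```cpp
#include <cstddef>

// Reusable free function over raw data: it knows nothing about any game's
// classes, only a flat array of vertical velocities.
// (Hypothetical signature; g in m/s^2, dt in seconds.)
void applyGravity(float* velocitiesY, std::size_t count, float g, float dt) {
    for (std::size_t i = 0; i < count; ++i)
        velocitiesY[i] -= g * dt;  // same update applied to every element
}
```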

The root of all evil, and the great temptation to do wrong, is Object Oriented Design, as Hodgman, the article, and I have all pointed out.

OOP in itself is just "a neutron bomb kit", but combined with OOD we get the real danger :-) Then the neutron kit is assembled and presented as something great, when in fact it will just blow up and leave the code a wasteland.

OOD in this case being UML design, finding objects from problem specifications etc. Basically what many program design books are preaching.

The best of both worlds is possible: as pointed out several times, OOP can be a good tool when it is treated as structures, or when only its most naive parts are used.

One example I can come up with is a recent project where I did a 3D voxelisation of a brain. The brain model was in fact based around a tiny structure of voxels. I fed them to different functions that did the voxelisation, the bounding volume hierarchy, ray tracing etc... And it was all done without the bounding volume hierarchy knowing about the brain or even voxels for that matter.

I have some quite nice large functions that take the voxel data as in parameters and transform this data into other structures. It's beautiful.

At the same time, having some form of OOP was good for the brain model, where I hid the 3DS loading in another class, and lo and behold, I did use sub-classing for the GUI system, as it made sense for some parts to be virtual functions.

When OOP comes up however most of the time OOD is implied and that has killed many promising projects by over-engineering problems with complexity that was never there.

http://www.insomniacgames.com/three-big-lies-typical-design-failures-in-game-programming-gdc10/

I actually took the time to read the PDF out of curiosity (though, it is kind of misleading to post two links, when one is just a couple paragraphs and a link to another).

What I got from it is completely different from what you are saying.

[...]

None of what he was saying is bashing OOP, in my opinion. Referring back to the aforementioned example of a key => value dictionary, he didn't advocate getting rid of the object entirely, just redesigning how it works on the inside; the interface may even go unchanged!

So then why does the author sum it up with the following?

(Lie #1) Software is a platform
I blame the universities for this one. Academics like to remove as many variables from a problem as possible and try to solve things under "ideal" or completely general conditions. It's like old physicist jokes that go "We have made several simplifying assumptions... first, let each horse be a perfect rolling sphere..."
The reality is software is not a platform. You can't idealize the hardware. And the constants in the "Big-O notation" that are so often ignored, are often the parts that actually matter in reality (for example, memory performance.) You can't judge code in a vacuum. Hardware impacts data design. Data design impacts code choices. If you forget that, you have something that might work, but you aren't going to know if it's going to work well on the platform you're working with, with the data you actually have.
(Lie #2) Code should be designed around a model of the world
There is no value in code being some kind of model or map of an imaginary world. I don't know why this one is so compelling for some programmers, but it is extremely popular. If there's a rocket in the game, rest assured that there is a "Rocket" class (Assuming the code is C++) which contains data for exactly one rocket and does rockety stuff. With no regard at all for what data transformation is really being done, or for the layout of the data. Or for that matter, without the basic understanding that where there's one thing, there's probably more than one.
Though there are a lot of performance penalties for this kind of design, the most significant one is that it doesn't scale. At all. One hundred rockets costs one hundred times as much as one rocket. And it's extremely likely it costs even more than that! Even to a non-programmer, that shouldn't make any sense. Economy of scale. If you have more of something, it should get cheaper, not more expensive. And the way to do that is to design the data properly and group things by similar transformations.
(Lie #3) Code is more important than data
This is the biggest lie of all. Programmers have spent untold billions of man-years writing about code, how to write it faster, better, prettier, etc. and at the end of the day, it's not that significant. Code is ephemeral and has no real intrinsic value. The algorithms certainly do, sure. But the code itself isn't worth all this time (and shelf space! - have you seen how many books there are on UML diagrams?). The code, the performance and the features hinge on one thing - the data. Bad data equals slow and crappy application. Writing a good engine means first and foremost, understanding the data.


When it comes to inheritance, I am basically against both shallow and deep hierarchies, as both create a co-dependence on some pre-set design and consequently create problems with code re-use, since you are stuck with a certain pattern.

You seem to have very strong opinions. Perhaps you are just young? (just a guess).

I consider inheritance a pretty advanced tool, to be used sparingly. In C++, having an abstract base class, several classes that publicly derive from it, and a factory function (which is the only part of the code that knows about the subclasses) is a pattern I find myself using here and there. It's a basic mechanism to obtain polymorphism. A similar mechanism in non-object land is a function pointer (or a few of them); but this limited type of inheritance is more powerful and more descriptive in many situations. Do you have a problem with this usage? Do you dislike all forms of polymorphism or dynamic dispatch?
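A minimal sketch of that pattern (hypothetical Shape example, not from the thread): an abstract base class, concrete subclasses, and a factory function that is the only code aware of the concrete types.

```cpp
#include <memory>
#include <string>

// Abstract base: callers see only this interface.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
private:
    double r_;
};

class Square : public Shape {
public:
    explicit Square(double s) : s_(s) {}
    double area() const override { return s_ * s_; }
private:
    double s_;
};

// Factory function: the one place that knows the concrete subclasses.
std::unique_ptr<Shape> makeShape(const std::string& kind, double size) {
    if (kind == "circle") return std::make_unique<Circle>(size);
    return std::make_unique<Square>(size);
}
```

The non-OO analogue would be a function pointer chosen at creation time; the class version carries its data along with the dispatch, which is what makes it more descriptive.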


We've gone ahead and designed our data-structures without knowing what kinds of queries/transforms we're going to be performing on them!
[...]
If you design your model before you know the answer to that, then your model is going to be horribly inadequate.

Well, a good model should contain more than just data structures. You want your information about students and classes to be persistent? Then you need a repository, and it should go into the model. You want teachers to be able to print class reports? The model also needs a reporting service. Now, with these services and other things in the model, you should be better able to see whether those student-teacher-class classes are adequate or not. A good model that is naturally aligned with the "real world" is easier to understand and communicate, which is the main purpose of a model. Btw, regarding the "real world" (i.e. the problem domain and its requirements), there's no such thing as a correct model, only a useful model.

Note that not all code is related to the model. There's usually a lot of support code. E.g. the student admin system needs some database code, or perhaps some ORM, and this choice dictates the implementation and may even shape the model itself (you usually have a good idea about this when you start modeling). But to say that your code shouldn't "be designed around a model of the world" just seems very backward.

openwar - the real-time tactical war-game platform

What is this nonsense about 100 rockets costing 100 times more than one rocket?

How is this the case with OO but not with raw data?

TL;DR: don't effectively rename Boolean values redundantly -- in any context that is truly binary, 'true' and 'false' are clear. If you wish to express them in ways that are a more natural semantic fit for the API, then use semantically-named functions that return or set a Boolean state. If the context is not clear enough for true/false to suffice, you probably want something like an enumeration anyhow.

The reason to prefer normal bools (indeed, any built-in type) is the increased familiarity of other programmers and reduced friction among your own APIs and with others. If you use some oddball home-brew type to represent two states, then any place that requires a normal Boolean type will need an adapter, and similarly in the other direction. In some languages this might even interfere with implicit conversions that would reasonably be expected to work. Likewise, your language or common tools might have built-in utilities for types known to it that will either be unavailable to your own types, or which you might at least have to wire up -- things like serialization or finalization come to mind.

This is general advice, not language-specific, so the trouble caused by eschewing this guidance may be greater or smaller in your chosen language, but it's good advice nonetheless.


So then why is the author summing it up with?

I have no idea what you're actually replying to; you blithely copied and pasted a block of text without explaining it.

I searched on Google for the derelict page that houses the grammatically incorrect and misspelled passage, and did a quick Ctrl+F for "OOP" or "object", and found no references to OOP. He never mentions it once, so I assume you're not arguing that point.

(Lie #1) Software is a platform
I blame the universities for this one. Academics like to remove as many variables from a problem as possible and try to solve things under "ideal" or completely general conditions. It's like old physicist jokes that go "We have made several simplifying assumptions... first, let each horse be a perfect rolling sphere..."

The reality is software is not a platform. You can't idealize the hardware. And the constants in the "Big-O notation" that are so often ignored, are often the parts that actually matter in reality (for example, memory performance.) You can't judge code in a vacuum. Hardware impacts data design. Data design impacts code choices. If you forget that, you have something that might work, but you aren't going to know if it's going to work well on the platform you're working with, with the data you actually have.

What he's saying is that you shouldn't take an algorithm, and try to adapt it to your problem; "software is not a platform" simply means that you shouldn't design it from a purely academic standpoint, without even a target machine in mind: the code isn't going to execute itself. As a clunky example, picture a brainless port of a game from Windows to a game console:

  • It fails to use the specialized hardware on the console, due to not being written to take advantage of it.
  • It performs poorly due to problems which didn't exist on the PC (LHS issues, different types of memory, different cache characteristics).
  • It can even crash outright! (unaligned reads where they're not allowed, running out of memory unexpectedly, deadlocks where there were none before).
  • It likely fails to build without some rushed hacks...

Now, imagine that the Windows program that was ported verbatim in our example is a mathematical description of an algorithm (told you the analogy was clunky). If you implemented it blindly as instructed, you'll run into many problems; it may work, but it won't work well. You'll wind up spending too much time on how it can be solved, rather than finding what the best way to solve it actually is. If you started knowing your machine and its data, you might start out processing four elements at a time, rather than following a text-book example that does them linearly, for example.

As another example, consider someone drawing blueprints for a machine without ever knowing what materials will be used. The machine works, because the blueprints were sound. However, the walls are thin and weak because there weren't enough supporting braces, and it emits a loud buzzing sound due to the resonance of the metal. Had the drafters known that a really high-quality metal would be used for the moving parts, they would have designed the machine to take advantage of this fact and move at a faster rate, rather than assuming the speed would have broken it. In this case, the drafter's hypothetical materials and environment (his "platform") didn't match the reality of the situation, and the result was sub-optimal performance.

Nowhere does that really criticize any coding style in particular; it just says that one should start out knowing what tools are at hand and what pitfalls to avoid, rather than implementing a textbook reference of an algorithm (or even the most common algorithm, instead of a specialized one that would perform better, such as one that runs in parallel).

(Lie #2) Code should be designed around a model of the world

There is no value in code being some kind of model or map of an imaginary world. I don't know why this one is so compelling for some programmers, but it is extremely popular. If there's a rocket in the game, rest assured that there is a "Rocket" class (Assuming the code is C++) which contains data for exactly one rocket and does rockety stuff. With no regard at all for what data transformation is really being done, or for the layout of the data. Or for that matter, without the basic understanding that where there's one thing, there's probably more than one.

Though there are a lot of performance penalties for this kind of design, the most significant one is that it doesn't scale. At all. One hundred rockets costs one hundred times as much as one rocket. And it's extremely likely it costs even more than that! Even to a non-programmer, that shouldn't make any sense. Economy of scale. If you have more of something, it should get cheaper, not more expensive. And the way to do that is to design the data properly and group things by similar transformations.

The "rocket" nonsense can be interpreted as a trade-off between arrays of structures vs. structures of arrays. If you iterate over 100 rocket objects and interact with each in turn, you will spend 100x the time of manipulating one rocket object. However, if they are all organized in cache-friendly arrays, they can be iterated through far more easily, maybe even in parallel! The point is that with a change in thinking, it now costs less than 100x the resources to manipulate 100x the objects, and it scales a little better (handling each rocket in isolation vs. handling the rockets in a large batch).
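The batch idea might be sketched like this (hypothetical Rockets layout): parallel arrays plus one loop that applies the same transform to every rocket, instead of N isolated objects each paying its own call and cache-miss overhead.

```cpp
#include <cstddef>
#include <vector>

// Rockets stored as parallel arrays: all positions together, all velocities
// together, so one loop streams through contiguous memory.
struct Rockets {
    std::vector<float> posY;
    std::vector<float> velY;  // velY[i] belongs to posY[i]
};

// One transform applied to the whole batch; 100 rockets cost roughly
// 100 loop iterations over dense data, not 100 scattered object updates.
void integrate(Rockets& r, float dt) {
    for (std::size_t i = 0; i < r.posY.size(); ++i)
        r.posY[i] += r.velY[i] * dt;
}
```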

Seeking to design software in the way we view the world, by over-classifying things as we do scientifically and academically in our lives, clashes with hardware design. This doesn't knock OOP, however. It merely suggests that you design your interfaces so they don't force objects to be handled in isolation when batching them together would be beneficial. Does having a class for every possible item in your inventory make sense? No. Neither does having a program that calculates a student's GPA from his transcript and uses different functions depending on which department offered the class, despite the fact that the grading system is the same across the board and the calculations can be done in parallel on an array of numeric grades rather than by iterating through each one in isolation. That doesn't require OOP, but you can shoot yourself in the foot the same way.

(Lie #3) Code is more important than data

This is the biggest lie of all. Programmers have spent untold billions of man-years writing about code, how to write it faster, better, prettier, etc. and at the end of the day, it's not that significant. Code is ephemeral and has no real intrinsic value. The algorithms certainly do, sure. But the code itself isn't worth all this time (and shelf space! - have you seen how many books there are on UML diagrams?). The code, the performance and the features hinge on one thing - the data. Bad data equals slow and crappy application. Writing a good engine means first and foremost, understanding the data.

I'll draw an easy analogy: using XML to store all of your data. XML is great, it provides an elegant way to store human-readable data in a whitespace-insensitive format. Should you use XML to store image data? Probably not. Should you use XML to store a single field like a date? Probably not. Should you send your game's multiplayer state over the network in XML? Probably not.

XML works great, and it allows you to do a lot of things. However, using it on everything will result in slow, bloated, overly complicated code, and file sizes that grow to several times the size of the data. What he's trying to say is that having great designs that don't fit the data is far worse than having simplistic designs that are tuned to what they need to handle. This applies to any methodology: know what you need to do before you decide how to do it.

When OOP comes up however most of the time OOD is implied and that has killed many promising projects by over-engineering problems with complexity that was never there.

I'll say it again, I still don't see any of what you're ranting about in the articles that you're quoting, and I think you're doing him a disservice by quoting him out of context and using his lectures to spark a flame war. If you want to believe these things, that is perfectly fine. However, please don't convince the OP to close doors and potentially stunt his growth based on your own opinions.

TL;DR: "Use your head" is the net gain from the lecture, not "don't use this tool".

This topic is closed to new replies.
