
# Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below), where he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular instead of the hundred other ECS posts that have been made on the interwebs, is because he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to actually, concretely demonstrate my points, so, thanks Aras!

You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground.

I'm not going to analyse the final ECS architecture from that talk (yet?), but I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fix all of the OOD rule violations.
Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!
TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but mostly because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:

1. Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
2. Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
3. Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because:
(A) it's a straw-man argument: it's apples vs oranges (bad code vs good code), which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good,
but more importantly:
(B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from engaging with half a century of existing research. The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers.

Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral, and very quickly the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own.
I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers.... yet knowledge of OOD lagged behind.

I argue that code which uses OOP language features but does not follow OOD design rules is not really OO code. Most anti-OOP rants are eviscerating code that is not actually OO code.
OOP code has a very bad reputation, and I assert that this is partly because most OOP code does not follow OOD rules, and thus isn't actually "true" OO code.

# Background

As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "The 4 pillars of OOP":

• Abstraction
• Encapsulation
• Polymorphism
• Inheritance

I'd prefer to call these the "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough, though; you need to know when you should be using it... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools - a kind of second wave of OOD thought. Out of this came the SOLID mnemonic, a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules...

• Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create new classes "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".
• Open/closed principle. Software changes over time (i.e. maintenance is important). Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).
• Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface, i.e. any algorithm that works on the interface should continue to work for every implementation.
• Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" as little of the code-base as possible, i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.
• Dependency inversion principle. Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.
• Not included in the SOLID acronym, but I would argue it is just as important: the
Composite reuse principle. Composition is the right default™. Inheritance should be reserved for when it's absolutely required.

This gives us SOLID-C(++)

A few other notes:

• In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software.
• As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition.
• Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.
• Inheritance actually has (at least) two types -- interface inheritance, and implementation inheritance.
• In C++, interface inheritance includes abstract-base-classes with pure-virtual functions, PIMPL, conditional typedefs. In Java, interface inheritance is expressed with the implements keyword.
• In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword.
• OOD has a lot to say about interface-inheritance, but implementation-inheritance should usually be treated as a bit of a code smell!

And lastly I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and OOP's bad reputation).

Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!
Nope, nope nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class-hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism) then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.
Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?
This is actually a good one to demonstrate the difference between implementation-inheritance and interface-inheritance.
• If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool.
From this perspective, the following makes perfect sense:
struct Square { int width; }; struct Rectangle : Square { int height; };
A square just has width, while rectangle has a width + height, so extending the square with a height member gives us a rectangle!
• As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever.
A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width".
By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square, must also work correctly with a rectangle.
• Take the following algorithm: std::vector<Square*> shapes; int area = 0; for(auto s : shapes) area += s->width * s->width;
This will work correctly for squares (producing the sum of their areas), but will not work for rectangles.
Therefore, Rectangle violates the LSP rule.
• If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interfaces for a square and a rectangle are actually different, and one is not a super-set of the other.
• So OOD actually discourages the use of implementation-inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go!
• For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:
struct Shape { virtual int area() const = 0; };
struct Square : public virtual Shape { virtual int area() const { return width * width; }; int width; };
struct Rectangle : private Square, public virtual Shape { virtual int area() const { return width * height; }; int height; };
• "public virtual" means "implements" in Java. For use when implementing an interface.
• "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it.
• I don't recommend writing this kind of code, but if you do like to use implementation-inheritance, this is the way that you're supposed to be doing it!

TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

# Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so called "typical OOP" starting point.
Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional.

I'm going to start from the earliest commit before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

// -------------------------------------------------------------------------------------------------
// super simple "component system"

class GameObject;
class Component;

typedef std::vector<Component*> ComponentVector;
typedef std::vector<GameObject*> GameObjectVector;

// Component base class. Knows about the parent game object, and has some virtual methods.
class Component
{
public:
    Component() : m_GameObject(nullptr) {}
    virtual ~Component() {}

    virtual void Start() {}
    virtual void Update(double time, float deltaTime) {}

    const GameObject& GetGameObject() const { return *m_GameObject; }
    GameObject& GetGameObject() { return *m_GameObject; }
    void SetGameObject(GameObject& go) { m_GameObject = &go; }
    bool HasGameObject() const { return m_GameObject != nullptr; }

private:
    GameObject* m_GameObject;
};

// Game object class. Has an array of components.
class GameObject
{
public:
    GameObject(const std::string&& name) : m_Name(name) { }
    ~GameObject()
    {
        // game object owns the components; destroy them when deleting the game object
        for (auto c : m_Components) delete c;
    }

    // get a component of type T, or null if it does not exist on this game object
    template<typename T>
    T* GetComponent()
    {
        for (auto i : m_Components)
        {
            T* c = dynamic_cast<T*>(i);
            if (c != nullptr)
                return c;
        }
        return nullptr;
    }

    // add a new component to this game object
    void AddComponent(Component* c)
    {
        assert(!c->HasGameObject());
        c->SetGameObject(*this);
        m_Components.emplace_back(c);
    }

    void Start() { for (auto c : m_Components) c->Start(); }
    void Update(double time, float deltaTime) { for (auto c : m_Components) c->Update(time, deltaTime); }

private:
    std::string m_Name;
    ComponentVector m_Components;
};

// The "scene": array of game objects.
static GameObjectVector s_Objects;

// Finds all components of given type in the whole scene
template<typename T>
static ComponentVector FindAllComponentsOfType()
{
    ComponentVector res;
    for (auto go : s_Objects)
    {
        T* c = go->GetComponent<T>();
        if (c != nullptr)
            res.emplace_back(c);
    }
    return res;
}

// Find one component of given type in the scene (returns first found one)
template<typename T>
static T* FindOfType()
{
    for (auto go : s_Objects)
    {
        T* c = go->GetComponent<T>();
        if (c != nullptr)
            return c;
    }
    return nullptr;
}

Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc... This is implementation-inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence OOD's "composition over inheritance" rule, above. So, in the 2000's, "composition over inheritance" became popular advice, and gamedevs started writing this kind of code instead.

What does this code do? Well, nothing good!

To put it in simple terms, this code is re-implementing the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x.

What does it actually do though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons). It formalizes several "EC" rules:

• the game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
• GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
• Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
• Composition may only be one level deep (Components may not own child components, GameObjects may not own child GameObjects).
• A GameObject may only have one component of each type (some frameworks enforced this, others did not).
• Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
• GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless games of that era -- and still does today.

However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist then? Well to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great to allow game/level designers to create their own kinds of objects... However, in most game projects, you have a very small number of designers on a project and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game-objects, without the need for this kind of bloated framework!

Let's evaluate this code according to OOD:

• GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong. I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.
• GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern.... but going beyond OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern and explicit communication channels should be preferred. This same argument applies to bloated "event frameworks" that sometimes appear in games...
• I would argue that Component is a SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control, and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, is the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side-effect.
• Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.
• The one good part is that the example game code is bending over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use.
However, it's not great at DIP -- many of the components do have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

# Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework, and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

The gist of the changes is:

• Remove ": public Component" from each component type.
• Add a constructor to each component type.
• OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true... or, in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth adding the constructors to enforce a simple invariant -- all values must be initialized.
• Rename the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
• Remove the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types -- and replace them with three C++ classes.
• Fix the "virtual void Update" anti-pattern.
• Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction.

## The objects

So, instead of this "VM" code:

    // create regular objects that move
    for (auto i = 0; i < kObjectCount; ++i)
    {
        GameObject* go = new GameObject("object");

        // position it within world bounds
        PositionComponent* pos = new PositionComponent();
        pos->x = RandomFloat(bounds->xMin, bounds->xMax);
        pos->y = RandomFloat(bounds->yMin, bounds->yMax);
        go->AddComponent(pos);

        // setup a sprite for it (random sprite index from first 5), and initial white color
        SpriteComponent* sprite = new SpriteComponent();
        sprite->colorR = 1.0f;
        sprite->colorG = 1.0f;
        sprite->colorB = 1.0f;
        sprite->spriteIndex = rand() % 5;
        sprite->scale = 1.0f;
        go->AddComponent(sprite);

        // make it move
        MoveComponent* move = new MoveComponent(0.5f, 0.7f);
        go->AddComponent(move);

        // make it avoid the bubble things
        AvoidComponent* avoid = new AvoidComponent();
        go->AddComponent(avoid);

        s_Objects.emplace_back(go);
    }

We now have this normal C++ code:

struct RegularObject
{
    PositionComponent pos;
    SpriteComponent sprite;
    MoveComponent move;
    AvoidComponent avoid;

    RegularObject(const WorldBoundsComponent& bounds)
        : move(0.5f, 0.7f)
        // position it within world bounds
        , pos(RandomFloat(bounds.xMin, bounds.xMax),
              RandomFloat(bounds.yMin, bounds.yMax))
        // setup a sprite for it (random sprite index from first 5), and initial white color
        , sprite(1.0f,
                 1.0f,
                 1.0f,
                 rand() % 5,
                 1.0f)
    {
    }
};

...

// create regular objects that move
regularObject.reserve(kObjectCount);
for (auto i = 0; i < kObjectCount; ++i)
    regularObject.emplace_back(bounds);

## The algorithms

Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. The original code has a main loop algorithm that consists of just:

    // go through all objects
    for (auto go : s_Objects)
    {
        // Update all their components
        go->Update(time, deltaTime);
    }

You might argue that this is nice and simple, but IMHO it's so, so bad. It's completely obfuscating both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire.

Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

// Update all positions
for (auto& go : s_game->regularObject)
{
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}
for (auto& go : s_game->avoidThis)
{
    UpdatePosition(deltaTime, go, s_game->bounds.wb);
}

// Resolve all collisions
for (auto& go : s_game->regularObject)
{
    ResolveCollisions(deltaTime, go, s_game->avoidThis);
}

The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series.

# Performance

There's still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them with the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!

# Next steps

There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional style programming) and the benefits it can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't also present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when...

11 minutes ago, Oberon_Command said:

I think the term you're looking for is "leaky abstraction."

Close, but not quite. That term is from the perspective of the user of an abstract interface, whereas what I'm thinking about is more like this (hypothetical) example: there are two polygon-polygon intersection tests, one faster but "rounding the corners" a bit, the other exact. A dependency on the first one could come from a game situation where a projectile moves near a corner, where the algorithm is expected to miss the intersection -- so it could not be replaced without destabilizing the balance of the game. It is something of a butterfly effect, and somewhat related to bugs turning into features after shipping.

21 minutes ago, Oberon_Command said:

I am of the opinion that the search for advice that we apply unquestioningly is fundamentally misguided.

I guess I've managed to create the wrong impression there. I am not looking for such advice; it tends to find me all by itself, which does not seem like how things should be. That said, I don't want to make this about me -- it just seemed like, once again, here comes someone with the one true way to do things.

23 minutes ago, Oberon_Command said:

There's a reason we call them principles and not laws or dogma.

Dogma is merely an authoritative principle. Which means it just takes someone (a famous computer scientist / programmer) or something (Wikipedia) to "authorize" it, or make it trusted, and by that definition all CS "principles" I've encountered are really dogma (although the difference isn't that huge to begin with). And principle is defined as fundamental truth or assumption, which again I would argue these examples are not.


1 hour ago, snake5 said:

That said, I don't want to make this about me, it just seemed like once again here comes someone with the one true way to do things.

Ah, I can empathize there.  I don't think that's Hodgman's intent, at all - it seems to me that he's more claiming that the "OOP" Aras was referring to is a strawman, not saying that SOLID-C is the one true way to code.

1 hour ago, snake5 said:

Dogma is merely an authoritative principle. Which means it just takes someone (a famous computer scientist / programmer) or something (Wikipedia) to "authorize" it, or make it trusted, and by that definition all CS "principles" I've encountered are really dogma (although the difference isn't that huge to begin with). And principle is defined as fundamental truth or assumption, which again I would argue these examples are not.

Hmmm, I've been using both of those words differently. To me a principle is a kind of "normative axiom", not something that has a truth value. The SRP isn't a thing that's "true" so much as it's an idea that we choose to follow because we believe it will yield better software; more generally, principles are norms that we follow in order to promote our values. I will happily ignore a principle if I find I'm in a case where the principle doesn't apply or doesn't advance my values. If I find that a principle ceases to advance my values in most cases- or never did so in the first place - then I tend to discard it.

To me what makes a principle "dogma" is how it is applied, not an inherent characteristic of the principle. "Dogma" refers to principles that are applied unquestioningly and universally, oftentimes without really understanding why one applies them in a particular circumstance. I think that covers the case where an "authority" impose a principle on someone who doesn't understand the principle, too.

SOLID-C can certainly be applied dogmatically - because any principle can be.

Edited by Oberon_Command

I tend to think of the runtime-configurable design in the "bad OOP" example as a compromise between the flexibility of a true scripting language and the performance of the hardcoded "good OOD" approach.  Like many compromises, it feels unsatisfactory because it cannot fully deliver on either of its promises.  That doesn't necessarily make it a bad design - it is actually quite successful as a pattern for those cases where a true scripting language is too expensive and the hardcoded approach is too rigid.

(Which is not to say that a better compromise is not possible.  I just find the argument that "rigid approach A is better than flexible approach B because it is faster" unconvincing.  Flexibility is a value in and of itself that is often more important than performance.)

Glad to see someone writing a thorough and detailed defense of OO with a concrete example. I definitely agree that much of OO criticism stems from what we would call "bad OO" - either a mistaken understanding of how to effectively use OO or a bad experience with others' OO code.

At the same time, I can't help but wonder if the reality of the situation is that, for the average software dev, OO increases the likelihood of writing bad code. Sure, there are plenty of good OO devs who write good OO code, but maybe they are more the exception than the rule. Maybe there is some alternate methodology which tends to work out better for the people who don't currently write good OO code. Maybe the way OO was taught poisoned people's minds such that they cannot easily get out of their conception of it. Should we focus on helping those people to write better OO, or should we give up trying to teach them OO and let them use some other methodology? In any case, I have to think that, due to the long-standing history of practice of OO, we are better served trying to teach people to do it better and dispel common misconceptions rather than throwing it out entirely (of course, only if the methodology makes sense for the language / framework).

7 hours ago, Oberon_Command said:
9 hours ago, snake5 said:

it just seemed like once again here comes someone with the one true way to do things.

Ah, I can empathize there.  I don't think that's Hodgman's intent, at all - it seems to me that he's more claiming that the "OOP" Aras was referring to is a strawman, not saying that SOLID-C is the one true way to code.

Yeah I don't mean to come across that way, though I will admit to being a bit of a snarky dick.
On my game we use a blend of DOD and OOD and the relational model, procedural and pure functional, message passing and shared state, immutable objects and mutable objects, stateless APIs and state machines, etc... It's good to have a lot of tools, and it's good to have a lot of theory and guidelines on how to use each of those tools, too.

The point was taking some code that's already been shown to be bad and is being presented as a patsy for OOP being harmful, and showing that applying a few guidelines from OO theory can actually remove the badness. So people shouldn't throw the baby out with the bathwater, and instead should practice their OO skills.

If you take SOLID(C) as a set of guidelines to use along with your own critical thought, then they shouldn't be controversial.

11 hours ago, snake5 said:

The question I ask is, should you split that class? I don't question the ability, as you have defined it, but the productivity of always doing so. Say you have a "human" component with a "health" member. Does it make sense to move "health" to a separate component? Even if there is no other component that would need it? Likewise, multiply-add are clearly two separate operations, but are joined for performance reasons.

MAD doesn't count because it's not a class and shouldn't be a class.

Health is actually a good example. An OOP beginner might make a private health number field, and then have SetHealth and GetHealth accessors, in case they want to change the logic later... That's bad - it's encapsulation without abstraction. To apply a more general principle of KISS, you shouldn't have that class at all and just represent health as a raw number...

However, if there's actually a useful abstraction that can be applied, then a class can be worthwhile (more so if it's used in many places - DRY, but less so if it's only required in one place - KISS). For example, instead of Set/Get, maybe you have ApplyDamage(amount, type) and ApplyHealing(amount, type), which internally do some calculations including armour, spells, buffs, etc... Encapsulating the health field in this case makes it easier to reason about bugs with that field - you know which algorithms are responsible for mutations.
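To make the contrast concrete, here's a hypothetical C++ sketch of the "useful abstraction" version - the `DamageType` enum, armour value, and method names are all invented for illustration, not taken from Aras' code:

```cpp
#include <algorithm>

enum class DamageType { Physical, Fire };  // hypothetical damage categories

// Encapsulation *with* abstraction: callers express intent (damage/healing)
// and every rule that mutates health lives in one place, which makes bugs
// involving the health field much easier to reason about.
class Health
{
public:
    explicit Health(float max) : m_current(max), m_max(max) {}

    void ApplyDamage(float amount, DamageType type)
    {
        // In this sketch, armour only mitigates physical damage.
        if (type == DamageType::Physical)
            amount = std::max(0.0f, amount - m_armour);
        m_current = std::max(0.0f, m_current - amount);
    }

    void ApplyHealing(float amount)
    {
        m_current = std::min(m_max, m_current + amount);
    }

    bool IsDead() const { return m_current <= 0.0f; }

private:
    float m_current;
    float m_max;
    float m_armour = 5.0f;  // placeholder for armour/buff/spell calculations
};
```

Contrast this with a bare `SetHealth`/`GetHealth` pair, which would let any caller write any value for any reason - all the encapsulation boilerplate with none of the abstraction benefit.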

I think we agree, though, that you should balance multiple contradictory principles with your own critical thought. IMHO, the KISS principle should often override other ones.

6 hours ago, chairbender said:

At the same time, I can't help but wonder if the reality of the situation is that, for the average software dev, OO increases the likelihood of writing bad code. Sure, there are plenty of good OO devs who write good OO code, but maybe they are more the exception than the rule. Maybe there is some alternate methodology which tends to work out better for the people who don't currently write good OO code.

Well, that's what game engines are doing.

EC and ECS frameworks add a whole bunch of unnecessary restrictions to your designs that stop you from falling into pitfalls, but they also hamstring "normal" designs... EC is basically a restricted form of OO, and ECS is a restricted form of relational.

On 10/9/2018 at 1:30 AM, Hodgman said:

It's also good C++ advice to liberally use free-functions (in the style of systems!!) when possible. If it's possible to implement a procedure as a free-function instead of a member function (i.e. the algorithm only depends on the public interface), then you should use a free-function. Java and C# got this terribly wrong when they excluded free functions from their design and forced everything to be a member.

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

5 hours ago, SuperVGA said:

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

Well, in C++ in particular you'd want to avoid static functions in classes, as well, unless those functions needed access to the private fields of the class. There are a couple of reasons for that:

1. Minimizing compile times. Having a free function doesn't inherently lead to better compile times, but it does give you the option of moving that function out into a different header from the class with a forward declaration of that class, which allows code that uses the functions to avoid including the class header. That means that when you change the class definition, fewer translation units will need to be recompiled.
2. Promoting encapsulation. The more functionality is in free functions that use the public interface of a class, the less code you need to change when the class's private implementation does, at least in theory. In a lot of cases you can get it so that code depends on the free functions instead of the class itself. This also encourages one to focus a class's public interface on its state invariants, rather than putting significant amounts of behaviour in the class itself.
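A minimal single-file sketch of point 1, using a hypothetical `Inventory` class (the names and the file split in the comments are invented for illustration):

```cpp
// inventory.h -- the class definition; changes here force recompiles
// of every translation unit that includes this header.
class Inventory
{
public:
    int GetItemCount() const { return m_count; }
    int GetCapacity() const { return m_capacity; }
private:
    int m_count = 0;
    int m_capacity = 10;
};

// inventory_utils.h -- free functions over the *public* interface only.
// A forward declaration is sufficient here, so code that only needs these
// functions never includes inventory.h, and doesn't recompile when
// Inventory's private implementation changes.
class Inventory;  // forward declaration
bool IsFull(const Inventory& inv);

// inventory_utils.cpp -- the one place that needs the full definition.
// (would #include "inventory.h" here in the real multi-file version)
bool IsFull(const Inventory& inv)
{
    return inv.GetItemCount() >= inv.GetCapacity();
}
```

The same layering also serves point 2: `IsFull` depends only on `Inventory`'s public interface, so callers can depend on the free function rather than on the class itself.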

15 hours ago, Oberon_Command said:

Well, in C++ in particular you'd want to avoid static functions in classes, as well, unless those functions needed access to the private fields of the class. There are a couple of reasons for that:

1. Minimizing compile times. Having a free function doesn't inherently lead to better compile times, but it does give you the option of moving that function out into a different header from the class with a forward declaration of that class, which allows code that uses the functions to avoid including the class header. That means that when you change the class definition, fewer translation units will need to be recompiled.
2. Promoting encapsulation. The more functionality is in free functions that use the public interface of a class, the less code you need to change when the class's private implementation does, at least in theory. In a lot of cases you can get it so that code depends on the free functions instead of the class itself. This also encourages one to focus a class's public interface on its state invariants, rather than putting significant amounts of behaviour in the class itself.

Thanks - aside from the organizational advantages, I'd never considered the build-time benefits of this before.

Posted (edited)

On 10/9/2018 at 9:08 PM, chairbender said:

In any case, I have to think that, due to the long-standing history of practice of OO, we are better served trying to teach people to do it better and dispell common misconceptions rather than throwing it out entirely (of course, only if the methodology makes sense for the language / framework).

IMO the paradigm doesn't matter that much.  The programmer matters. Half the stuff in any paradigm is trying to protect the programmer from themselves.  For 25+ years I worked in the semiconductor industry writing internal tools.  Initially almost everyone in our department had a PhD in physics, but since we were a CAD department they also had to know how to program. I started out as a software tech but worked my way up.  I was a rare exception. Some of the PhDs were good programmers, some not so much.  Later when we started hiring CS grads the situation really didn't change; some were a lot better than others.

What I mean by good is they could debug their own code and find their mistakes, and their code was pretty reliable.  There wasn't a lot of focus on programming patterns at first, mostly algorithms and the data structures to support them. Some guys, on the other hand, were just lazy and not meticulous at all, and those people I would have to assist on a daily basis.  This was all procedural programming at first, in mostly FORTRAN 77 and C, and in my case a lot of ASM.

They made basic errors like leaving hundreds of compiler warnings in their builds. It's true that most of these were innocuous, but the problem is that when all those warnings fly by during a compile, you miss the few that mean something. The first thing I always did when helping someone was have them fix every warning in their code. Sometimes this actually fixed their bug.

The other thing is they were often lazy with array bounds checking and stuff like that. One of the strangest bugs I ever saw was when someone wrote off the end of a stack-allocated array and the data they wrote just happened to also be a valid address in the code. When the function returned, it just followed the new address and actually called other functions and ran for a while from there, before eventually crashing because the state was wrong. The program was huge, the bug only occurred after several minutes of running, and we didn't have all the fancy debugging tools at that time.  It messed with me for a couple of days until, on a hunch, I just started searching for local arrays and put bounds checks in.

Another common thing was for people to leave unexplained behavior in their code.  They would call me over to help fix something, and then when I pointed out something else that seemed off, they would say "yeah it does that. I'll fix that later, it's not important. This other bug is what I need to fix right now."  I don't think I need to explain why this is bad.

In any case, when we moved on to C++, Scheme, Java, OOP, what have you, it was still these same people that had the same kinds of problems. It didn't matter what they were doing, what language they were using, or what paradigm. Nothing really changed for them.  One of these guys was a huge Lisp proponent, loved Emacs (and would bash me for using vi), and talked about design patterns and paradigms ad nauseam.  He was one of the top two offenders.

I guess this has kind of poisoned me to programming holy wars.  I'm not saying there isn't value in comparing paradigms, but I just think at the end of the day it's not the major factor.

Edited by Gnollrunner

23 hours ago, SuperVGA said:

Isn't it still good C++ practice though to keep free functions in a namespace or even within a class, as static functions?
I have gotten into the habit of starting to implement every member function as static, and if I discover that using the interface is sufficient, I reevaluate its placement - otherwise I turn it into an ordinary instance member function.

Yeah I don't do it regularly, but I've found that by making most methods static to start with, it helps me keep track of which member variables are read/written at which time, what the data flows are, and think about the structure... As mentioned in the other reply you got, free functions are preferable to class-static as it keeps encapsulation intact.

For complex classes, I do like writing the implementation mostly as a small collection of stateless free functions in the style of traditional "Inputs -> Process -> Outputs" nodes, and then write the actual class implementation as a very thin wrapper around those pure functions - plugging in the right members to the input/output arguments. In some situations these pure functions might be reusable by other classes too, in which case I'd declare them in a header - otherwise I'll make them file-static / hidden.

Posted (edited)

On 10/11/2018 at 12:20 AM, Gnollrunner said:

Another common thing was for people to leave unexplained behavior in their code.  They would call me over to help fix something and then when I pointed out something else seemed off, they would say "yeah it does that. I'll fix that later, it's not important. This other bug is what I need to fix right now.".  I don't think I need to explain why this is bad.

I can think of at least a couple of arguments in favour of that attitude.

Typically we want individual commits to source control to represent one bugfix or feature. The more stuff you put in a commit or changelist, the higher the likelihood that your commit will break something and need to be reverted, and if that happens, the more work will be potentially unavailable to the client when your change is reverted. There's also the fact that if you find two problems in the code while you're looking at a bug, and you fix both at the same moment, it becomes harder, looking back through the code's version history, to see what the actual problem was that caused the bug. When you're fixing a bug, I find it really helps to have a stable (even if flawed) codebase to test your fixes against. If you change one thing at a time, you get a better sense of what your change did than if you change a whole bunch of things all at once.

Fixing stuff as you see it is all well and good, but if it isn't pathologically difficult to test and submit individual bugfixes, then bugfixes should be as separate from one another as possible. Easy branching in source control systems like git makes this a lot nicer, but not everyone is using git. Structuring your code well can certainly help in all cases.

Edited by Oberon_Command

Ugh this is great! Thank you! I've recently been "promoted" to Software Architect at work (web dev) and as part of that I've been diving into proper coding techniques so I can clean up the mess that we've been building for a few years. I stumbled upon this article despite not being an actual game dev (but have always wanted to, just never figured out the proper way to do things).  Between this blog post and reading gameprogrammingpatterns I finally have had the "A-ha!" moment where things start to make sense!

Any idea when the next parts of the series are going to be happening? I keep checking back every day hoping (unrealistically) that you have posted more knowledge to share

11 hours ago, AnarchoYeasty said:

Any idea when the next parts of the series are going to be happening? I keep checking back every day hoping (unrealistically) that you have posted more

I'm getting my indie game ready to exhibit at PAX at the end of October, so any free would-be-blog-writing-time is probably going to get eaten up by shader-code-polishing instead until then

On 10/12/2018 at 7:36 PM, Oberon_Command said:

I can think of at least a couple of arguments in favour of that attitude.

Typically we want individual commits to source control to represent one bugfix or feature. The more stuff you put in a commit or changelist, the higher the likelihood that your commit will break something and need to be reverted and, if that happens, the more work will be potentially unavailable to the client when your change is reverted. There's also that if you find two problems in the code while you're looking at a bug, and you fix both at the same moment, it becomes harder looking back through the code's version history to see what the actual problem was that caused the bug. When you're fixing a bug, I find it really helps to have a stable (even if flawed) codebase to test your fixes against. If you change one thing at a time, you get a better sense of what your change did than if you change a whole bunch of things all at once.

Fixing stuff as you see it is all well and good, but if it isn't pathologically difficult to test and submit individual bugfixes, then bugfixes should be as separate from one another as possible. Easy branching in source control systems like git make this a lot nicer, but not everyone is using git. Structuring your code well can certainly help in all cases.

I think there is a difference between a bug for which the cause is understood and "unexplained behavior".  I used that combination of words intentionally.  IMHO, leaving the latter in your code is asking for trouble. I've seen people bitten by this many times.  If I don't understand why something is happening, I go find out.  I have even had occasions where I seemingly fixed something but didn't understand why my change had fixed the problem. In that case I will go put the bug back in and trace it until I understand why it occurred, and whether my change really did fix it or simply masked some manifestation of the bug.

I'm a firm believer in understanding your code as much as possible.  I don't like to tell someone how to do their job, but on the other hand, if they are asking for my help, I refuse to waste my time chasing a possible ghost. If someone can't fix their own code and needs me to help them, then I'm in command. If they don't like it, they can find someone else to help them; in reality, though, I've never had any push-back on that, as people tend to appreciate when you are spending your time helping them.

6 hours ago, Hodgman said:

I'm getting my indie game ready to exhibit at PAX at the end of October, so any free would-be-blog-writing-time is probably going to get eaten up by shader-code-polishing instead until then

I'll be heading to PAX, will have to give the game a run... will see how polished your shaders look in person.
