Yet another global variable debate

12 comments, last by SmkViper 9 years, 3 months ago

Hey guys,

I have recently been reading Jason Gregory's book, Game Engine Architecture, 2nd Edition. The first thing I noticed is that in the first chapter, which covers engine systems, he uses a lot of global variables. This seems to be pretty common in production code.

His approach is something like this:


class RenderEngine {
public:
    RenderEngine() {
        // Do nothing
    }

    ~RenderEngine() {
        // Do nothing
    }

    void startUp() {
        // Initialize the subsystem here
    }

    void shutDown() {
        // Release resources here
    }
};

RenderEngine gRenderEngine;

// ...

int main() {
    gRenderEngine.startUp();
    runGameLoop();
    gRenderEngine.shutDown();
    return 0;
}

I don't really understand why they chose to do it like this instead of making use of dependency injection. Maybe somebody with more knowledge can explain?

I've not read the book, but is it possible this is being used to shorten the examples, or is it actually recommended in the main text?

This pattern is used a lot; whether it's a global variable or a singleton, the idea for access is the same. Usually it's done to make the components you need easier to access, but they always end up as hidden dependencies. I don't use this pattern in my own code. I have only one global variable, called application; it spawns all the other systems and uses dependency injection to pass around a resource object that holds pointers to all of them.

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

I used to write a lot of code like that. I am thinking of my chess engine, where the board, the hash tables and the opening book were all global objects. These days I would rather not do that. There is a class Engine that contains all those things, and there is an instance of Engine as a local variable in main. This allows me to do things like start up two engines in the same process and match them against each other, or verify that they both behave the same way when I am testing a modification that should be a no-op.

IMHO I like this approach for its simplicity and the easier access it gives to my lower-level engine layers, and since I'm not using RAII, I have to call Module::Init(), Module::ShutDown(), etc. anyway. But I think as long as you make sure each sub-system has a well-defined life-cycle (initialization, usability, resource release), this shouldn't be a problem.

PS: in my case, I use plain static classes, not singletons or extern globals.

There's really no debate to be had: that global state is bad is very much agreed upon, and the ways in which it is bad are well quantified. Still, it's a convenience (I daresay a crutch) that some programmers and/or program architectures rely upon.

Some of the problems are:

  • Any sub-system can modify any global state that it can see. This makes stringent testing effectively impossible, because the only practical way to "test" anything in such a system is to begin by presuming that certain subsystems do not modify it even though they can see it. This is obviously less reliable than ensuring that such a subsystem cannot modify it through proper dependency management.
  • Any sub-system can read any global state that it can see. This makes dependency management a matter of convention by fallible programmers, rather than something they are aided in by the compiler. What's more, dependencies that develop often go unseen by the inattentive programmer, and the web of dependencies can lead to coupling of the whole system in the worst case.
  • When state is shared through global means, there is no good way to delegate different levels of access without relying on subverting the type system -- You might think that you could protect some global state by marking it const, but what if another subsystem really does need write access to that state? You must either leave it non-const, or mark it const and then subvert the type system to initialize it (if you cannot do so when it is declared). When state is passed via parameters, the subsystem can promise not to modify the state by taking a const reference to the potentially non-const data.

The only arguments in favor of global state are convenience and laziness, because it saves you the appearance of having to think about any of the things above and the myriad other issues global state can cause. Of course, by not thinking about it, it then becomes unnecessarily hard to reason about bugs, so whatever you gain in the convenience of writing code is not infrequently swallowed by the difficulties you gain in debugging it.

That being said, it's sometimes an evil that can be (in small programs) or must be tolerated -- for example, in a large program where significant new features must be added without changing the interfaces of the abstraction layers in between (e.g. you cannot add dependency injection) because you must preserve compatibility for whatever reason, you might create a global object so that you can initialize it in an outer scope but use it in an inner scope. Even so, just because scenarios like this can justify it ex post facto, that doesn't endorse its use as a starting point. You'll often see things like loggers touted as a justifiable global, and the truth of the matter is a whole debate, but it's not hard to see that the status quo likely emerged for the reason above -- simply that no one had thought to build logging in, and so it was added after the fact.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Thanks for the long and very informative answer, Ravyne!

I've not read the book, but is it possible this is being used to shorten the examples, or is it actually recommended in the main text?

He doesn't explicitly recommend it, but it is presented as the solution to making sure that systems are initialized in a specific order. So it's kinda recommended, yeah. It just seemed odd to me that this is the presented solution, because I expected more "Best-Practice-Code" from a seasoned professional.

My first thought for a solution would have been to just wrap everything in a class and rely on RAII for the correct ordering. Then the systems would just be passed to the objects that need them.


class RenderEngine {
public:
    RenderEngine() {
        // Init here
    }

    ~RenderEngine() {
        // Free here
    }
};

class Application {
public:
    void RunGameLoop();

private:
    // Declare systems in correct order if needed
    RenderEngine renderEngine;
    AudioEngine audioEngine;
    // ...
};

int main() {
    Application app;
    app.RunGameLoop();
    return 0;
}

He doesn't explicitly recommend it, but it is presented as the solution to making sure that systems are initialized in a specific order. So it's kinda recommended, yeah. It just seemed odd to me that this is the presented solution, because I expected more "Best-Practice-Code" from a seasoned professional.

My first thought for a solution would have been to just wrap everything in a class and rely on RAII for the correct ordering. Then the systems would just be passed to the objects that need them.

I'd do it pretty much this way. Relying on the method presented in the book to "make sure initialization order is correct" sounds like a poor reason, because constructors and destructors already take care of that. It's 2014; there is no reason not to use constructors/destructors to handle initialization. Explicit init/uninit methods are mostly a relic from C and early C++, and are often kept around for historical or superficial reasons.


it is presented as the solution to making sure that systems are initialized in a specific order

Isn't using globals actually the reason why you can't enforce the initialization order? If two systems must be initialized in a certain order, make the dependency explicit: have the second system require the first, passed as a constructor parameter. That way you can't write code that mixes them up; you can't create the second system without a reference to the first.

Your solution with the systems declared in a certain order as private members of an enclosing class is error-prone too: anyone can reorder the list, and you'll spend a lot of time tracking down that error. The reordered code looks like working code, so the programmer fixing the resulting bug won't look at it first. With that approach you can only comment heavily, hope the comments are kept up to date if anything changes, and hope everyone reads them.

Your solution with the systems declared in a certain order as private members of an enclosing class is error-prone too: anyone can reorder the list, and you'll spend a lot of time tracking down that error. The reordered code looks like working code, so the programmer fixing the resulting bug won't look at it first. With that approach you can only comment heavily, hope the comments are kept up to date if anything changes, and hope everyone reads them.

If you don't have any globals, then any dependencies must be passed. Thus, if the order of construction is important, it should be obvious from the need to pass Object A to Object B's constructor.

Yes, a careless programmer might do something silly, but overall this is quite explicit; I would say more so than the order of init/destroy calls in main(). In particular, where there is global access it is easy to add a new dependency silently (e.g. referencing the RenderEngine in a helper function that is ultimately called while initialising some other part of the system).

It is a pity that C++ has the surprising behaviour of executing member initialisers in the order the members were declared, rather than the order written in the initialiser list. While there is a reason for this, it is not obvious; it would be less error-prone if C++ mandated that the initialiser list order match the member declaration order, to avoid such issues.

This topic is closed to new replies.
