One-Step vs Two-Step Initialization (C++)

Started by
32 comments, last by Matias Goldberg 11 years, 9 months ago
I switched to one-step init using constructors a couple of years ago and never looked back. I don't use exceptions (in C++) because in my games, if I need something, I really need it to keep the game going; so if something goes bad I just crash with a call-stack log and go fix the bug. The rare things that might fail and can be recovered (i.e., an optional data file that isn't there) get pre-tested, then constructed if the situation allows.
But the rule is: right after an object is constructed, it has to be ready to go and fully initialised. I have the feeling this has dramatically decreased the amount of runtime crashes in my software, because construction order and object relationships are much better planned. No init( bla, bla ), no setMyImportantPointer( bla* bla ). If one object needs another one to be fully constructed, the relationship is explicitly expressed in the constructor, so everything is forced to be top-down.

Stefano Casillo
TWITTER: [twitter]KunosStefano[/twitter]
AssettoCorsa - netKar PRO - Kunos Simulazioni

Consider the C++ standard library. With a std::fstream you can specify a file name as a constructor argument, but if the file isn't opened, it doesn't throw an exception. In comparison, if any of the std::string constructors fail, an exception is thrown.


That was not a design decision; it happened because the iostreams library was written before exceptions were added to C++. You can enable exceptions in iostreams by calling the istream::exceptions() or ostream::exceptions() method.

And I wouldn't refer to the iostreams library for any kind of design advice. It's not exactly a shining example of C++ engineering.
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter - Topics: Ogre3D, Blender, game architecture tips & code snippets.
All my classes are initialized in their constructors and destroyed through their destructors.

I've worked on countless large-scale projects, and claiming that some objects "need" two-stage construction because they are too complex just means you're putting too much stuff in a single class. One class, one purpose. Using methods like Init() and Shutdown() is just sloppy style and could be avoided by properly designing your object model. Yet for some reason some C++ programmers appear to be scared of adding classes - usually with excuses like overhead, performance, binary size and whatnot.

C++ classes are designed to be usable without overhead. You can declare a struct with some methods and a std::uint16_t in it and it will have a size of 2 bytes. If you stack-allocate it, it's the same as if the code were implemented in its owner class. That's why there's really no point in writing silly classes like CGraphics with InitWindow(), InitD3D() and stuff like that.


The same goes for exceptions: I don't do error handling without them. That includes Windows, the Xbox 360, Android (CrystaX NDK) and WinRT (Win8 + tablets + phones).

There aren't only those cases where someone mistyped an asset filename (and even then exceptions would be the appropriate choice: the OpenFile() method can't resolve the error, so it goes up the call stack; the LoadImage() method can't resolve it either, so up it goes again; the ReadAssets() method finally could catch it, log the error and use a pink placeholder asset). Back to the paragraph's opening line, there are tons of other cases where errors can occur and you can't do a thing: failed to initialize Direct3D, no compatible graphics adapter, unsupported pixel format, swap-chain resize failed. And of course all those little API calls that usually work, but where due diligence requires us to check that they really did their job.

The point many C++ programmers don't get is that exceptions aren't fatal; they merely indicate that the current scope can't reasonably deal with the error. Yes, there's the concept of exception safety, forcing you to employ RAII; if you ignore it, funny things may happen. Without exceptions, there's the risk of forgetting to check result codes, forcing you to make your code unreadable by littering it with error checks; if you ignore that, funny things may happen, too. Given the choice between tedious result-code checking with unreadable code and equally tedious RAII programming with nice code, guess which I'll pick.
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter - Topics: Ogre3D, Blender, game architecture tips & code snippets.
Setting aside the exception part of the "safe code" vs. "fast & stable code" philosophy and practices being discussed, I would like to raise a point related to building an actual game:

Your game will eventually get to the point where a level is restarted (maybe because the player died), or a full game session is restarted. Sometimes the one-step approach works fine, because objects from the previous level/session get destroyed and recreated.
In fact I use this a lot (minus the exceptions; I don't use them).

But it may be possible that some of these objects must, for some reason, stay persistent across levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value.
For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit() is failing (because you pasted more code than you should have, or forgot to copy something).
I also use this two-step approach when suitable.
Furthermore, a multi-step initialization design adapts more easily to multithreading (leave the concurrent part in one or more passes, and execute the serial code in another pass).

Just food for thought.

That was not a design decision; it happened because the iostreams library was written before exceptions were added to C++.

C++ went through a process of standardization in which features were added to the language, and things like the CFront library designs were changed in response to those feature changes. The new versions of the standard library were given extensionless header names, while the old CFront-style implementations continued to be available for a while from most compilers in the form of .h header files: iostream vs. iostream.h, fstream vs. fstream.h, etc.

There are actually some interesting differences between the interfaces of the old iostream library and the newer one, such as the removal of the noreplace and nocreate flags. Another is that the old-style iostream library gave istream and ostream protected constructors. To extend the library you would derive from istream or ostream, and in your constructor use the default constructor of the base class and call the init() function from the base class.

Whether or not you believe the interface of the iostream portion of the standard library to be a good design, it was nonetheless an actual design decision.
But it may be possible that some of these objects must, for some reason, stay persistent across levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value.
For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit() is failing (because you pasted more code than you should have, or forgot to copy something).
I also use this two-step approach when suitable.


And that is, in my opinion, where your design went bad. It only seems that the two-step approach is "much better suited" because earlier on, you failed to separate those parts of your game object that survive a map change from those that are map specific.

By knitting them both into the same class, you created a weird amalgamation which will cause raised eyebrows in many situations: if you want to save the game's state, you have to carefully check what is permanent (= you save it) and what is level-specific (= don't save it, but pull it from somewhere after the object was loaded). If the map is unloaded, only a subset of the methods of those game objects may be used (= be careful to document which methods you may use when displaying, e.g., player stats in the menu without a loaded map).

Whether you implement such two-stage initialization with Init()/Shutdown() or Reinit() is just a detail - it's two-stage initialization either way. Code duplication would be avoidable in both cases.
Professional C++ and .NET developer trying to break into indie game development.
Follow my progress: http://blog.nuclex-games.com/ or Twitter - Topics: Ogre3D, Blender, game architecture tips & code snippets.
For functionality such as level resets, this makes me think of functions like [font=courier new,courier,monospace]std::vector<T>::clear[/font], where [font=courier new,courier,monospace]std::vector[/font] could be implemented with either 1-step or 2-step initialisation, regardless of its ability to be reset back to a default state -- This seems like an orthogonal issue, and should be equally possible under either idiom...

For times when you want to set a large amount of state back to some very particular values (e.g. loading a save game, reloading the default level state), I'd just make sure that all of that data is POD, in as few contiguous blocks as possible, and uses offsets instead of pointers, so that it can be set/reset with just a single (or a few) [font=courier new,courier,monospace]memcpy[/font] calls.
Comparing this simple approach to fancy 2-step deserialisation systems with init-order dependencies and pointer-patching... those systems just make me think of the pejorative "enterprise software"...
I remember learning about the fundamental usefulness of invariants and pre- and post-conditions, and how they can make reasoning about software easier and more correct. I remember that as object-oriented tools became available outside academia in the 1980s, these engineering concepts were applied there and became ideas like object invariants.

Then PCs became popular, everyone typed in BASIC from magazines, and we were thrown back to the spaghetti code of the 1950s and 1960s. Then along came the web and everyone wrote JavaScript. Now we see arguments over whether good engineering practices developed through peer-reviewed journals over decades of experience are good (i.e. using C++ constructors to construct objects) or whether it's best to throw spaghetti at the fridge and see what sticks because that's the way you've always done it (using C/Fortran/BASIC-style initialization functions to set values in your structures).

I guess there are a large number of factors to take into consideration when you decide which methodology to use, ranging from your age and experience to whether you're going to be maintaining the software in the long term and how long it's been since your manager has actually touched code (the latter is usually the most important factor in any tech decision).

Stephen M. Webb
Professional Free Software Developer

The Builder pattern solves this. Pass only the data (or a private implementation) from the builder to the constructor, and in the constructor do only no-throw member initialization. The builder can be multi-step, polymorphic, and throw exceptions or return null values, whichever one likes. And the constructed objects are always usable and valid.

[quote name='Matias Goldberg' timestamp='1342479633' post='4959796']But it may be possible that some of these objects must, for some reason, stay persistent across levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value.
For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit() is failing (because you pasted more code than you should have, or forgot to copy something).
I also use this two-step approach when suitable.


And that is, in my opinion, where your design went bad. It only seems that the two-step approach is "much better suited" because earlier on, you failed to separate those parts of your game object that survive a map change from those that are map specific.

By knitting them both into the same class, you created a weird amalgamation which will cause raised eyebrows in many situations: if you want to save the game's state, you have to carefully check what is permanent (= you save it) and what is level-specific (= don't save it, but pull it from somewhere after the object was loaded). If the map is unloaded, only a subset of the methods of those game objects may be used (= be careful to document which methods you may use when displaying, e.g., player stats in the menu without a loaded map).

Whether you implement such two-stage initialization with Init()/Shutdown() or Reinit() is just a detail - it's two-stage initialization either way. Code duplication would be avoidable in both cases.
[/quote]
I won't deny that design went wrong. But unless you're coding Tetris or Pac-Man, design issues will always come up given enough scope. I do actually separate persistent from non-persistent data, but the combinations were trickier than I anticipated.

These are all different kinds of reinits you should take into consideration (although some may not apply to all kinds of games). They're not the same and are treated differently:

  • Level reloading: the player died. Reloading should be very quick to prevent frustration. Of course, a well-balanced game should prevent a player from dying often, but that's not a technical issue. Anyway, YOU are going to die very often while balancing the game, and long reloading times don't help.
  • Object reloading because a new level was started or a different area was reached. This is usually taken into consideration.
  • Object reloading for memory & performance: it's still the same level/area/whatever, but memory usage is going off the charts and your engine is capable of destroying objects that are no longer needed until the player gets close to them again. This is usually taken into consideration but rarely implemented the right way.
  • Reloading for in-place editing: this is often the most overlooked, the most versatile (which is what makes it hard), and one of the most relevant! Iteration is very important to making a great, fun game, and real-time editing is key to improving iteration. The point of this kind of "reload" is to save the designer from closing and reopening the program every time he makes a change. This could be a GUI modification, a stat value change, a different placement of an object, a change of size. It can get worse: it could be a change to a value used to precompute something at level-loading time. And you need this kind of reload to be faster than closing and reopening the game, without crashing it (dangling pointers? division by zero?) or leaving inconsistent states (most objects still using the old values).

These are all reloads that may be treated differently (especially the last one). And given enough complexity they start to become a bit contradictory, in just the same way that GPUs are faster when sorted front to back, traversing by shadowbuffer to save switching rendertargets, traversing by surface type to save switching shaders, and traversing by skeleton to keep the animation caches nice and warm, supposedly all at the same time (yes, I just quoted TomF's blog's Scene Graphs article).

Oh, and I forgot... keep it FAST. In-place editing can be done the right way, but then you get Blender-like or Maya-like performance. That's good, but nowhere near good enough for a real-time game. Or you can build your game to run very efficiently, but then there's a lot to preprocess or tag as "read only".
And make sure your reloading for "memory & performance" is done in the background. Framerate spikes are very bad for gameplay experience.

This is, among many reasons, why some engines opt for two different executables for the game editor and the game itself rather than one. That's ok, but just make sure you have the resources (namely time & money) to keep two different projects up to date. A brilliant design can minimize the effort to keep them both synchronized, but who said it was easy?

And like you said, avoiding code duplication is key, and I can't place enough emphasis on it. Maybe that part of my post was misunderstood? I never argued in favour of duplicating code or tried to imply I ended up doing that. It's the other way around: I was trying to show how to prevent it.

Cheers
Matias

This topic is closed to new replies.
