Haven't written for a long time, but I've been busy working on my "tech", mostly preparing for my upcoming project. When I started to think through what I would like to write about, I realized that it is going to be a rather long one, hence the "part 1" in the title; I'm planning to continue this topic in a week or two.
Now onto some tech talk, but before I go on, I have to tell you that I'm a BIIIIIG software testing advocate, and this topic will mostly be about it. If you think that stuff is gibberish, do not continue!
So, I've been working in the last few weeks on improving my testing work-flow and the tech supporting it. As I've probably mentioned before, I have a small but really STABLE code-base, which I nicknamed "Magic Item" (let's call it the framework from now on). I use this framework to build games. It is based on XNA, but mostly uses it for rendering, sound and input handling, and provides somewhat higher-level services, like a game-object framework, animated sprites, collision handling etc., that are common to all games regardless of their type.
I've emphasized stable for a good reason. Every single function that gets into this framework is unit tested and documented. I'm confident that it is close to impossible to develop software (especially big systems) in the long run without proper automated regression and sufficient documentation. How to achieve this can be argued about and is somewhat a personal preference (some like unit tests, some do not and prefer to focus on module/functional tests; some like API docs, some do not and prefer more feature-based, high-level documentation not tied to the code), but I hope many agree that it is a must (yes, I'm being naive, I know).
I love the look of green test results in the morning.
This means I'm pretty thorough.
I've tried to create a lot of games before, many of which failed because the ideas were too ambitious or I was not persistent enough, but usually I could at least salvage some code from these projects and build it into this little framework of mine. I've been developing the framework this way for years now, and every time I stumbled upon a feature that could be useful for many games, I properly integrated it by designing, implementing, testing and documenting the code. It is a small library, since this has always been more of a hobby endeavor, but due to its level of polish, working on it or with it to create a game cheers me up!
Then came Operation KREEP. This was my second completed game project, and I realized something really important during its development. I had to write a lot of code specific only to this game, and, I find this pretty shameful, I had no proper regression suite to back it up. In the last weeks of development I was doing hours of manual testing just to make sure I did not break anything accidentally. I considered this a failure from a developer perspective, since I knew perfectly well what I was doing and still did not prepare, wasting a lot of time. Then again, unit testing only a small part of the high-level code in KREEP was not such a bad call, since unit testing is not the kind of method that can cover a lot of functionality with a small time investment. So I realized that I have to find a cheap/smart way of testing the actual games I make, in an automated fashion.
I've decided that unit testing works perfectly for the framework code, but I have to reach a much higher test level for the game projects. My other requirements were that it has to be stable (as deterministic as possible), simple to automate, and that it should be really easy to create or extend test cases that cover a lot of ground. Yep, no worries, this is a trivial task!
I've been working on this testing method for the last three to four weeks, and I believe I've arrived at a really good compromise. I'm not going to go into too much detail in this post (I want to leave some stuff to talk about for next time), but here is the overall design:
The testing system and work-flow are based on "capture and replay". The framework provides an extensible system for capturing events while you are playing (e.g. input device events such as key presses and mouse moves, but the client can define its own event types and capture+replay mechanisms) and a mirror construct for replaying them. Beyond replaying input events, the replay files themselves can be extended with various checks to be executed at a certain time or frame, and with some reflection magic even tiny details of the game-world can be verified. This way you can capture game-play footage as you are playing the game, replay it any time later, and easily turn your recordings into test cases and build a regression suite out of them by adding various asserts and checks at certain points. I did my homework; I know the strengths and short-comings of "capture and replay" based testing, and I worked my ass off to come up with a good solution for most of the problems, or at least to make the system work for me instead of against me.
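To give a rough idea of the shape such a system can take, here is a minimal, self-contained C# sketch (all names in it, Replay, GameWorld, KeyPressEvent and so on, are invented for illustration and are not my framework's real API): recorded events and checks live in the same frame-stamped timeline, and replaying means stepping the world with a fixed time step while applying each entry on its frame.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a capture & replay core. Every timeline entry is
// stamped with the frame it occurred on; replay feeds entries back at the
// same frames, and assertion entries ride in the same timeline.
public interface IReplayEvent
{
    int Frame { get; }
    void Apply(GameWorld world); // re-inject the event, or run a check
}

public sealed class KeyPressEvent : IReplayEvent
{
    public int Frame { get; }
    public string Key { get; }
    public KeyPressEvent(int frame, string key) { Frame = frame; Key = key; }
    public void Apply(GameWorld world) => world.InjectKeyPress(Key);
}

// A check is just another timeline entry: it runs at its frame and throws on failure.
public sealed class AssertEvent : IReplayEvent
{
    public int Frame { get; }
    private readonly Predicate<GameWorld> check;
    public AssertEvent(int frame, Predicate<GameWorld> check)
    {
        Frame = frame;
        this.check = check;
    }
    public void Apply(GameWorld world)
    {
        if (!check(world)) throw new Exception($"Check failed at frame {Frame}");
    }
}

public sealed class Replay
{
    private readonly List<IReplayEvent> timeline = new List<IReplayEvent>();
    public void Record(IReplayEvent e) => timeline.Add(e); // assumed recorded in frame order

    // Deterministically step the game, applying each recorded entry on its frame.
    public void Run(GameWorld world, int totalFrames)
    {
        int next = 0;
        for (int frame = 0; frame < totalFrames; frame++)
        {
            while (next < timeline.Count && timeline[next].Frame == frame)
                timeline[next++].Apply(world);
            world.Update(1.0 / 60.0); // fixed time step keeps replays deterministic
        }
    }
}

// Toy stand-in for the game; the real one would run the full simulation headlessly.
public sealed class GameWorld
{
    public int FramesUpdated { get; private set; }
    public List<string> PressedKeys { get; } = new List<string>();
    public void InjectKeyPress(string key) => PressedKeys.Add(key);
    public void Update(double dt) => FramesUpdated++;
}
```

The key design point is that the fixed time step plus re-injected input is what makes a recording repeatable; any nondeterminism in the simulation breaks the whole idea.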
Most of the implementation is done. I've already hooked it up to NUnit, so replay-based test cases can be executed with the NUnit runner (I use NUnit for unit testing too, so it was a natural choice), and the whole concept seems to work surprisingly well! I'm really proud of the result. Testing the final build of my next game will be a breeze.
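The NUnit hookup can look roughly like the sketch below: each recorded replay becomes one test case, so the regular NUnit runner executes the whole replay regression. Replay.Load, RunToEnd and the replay file names are invented stand-ins here, not my framework's actual API; only the NUnit attributes ([TestFixture], [TestCase]) and Assert.DoesNotThrow are real NUnit constructs.

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ReplayRegressionTests
{
    [TestCase("replays/duel_basic.rec")]        // invented file names
    [TestCase("replays/four_player_draw.rec")]
    public void ReplayPassesAllItsChecks(string file)
    {
        Replay replay = Replay.Load(file);
        // Every check embedded in the replay throws on failure, which NUnit
        // then reports as an ordinary failed test.
        Assert.DoesNotThrow(() => replay.RunToEnd());
    }
}

// Minimal stub so the sketch stands alone; the real class would step the game
// headlessly, re-inject the recorded input and run the embedded checks.
public sealed class Replay
{
    public static Replay Load(string file) { return new Replay(); }
    public void RunToEnd() { }
}
```

The nice property of this shape is that a new regression test is just a new recording dropped next to the others plus one [TestCase] line.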
In my next post (probably sometime around next week) I'm going to talk about the details of my implementation and how I approached the design of the system to achieve my requirements.
Until then, I wish you and your family a merry Christmas, and if I happen to be too busy (or lazy) during the holidays and postpone the next post, a happy new year too!