Continuous Integration. The good, the bad and the ugly.
What is Continuous Integration? CI is an Agile development technique that focuses first and foremost on the integrity of the build. That is, we should strive to always have a buildable project. Anyone getting the latest version from source control should have a version of the software that builds on their machine, allowing them to get in and test, develop, or even use it to their heart's content.
That sounds quite simple when you're a single developer working on a single software solution, but what happens when you start moving out into multiple components that span not only multiple projects, but even multiple development technologies? In my world, we're talking about .NET middle tier, SQL Server database tier, SSRS reporting and SSIS to tie them together as part of a "wider" ETL solution. In game dev terms, I hope that most of you at LEAST have various components and layers to your game - be it "engine" as one project and "game" in another. Anyone working in a client-server model will be familiar with this sort of setup - at least I hope so anyway ;)
But I digress. CI enforces the principle that every commit to source control should result in a working build. We use two CI platforms at work, both based on Microsoft Team Foundation Server. One solution uses TFBuild, the other uses CruiseControl. The result is the same - as soon as a developer checks in, the code is pulled from source control and built on a remote machine. Any build failures are flagged to everyone watching, and the guiding principle is "if you break the build, you're responsible for fixing it". Fixing it doesn't always mean changing code yourself; it can mean working with people to ensure that their code integrates properly with yours.
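That check-in/build/notify loop can be sketched in a few lines. This is a minimal illustration of the idea, not any real CI product's API - the step functions (pull, build, notify) are hypothetical names, injected so the loop itself can be exercised in isolation.

```python
# Minimal sketch of one CI cycle: pull the sources, build them, and shout
# if the build broke. All step names are illustrative, not a real CI API.

def run_ci_cycle(pull, build, notify):
    """Run one check-in cycle; returns True if the build succeeded."""
    pull()                                   # fetch the latest sources
    ok, log = build()                        # compile (and optionally test)
    if not ok:
        notify(f"Build BROKEN: {log}")       # make the failure visible to all
    return ok
```

In a real setup `build` would shell out to MSBuild or similar and `notify` would email the team or light up a dashboard; the shape of the loop is the point.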
The first good point of CI is that, by definition, you should always have a buildable version of the project on your machine. The caveat here is that buildable doesn't guarantee it works - a point I'll come back to later. The second good point is that as CI often forces a build to be performed on an external machine (a build machine), you will hit any compile-time or runtime dependency issues early in the project. The obvious benefit is that it helps to avoid "it works on my machine" syndrome. Build machines can be virtual machines or separate physical boxes somewhere - it only really matters if your build takes an age to complete.
This last point is quite important and can require some thought. If your entire codebase takes hours to build, then a full rebuild on every check-in to source control isn't viable. Most CI systems are set up to run incremental builds on check-in (eg: build only the bare minimum) and then a full, clean, release build nightly (or whenever). This is good in two ways: you keep your build times down on incremental builds, but you've also got the safety of a full clean rebuild each night to shake any issues out.
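The heart of an incremental build is a staleness check: rebuild an artifact only if its source has changed since the artifact was last produced. A minimal sketch of that check, using file timestamps the way make-style tools do (function and file names are illustrative):

```python
import os

def needs_rebuild(source, artifact):
    """Incremental-build check: rebuild only if the source file is newer
    than its built artifact, or the artifact doesn't exist yet."""
    if not os.path.exists(artifact):
        return True                          # never built: must build
    return os.path.getmtime(source) > os.path.getmtime(artifact)
```

A nightly clean build simply ignores this check and rebuilds everything, which is what shakes out stale-artifact problems the incremental path can miss.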
A bad thing about CI is this - it really only works properly if you have tests built into the code. That is, your code gets checked in and then self-tests for any issues the changes may have caused. This feature has saved my arse on more occasions than I can count; a simple change can have huge impacts later on down the line and cause undesirable behaviours. If you have automated tests in place to catch this, then CI is a hugely valuable tool (especially if developers run the test suites locally before check-in).
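As a toy example of the kind of test that catches a "simple change with huge impacts": pin down the behaviour of a piece of business logic, so that when somebody later "simplifies" it, the CI build fails instead of production. The function here is invented stand-in logic, not from any real system.

```python
# Stand-in business logic plus the regression test a CI build would run on
# every check-in. If a later change flips the member check or drops the
# floor at zero, the assertions below fail the build immediately.

def discount(price, is_member):
    """Members get 10% off; the result is never negative."""
    final = price * 0.9 if is_member else price
    return max(final, 0.0)

def test_discount():
    assert discount(100.0, is_member=True) == 90.0
    assert discount(100.0, is_member=False) == 100.0
    assert discount(0.0, is_member=True) == 0.0
```

In practice these would live in a test project picked up by the build (NUnit/MSTest in the .NET world); the point is that the expectations are executable, not tribal knowledge.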
The major gripe I have here is that the code itself has to be testable (eg: written in a way that it can be properly unit/component/integration tested), and that there are sufficient tests at hand to fully cover it. This can (and, in my experience, does) cause a situation where you spend a significant period of time writing test cases for code (good, in my opinion), but then maintaining that test code as and when the code changes significantly. The upside is that in theory you should have sufficient tests to understand the system behaviours well enough to change them, and the impact of changes should be visible via failing tests. Obviously, time spent writing and maintaining tests can mean less "production" code, so it's a hard sell to people who don't follow this philosophy.
Another issue is that code is often written in a way that is inherently untestable - it may have many deep-seated dependencies or call too many tightly integrated systems. Working in this way can expose such bad practice and force more decoupled code, which is a good thing - at least to many people. Some systems, however, won't easily lend themselves to this - especially those that have been deeply tuned for performance.
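The classic before/after of that decoupling looks something like this. The coupled version reaches straight for a real database, so it can't be tested without one; the decoupled version takes the dependency as a parameter, so a test can hand it a stub. Everything here (class names, the `real_database_query` global) is illustrative, not from any real codebase.

```python
# Tightly coupled: talks straight to a (hypothetical) real database, so any
# test of total() needs a live server. Calling this outside production would
# blow up - which is exactly the testability problem.
class OrderServiceCoupled:
    def total(self, order_id):
        rows = real_database_query(order_id)     # hard-wired global dependency
        return sum(r["amount"] for r in rows)

# Decoupled: the row-fetching function is injected, so tests can pass a stub
# and production code can pass the real database call.
class OrderService:
    def __init__(self, fetch_rows):
        self._fetch_rows = fetch_rows

    def total(self, order_id):
        return sum(r["amount"] for r in self._fetch_rows(order_id))
```

A test then becomes trivial: `OrderService(lambda oid: [{"amount": 2.5}, {"amount": 4.5}]).total(42)` gives `7.0` with no database in sight.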
In my experience so far, CI works better when implemented from the off. It's harder to retrofit into an existing solution, unless that solution has been designed to be built in a semi-configurable environment. How many of us have hard-coded databases, folder paths, server names, etc, in our code? Working in a CI manner forces us to actively break down those dependencies and make the code/data configurable. The benefits of doing this early cannot be overstated. In my experience, it's often something that's bolted on at the end of the project, or even worse, at the point of release - "oh, you mean the server isn't called XYZ?".
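Breaking that dependency can be as simple as reading the server name from the environment instead of baking it in. A minimal sketch - the `DB_SERVER` variable name and the fallback are assumptions for illustration, not a convention:

```python
import os

# Instead of a hard-coded server name, look it up from the environment with
# a fallback for local development. Each machine (dev box, build machine,
# production) sets its own value; the code stays the same everywhere.
def get_db_server(env=os.environ):
    return env.get("DB_SERVER", "localhost")
```

The same idea applies to folder paths and connection strings; the build machine then just exports its own settings before building and running the tests.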
Another problem with CI is that developers can spend several hours fixing and working on the integrity of the build (fixing integration issues, failing tests, and various build problems on a remote machine). There's an argument that this work happens anyway, often as part of a fixed milestone build (weekly, daily, whatever) - so fixing issues early and in small chunks beats saving it all up until the end. I've seen that, to avoid this pain, developers often shift to a "check in early, check in often" mindset; rather than saving up a day's or week's work for a single check-in, they check in more often to avoid having to fix big changes. This can be good but also bad, in that you may end up with suboptimal code checked in - although it should work, so it's all good, right? This mindset works well in environments like the ones I work in, with busy codebases and many people making small changes often. I would like to see how it works in more distributed environments, like the way Git encourages people to work (local builds, no "master" build, etc). I imagine it'd work well and should promote easy-to-merge change sets.
To summarise:
- CI can ensure a buildable version of your code most of the time
- CI promotes a release early, release often mindset
- CI will benefit you if you can properly isolate and build system components independently
- CI will benefit you if you have automated tests in your code
- You can (and will) spend a lot of time fixing builds and tests
- Writing testable code to benefit CI can often require a mindset change
- CI in general can force a change in behaviours (small, frequent check-ins)