
Continuous Integration. The good, the bad and the ugly.


DISCLAIMER: I don't work in the game industry. I'm not claiming CI is good or bad for people in the game industry. I thought I'd share my experiences anyway.

What is Continuous Integration? CI is an Agile development technique that focuses first and foremost on the integrity of the build. That is, we should strive to always have a buildable project. Anyone getting the latest from source control should have a version of the software that builds on their machine, allowing them to get in and test, develop, or even use it to their heart's content.

That sounds quite simple when you're a single developer working on a single software solution, but what happens when you start moving out into multiple components that span not only multiple projects, but even multiple development technologies? In my world, we're talking about .NET middle tier, SQL Server database tier, SSRS reporting and SSIS to tie them together as part of a "wider" ETL solution. In game dev terms, I hope that most of you at LEAST have various components and layers to your game - be it "engine" as one project and "game" in another. Anyone working in a client-server model will be familiar with this sort of setup - at least I hope so anyway ;)

But I digress. CI enforces the principle that every commit to source control should result in a working build. We use two CI platforms at work, both based on Microsoft Team Foundation Server. One solution uses TFBuild, the other uses CruiseControl. The result is the same - as soon as a developer does a checkin, the code is pulled from source control and built on a remote machine. Any failures in the build are made known to everyone watching it, and the principle is primarily that of "if you break the build, you're responsible for fixing it". Fixing it doesn't always mean changing code yourself; it can mean working with people to ensure that their code integrates properly with yours.
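Stripped of product specifics, the loop a CI server runs is simple: watch source control, build on every new checkin, and shout when the build breaks. A minimal sketch in Python - the `git` and `make` commands here are stand-ins, not what TFBuild or CruiseControl actually invoke:

```python
import subprocess
import time

def should_build(last_seen, rev):
    """A revision we haven't built yet triggers a new build."""
    return rev != last_seen

def latest_revision():
    # Ask source control for the newest revision id (git here, purely as an example).
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def run_build():
    # Any command that returns non-zero on failure can serve as the "build" step.
    return subprocess.run(["make", "all"]).returncode == 0

def ci_loop(poll_seconds=60):
    last_seen = None
    while True:
        rev = latest_revision()
        if should_build(last_seen, rev):
            last_seen = rev
            if run_build():
                print(f"build OK at {rev}")
            else:
                print(f"BUILD BROKEN at {rev} - you broke it, you fix it")
        time.sleep(poll_seconds)
```

Real CI servers trigger on checkin events rather than polling, but the shape of the loop is the same.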

The first good point of CI is that, by definition, you should always have a buildable version of the project on your machine. The caveat here is that buildable doesn't guarantee it works - a point I'll come back to later. The second good point is that as CI often forces a build to be performed on an external machine (a build machine), you will often hit any compile-time or runtime dependency issues early in the project. The obvious benefit of this is that it helps to avoid the "it works on my machine" syndrome. Build machines can be virtual machines, or separate physical boxes somewhere. It only really matters if your build takes an age to complete.

This last point is quite important, and can require some thought. If your entire codebase takes hours to build, then a full rebuild strategy isn't good for every checkin to source control. Most CI systems are set up to do incremental builds on checkin (i.e. build only the bare minimum) and then a full, clean, release build nightly (or whenever). This is good in two ways: you keep your build times down on incremental builds, but you've also got the safety of a full clean rebuild each night to shake any issues out.
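The checkin/nightly split boils down to a tiny policy decision. A sketch, with made-up trigger names:

```python
def build_kind(trigger):
    """Checkins get a fast incremental build; the scheduled nightly run
    gets a full clean release build to shake out stale artifacts.
    The trigger names here are invented for illustration."""
    if trigger == "checkin":
        return "incremental"         # build only the bare minimum
    if trigger == "nightly":
        return "full-clean-release"  # rebuild everything from scratch
    raise ValueError(f"unknown trigger: {trigger}")
```

The real configuration lives in your CI server, of course, but this is the decision it is making on your behalf.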

A bad thing about CI is this - it really only works properly if you have tests built into the code. That is, your code gets checked in and then it self-tests for any issues the changes may have caused. This feature has saved my arse on more occasions than I can count; a simple change can have huge impacts later on down the line and cause undesirable behaviours. If you have automated tests in place to catch this, then CI is such a valuable tool (especially if developers run the test suites locally before checkin).
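As an illustration of that safety net, here is a toy test of the kind a CI server would run after every checkin. The discount function and its rules are invented for the example; a "simple change" to the rounding or the bounds check would fail the build immediately:

```python
import unittest

def apply_discount(price, percent):
    """Apply a percentage discount to a price (hypothetical business rule)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_rejects_nonsense_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

The CI server runs the suite (e.g. `python -m unittest`) after the build; a non-zero exit code marks the build broken.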

The major gripe I have here is that the code itself has to be both testable (i.e. written in a way that it can be properly unit/component/integration tested), and that there are sufficient tests at hand to fully cover it. This can (and does, in my experience) cause a situation where you spend a significant period of time writing test cases for code (good, in my opinion), but then maintaining that test code as and when the code changes significantly. The upside of this is that in theory you should have sufficient tests to understand the system behaviours enough to change them, and the impact of changes should be visible via failing tests. Obviously, time spent writing and maintaining tests can yield less "production" code, so it's hard to sell to people who don't follow this philosophy.

Another issue is that code is often written in a way that makes it inherently untestable - it may have many deep-seated dependencies or call too many tightly integrated systems. Working in this way can often expose such bad code practice and force more decoupled code, which is a good thing - at least to many people. Some systems, however, will not easily lend themselves to this - especially those that have been deeply tuned for performance.
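A minimal sketch of that decoupling, with invented names: the first function hard-wires its dependency and can't be tested in isolation, while the second takes the dependency as a parameter, so a test can hand it a fake:

```python
class ProductionScores:
    """Stands in for a deep-seated dependency - imagine it needs a live server."""
    def top_score(self):
        raise RuntimeError("needs a live connection")

def best_score_untestable():
    # Hard-wired dependency: any test of this function hits the real system.
    return ProductionScores().top_score()

def best_score(source):
    """Decoupled version: any object with a top_score() method will do."""
    return source.top_score()

class FakeScores:
    """A test double the CI test suite can use instead of the real system."""
    def top_score(self):
        return 9001
```

A test can now call `best_score(FakeScores())` without any live infrastructure, which is exactly what makes the code CI-friendly.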

In my experience so far, CI works better when implemented from the off. It's harder to retrofit into an existing solution, unless that solution has been designed to be built in a semi-configurable environment. How many of us have hard-coded databases, folder paths, server names, etc, in our code? Working in a CI manner forces us to actively break down those dependencies and make the code/data configurable. The benefits of doing this early cannot be overstated. In my experience, it's often something that's bolted on at the end of the project, or even worse - at the point of release - "oh, you mean the server isn't called XYZ?".
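One sketch of breaking that dependency: pull the server and database names from the environment (with explicit defaults) instead of baking them into the code. The variable names here are made up:

```python
import os

def connection_string():
    """Build a database connection string from the environment, so the build
    machine, a developer box, and production can each point somewhere else.
    APP_DB_SERVER / APP_DB_NAME are hypothetical names for this example."""
    server = os.environ.get("APP_DB_SERVER", "localhost")
    database = os.environ.get("APP_DB_NAME", "AppDb")
    return f"Server={server};Database={database};Trusted_Connection=True"
```

The same idea applies to folder paths, report server URLs, and anything else that differs between the build machine and a developer's machine.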

Another problem with CI is that developers can often spend several hours working on the integrity of the build (fixing integration issues, failing tests, and various build problems on a remote machine). There's an argument that this work would happen anyway, often as part of a fixed milestone build (weekly, daily, whatever) - so fixing issues early and in small chunks beats saving it all up until the end. I've seen that to avoid this pain, developers often shift to a "check in early, check in often" mindset: rather than saving up a day's or week's work for a single checkin, they check in more often to avoid having to fix big changes. This can be good, but also bad, in that you may end up with suboptimal code checked in - although it should work, so it's all good, right? This mindset works well in environments like the ones I work in, with busy codebases and many people making small changes often. I would like to see how it works in more distributed environments, like how Git encourages people to work (local builds, no "master" build, etc). I imagine it'd work well and should promote easy-to-merge change sets.

In summary:
  • CI can ensure a buildable version of your code most of the time
  • CI promotes a release early, release often mindset
  • CI will benefit you if you can properly isolate and build system components independently
  • CI will benefit you if you have automatic tests in your code
  • You can (and will) spend a lot of time fixing builds and tests in code
  • Writing testable code to benefit CI can often require a mindset change
  • CI in general can force a change in behaviours (small, frequent checkins)
I hope this post is useful in some way, at least in terms of provoking thought and/or discussion.
Recommended Comments

I routinely walk away for the night (usually in disgust) from projects that don't compile. But then, nobody ever accused me of being a software engineer. How's the galaxy these days?

It isn't new to Agile development; it has been around much longer than that.

You comment that it can be hard to retrofit code, but really those problems are with your code to begin with. How can anyone suppose hard-coded paths and server names are a good thing? Those are flaws in your code whether you use CI or not.

Any time you have a team making changes you must integrate your changes back in. Sometimes there will be conflicts. The benefit is that in conjunction with a continuous build, you always know that the build is working. The alternative is to wait until you've got a huge bunch of changes to resolve; you'll have a bigger chance of conflicts due to the longer time frame, and a harder time fixing it because you may have been trying to fix multiple bugs in that area of code. You agree that it is a good thing to integrate frequently for those reasons.

But you suggest that you would need to submit incomplete modules. I disagree. You certainly should check in atomic changes, not incomplete modules. You suggest that this could be a problem because you feel writing the code could take a week or more. If a chunk of code REALLY takes a full week to write, you've got some serious problems with your design. I can often implement features in a day or two, sometimes three for big tasks, and bug fixes are generally a matter of minutes or a few hours. If you can't get something working in a full week then you really haven't done a good job of breaking your design down into tasks.

Most places I've worked have followed this model, even a decade before the term Agile Development was coined in 2001. The few places that didn't follow it had serious workflow problems. It works very well. Everyone works in their local branch. They integrate to the common or main line frequently. There is a continuous build server or server farm that constantly rebuilds the product (game or not), ensures that it compiles, and ensures that tests pass. If there is a problem then everyone is notified because it is now unsafe to sync.

Without continuous integration and a continuous build, you could go for days or weeks with a completely broken build before anyone notices. The resulting downtime of tracking down the breaking change and fixing it causes serious delays, sometimes several days or weeks with everyone trying different configurations and hunting down the problem. We did that at one company, and I remember one particularly bad mess that took almost a full month of calendar time (roughly a work-year of combined effort) to correct. It would have been cheaper to simply hire somebody full time to manage the continuous build system than to fix that one single problem. They were generally down several days a month due to bad integrations, with a lead programmer almost always hunting for the latest breaking change.

To me the decision is a no-brainer.

Completely agree that you need some form of automated build (independent of whether it's CI or nightly). It forces you to not hardcode paths, it makes sure QA has a 'fresh build' on a regular basis, and you don't get developers trying to manually put together a build by copying files from whatever network locations (that does go wrong at some point).
I would even go one step further and say that the build should result in not only an executable, but an installer for a complete product (including all assets, sounds, scripts, ...).

>Build machines can be virtual machines, or separate physical boxes somewhere. It only really matters if your build takes an age to complete.
If a full build takes "ages" to complete, then you have a serious problem anyway which needs to be addressed. It means in effect that your development team is waiting out a (big) part of the day doing "nothing". Be pro-active and look for options like IncrediBuild, or optimize your #include strategy ( http://kitt3n.homeftp.net/wiki/dev/index.php/Buildtimes ).

>The major gripe I have here is that the code itself has to be both testable
Write tests where it makes sense: math code or, in general, code which is 'easily' tested - usually lower-level code. Keep it simple - imo it's no use testing a "big" system automatically this way because that takes too much time; that's the job of your QA department.

>If a chunk of code REALLY takes a full week to write you've got some serious problems with your design.
Agreed, however I've been in situations where you want the trunk to remain stable but need to make a compatibility-breaking change. In that case you can decide to make a feature branch. This gives you the possibility of making non-atomic commits (without disturbing anyone) and merging back the moment you are done.

>But you suggest that you would need to submit incomplete modules. I disagree. You certainly should check in atomic changes, not incomplete modules.
imo, make a separate commit for every 'feature' (no matter how small). Then at least you have a chunk of related code when you track back through history, which is far better than having 10 features pushed into one big commit.

Interesting article with some valid points. I disagree with the point where you say that it gets tedious to maintain the test code, though. Good tests should make only abstract interface calls that give back results that are easy to check. If the underlying implementation changes, your test code shouldn't.
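That point can be sketched concretely: the check below talks only to an abstract interface, so swapping the list-backed implementation for anything else leaves the test untouched. All names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The abstract interface the tests are written against."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """One concrete implementation - the tests never mention its internals."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

def check_stack(stack):
    """Interface-only test: works unchanged for any Stack implementation."""
    stack.push(1)
    stack.push(2)
    return stack.pop() == 2 and stack.pop() == 1
```

Replacing `ListStack` with, say, a linked-list version requires no change to `check_stack` at all.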

Poorly written tests can be a really big issue for continuous integration. If you have a lot of bad tests which need a lot of time to maintain, developers' acceptance of writing tests can shrink more and more. So it makes sense to review the test code as well.

