Experiences with continuous integration, anyone?

7 comments, last by Orymus3 11 years, 5 months ago
Hey all,

I'm currently reading about continuous integration (Wikipedia) and generally it sounds very nice. However, as with a lot of these things, I'm afraid it sounds nice but may have some pitfalls when used in practice. I'm especially concerned that the setup and maintenance of such a system could consume quite a bit of energy and time.
So, does anyone have experience with this?

What would also be interesting: if you're using automatic builds and testing, what metrics do you use to track progress?
To quote Wikipedia:

[quote]A complementary practice to CI is that before submitting work, each programmer must do a complete build and run (and pass) all unit tests. Integration tests are usually run automatically on a CI server when it detects a new commit. All programmers should start the day by updating the project from the repository. That way, they will all stay up-to-date.[/quote]

We did this at our last job. We'd push our changes to a build machine, which would merge them with the latest version, then build and test on all platforms. If anything went wrong, it emailed you the errors to fix; otherwise it committed your changes to the repo and sent a success email.
Not sure about setup time, because it was up and running before I got there, but it was amazing to work in a place where there was never a broken build.
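
For the curious, the gate looked roughly like the sketch below. This is a minimal Python sketch, not our actual tooling; the repo path, build commands, and email addresses are invented placeholders, and the real machine did this once per platform:

[code]
import subprocess
import smtplib
from email.message import EmailMessage

REPO = "/srv/build/mainline"  # invented path for the build machine's checkout

def run(*cmd):
    """Run a command in the repo checkout; return (ok, combined output)."""
    result = subprocess.run(cmd, cwd=REPO, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def notify(author, subject, body):
    """Mail the submitting developer the outcome of their build."""
    msg = EmailMessage()
    msg["From"] = "buildbot@example.com"  # placeholder address
    msg["To"] = author
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def gate_change(patch_file, author):
    run("git", "checkout", "main")
    run("git", "pull")
    ok, out = run("git", "apply", patch_file)  # merge with the latest version
    if ok:
        ok, out = run("make", "all")           # build
    if ok:
        ok, out = run("make", "test")          # run the tests
    if ok:
        run("git", "commit", "-am", "Gated commit for " + author)
        run("git", "push")
        notify(author, "Build succeeded", "Your change has been committed.")
    else:
        notify(author, "Build failed", out)    # mail back the errors to fix
[/code]

The key property is that nothing lands in the repo unless the merged result builds and passes tests, which is why the build was never broken.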
[quote]What would also be interesting: if you're using automatic builds and testing, what metrics do you use to track progress?[/quote]

Same as always: people ticking off tasks (orthogonal to the build system).
We use it at work, and it is of great value to us, but I don't know if it would offer the same value to a lone developer (or even a small team).

The primary value is to catch the sort of subtle merge/rebase errors that are sure to occur once you have 50+ developers all committing to the same repositories. The secondary value is to catch when developers failed to run the tests before merging, or when tests fail intermittently.
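
One cheap way to expose that last case: rerun the suite several times on the same commit and see whether the results agree. A minimal sketch, with "make test" standing in for whatever the real test command is:

[code]
import subprocess

def flakiness_report(cmd=("make", "test"), runs=5):
    """Run the same suite repeatedly on one commit; flag mixed results."""
    results = []
    for _ in range(runs):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append(proc.returncode == 0)
    if all(results):
        return "stable: every run passed"
    if not any(results):
        return "broken: every run failed"
    # Mixed results on an identical commit mean the failure is
    # intermittent -- exactly what a single pre-merge run can miss.
    return "flaky: {}/{} runs passed".format(results.count(True), runs)
[/code]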

In a small shop you are much less likely to encounter those issues, nor are you likely to need to push regular stable builds to 2nd-party developers.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

I've used this approach on a number of projects. Generally speaking, projects using it tend to take less debug time overall because we don't pile bugs on top of bugs. Since bugs are caught while still fresh, there isn't a lot of logic built up in higher layers making them hard to reconcile.
It tends to have its limitations though, and I've found that with larger teams, the iteration process simply takes longer.
A 3-dev team can iterate multiple times during the day, no problem; a lot of QA gets discussed in person and fixed on the fly.
With larger teams of, say, 12 devs, this becomes a bit more chaotic, and iterating simply takes longer, turning it into daily merges.

(We use Jenkins for the most part.)

[quote]It tends to have its limitations though, and I've found that with larger teams, the iteration process simply takes longer.[/quote]

Could you elaborate on this a bit more? What kind of limitations did you encounter?

Jenkins came up in a discussion at my workplace; is it any good?
Jenkins rocks if you can set it up properly.
The issue we're having with it is that the nightly builds stop processing correctly after a few days, so we're continually putting time into reconfiguring them the right way.
While Jenkins helps us facilitate build deployments and whatnot, this reconfiguration takes its toll on our dev time, so the gain is not astronomical.
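
For what it's worth, you can at least catch the "nightlies went bad again" case early by polling Jenkins' JSON API. A minimal sketch; the server URL and job name are placeholders, and a real server will usually want an API token as well:

[code]
import json
from urllib.request import urlopen

JENKINS = "http://jenkins.example.com"  # placeholder server URL
JOB = "nightly-build"                   # placeholder job name

def last_nightly_result():
    """Fetch the result of the most recent build of the nightly job."""
    with urlopen(f"{JENKINS}/job/{JOB}/lastBuild/api/json") as resp:
        build = json.load(resp)
    # "result" is SUCCESS/FAILURE/UNSTABLE, or None while still building.
    return build.get("result")

if __name__ == "__main__":
    result = last_nightly_result()
    if result not in (None, "SUCCESS"):
        print(f"Nightly is {result}; time to go reconfigure Jenkins again.")
[/code]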

As for limitations, when you've got 20 people committing (and resolving conflicts) on a single repo, and expect to iterate at this speed, you're bound to encounter some issues down the road. They vary... I wish I had a great example of this right now, but somehow I don't.
You should definitely try it out for yourself; I think with good production structures, this can definitely benefit devs. That said, I don't see how this could apply to a one-person dev team.
Regarding speed, at my last job, when you wanted to commit some code, it would go into a queue on the build/test machines, which took about 15 minutes per item (or less if it failed), so when adding new code for someone else (e.g. an artist or designer) to use, that was the minimum delay. On milestone weeks with a lot of people committing, this queue would sometimes get up to about an hour.
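
To put numbers on that, here's a toy model of such a queue; the names, the pass/fail predicate, and the flat 15-minute cost are all invented for illustration:

[code]
from collections import deque

BUILD_MINUTES = 15  # rough per-item cost quoted above

def drain_queue(queue, passes):
    """Test queued changes in FIFO order; return (landed, wait_in_minutes)."""
    landed, wait = [], {}
    elapsed = 0
    while queue:
        change = queue.popleft()
        elapsed += BUILD_MINUTES   # build + test this item
        wait[change] = elapsed     # total time from queueing to result
        if passes(change):
            landed.append(change)  # gets committed to the repo
        # a failure is bounced back to its author and the queue moves on
    return landed, wait

# Milestone week: four changes queued ahead of yours already means over
# an hour before your result comes back.
queue = deque(["alice", "bob", "carol", "dave", "you"])
landed, wait = drain_queue(queue, passes=lambda name: name != "bob")
print(wait["you"])  # -> 75 (minutes, in this toy model)
[/code]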

[quote]As for limitations, when you've got 20 people committing (and resolving conflicts) on a single repo...[/quote]

[quote]Regarding speed, at my last job, when you wanted to commit some code, it would go into a queue on the build/test machines, which took about 15 minutes per item...[/quote]

Try 50+ developers and a build/test queue that takes upwards of an hour per item :)

Jenkins is truly a lifesaver in that situation, but it needs a dedicated team to keep it running, and a fair amount of developer discipline not to break everything.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


[quote]Try 50+ developers and a build/test queue that takes upwards of an hour per item[/quote]

Yeah, I worked on a 100-dev team at Ubisoft. It was hell, and everything was running off Perforce alone...


[quote]Jenkins is truly a lifesaver in that situation, but it needs a dedicated team to keep it running, and a fair amount of developer discipline not to break everything.[/quote]


Definitely agree with both.

