Accelerating code builds

15 comments, last by MJP 10 years, 11 months ago

Hi everyone,

I searched the forum a bit, and I bumped into this software called Incredibuild by Xoreax.

Does it actually work and save you lots of time? Can anyone provide some feedback or experience?

Thanks a lot
Eliot


Yes, it can potentially save a lot of time.

Individual items are distributed across the network, and each compilation unit is built on a separate machine.

For example, one machine builds ...\physics\PhysicsCollisionHelpers.cpp, another builds ...\physics\PhysicsKDTree.cpp, another builds ...\physics\PhysicsLooseQuadTree.cpp, and so on.

If you have a large project with a lot of build steps and a lot of files it can help drop a 20-minute build down to a 2-minute build.

It does introduce some overhead, and the time saved depends on the complexity of the build and the number of machines on the network available to participate.

A distributed build is just one of many possible ways to improve build times for C++'s archaic and slow compilation model.

Having used Incredibuild a fair amount, I'd say it is a mixed bag. For some things it is a great speedup, but as our usage grew, our network ended up getting slammed to its knees and things slowed down again. At that point, a serious revisit of includes, forward references, and related cleanup brought the build times without Incredibuild down to about half the time it took with it, at which point we got rid of Incredibuild and finally got serious about making people forward reference and include only the required items.

For our usage, it was just a crutch which only worked for a little while. We removed the problems and got rid of the crutch and were considerably happier all around.

yeah.

this also sort of brings up the good and bad points of "giant include files that include everything":

pro: convenient;

con: may put some hurt on the build times.

part of the build-time issue is often that a given translation unit contains maybe 250 lines to 1 kloc of its own code, but pulls in around 10 to 100 kloc of material from headers (so most of the compile time ends up going into churning through the headers).

the alternative is being more specific: limiting includes within source files to only what is needed to compile that particular source file, and includes within headers to only those things needed for the header's declarations to be valid.
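a minimal sketch of the idea (the file and type names here are made up purely for illustration):

    // PhysicsWorld.h (hypothetical) -- this header only uses Renderer and
    // CollisionShape by pointer/reference, so forward declarations are enough.
    #pragma once

    class Renderer;        // instead of #include "Renderer.h"
    class CollisionShape;  // instead of #include "CollisionShape.h"

    class PhysicsWorld {
    public:
        void DebugDraw(Renderer& renderer) const;
        void AddShape(CollisionShape* shape);
    private:
        CollisionShape* m_shapes[256];
        int m_shapeCount = 0;
    };

    // PhysicsWorld.cpp (hypothetical) -- only the source file pulls in the
    // full definitions, and only the ones it actually needs to compile:
    //   #include "PhysicsWorld.h"
    //   #include "Renderer.h"
    //   #include "CollisionShape.h"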

while faster, this does make things a little more work, and people often end up just copy-pasting a mess of "#include" lines from one file to another rather than working out exactly which headers contain the declarations a given source file actually needs.

a partial compromise seems to be to split the internal and external parts of the headers, such that parts which are only needed within a given library are kept separate from the public API.

for example, the renderer might internally need "gl/GL.h" or "windows.h" or whatever else, but this doesn't mean that everything which might potentially call into the renderer also needs them (and with them being omitted, a lot of this other code may build that much faster).
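a rough sketch of that split (the names are hypothetical):

    // Renderer.h (hypothetical public header) -- no windows.h or gl/GL.h in
    // here, so code that merely calls into the renderer doesn't pay for them.
    #pragma once

    struct RenderMesh;   // opaque to callers; defined in the internal header

    class Renderer {
    public:
        bool Init(int width, int height);
        RenderMesh* CreateMesh(const float* vertices, int vertexCount);
        void DrawMesh(RenderMesh* mesh);
    };

    // Renderer_Internal.h (hypothetical, included only by the renderer's own
    // .cpp files) -- this is where the heavyweight platform headers live:
    //   #include <windows.h>
    //   #include <gl/GL.h>
    //   struct RenderMesh { unsigned int vbo; int vertexCount; };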

a downside: this is a little more work to maintain, and the builds are still potentially a little slower.

on the upside, typically it still allows using unified per-library headers.

though, a person might still face other challenges to build time (for example, if using a "recursive make" build strategy or similar).

...

If the codebase is relatively small, use the "/MP" flag (under Project Properties, C/C++, General, "Multi-processor Compilation") in Visual Studio, which makes it use all of the cores in the CPU to compile the code.
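For reference, the same switch can be passed straight to the compiler if you build from the command line rather than the IDE; the wildcard below is just a placeholder for whatever source files you hand it:

    cl /MP /c *.cpp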

Other than that, you should forward declare pointer and reference types in your headers to avoid pulling lots of includes into the headers you use.

Precompiled headers can help as well, as can unity builds, although those are really a mixed bag; you should keep a single-file compile configuration around so you know your batching isn't breaking the single-file compile.
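In its simplest form a unity (batched) build is just a .cpp file that #includes a batch of other .cpp files, so the shared headers get parsed once per batch instead of once per file. A minimal sketch, reusing the physics file names from earlier in the thread as placeholders:

    // Unity_Physics.cpp (hypothetical batch file). The three .cpp files below
    // are excluded from the normal build and only this file gets compiled.
    #include "PhysicsCollisionHelpers.cpp"
    #include "PhysicsKDTree.cpp"
    #include "PhysicsLooseQuadTree.cpp"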

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

NightCreature83, at the company I work for we've been using IncrediBuild for quite some time now...

We believe that distributed builds like IncrediBuild and others are the only way to get the best build times without losing efficiency. If you are considering unity builds, be careful: they will make your debugging extremely difficult and your source code nearly impossible to maintain.

We used IncrediBuild with unity builds and still had 15-minute full rebuilds, and that was using 50 CPUs on a 1 Gbit network connection. I never really had any issues with debugging those builds, though. I know that leaking of globals happens and that "using namespace <x>" becomes worse, but then you should never use "using" directives anyway, as it is one of the devil's keywords (together with goto).
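To make that leakage concrete, here is the kind of thing that is fine when each file is compiled on its own, but breaks (deliberately shown here) once both files land in the same unity batch; the file and variable names are made up:

    // A.cpp
    #include <string>
    using namespace std;     // only affects A.cpp in a normal build
    static int s_count = 0;  // file-local in a normal build

    // B.cpp -- appended after A.cpp inside the unity batch file
    #include <vector>
    // in the batched build, the using-directive from A.cpp is still in effect
    // here, and A.cpp's file-scope s_count is visible too, so the line below
    // becomes a redefinition error instead of an independent file-local variable.
    static int s_count = 1;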

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

FWIW: 15 minutes for a full rebuild is still a little steep though.

in my case, I have around a 5-minute full rebuild time, granted, most of the code is C with some mostly-C-like C++ (at present the codebase is around 700 kloc). (this is still with a few "unwieldy header issues" though...).

in partial rebuilds, a lot of the time seems to go into 'make' walking the code tree; a trick for speeding this up was adding alternate target names that only try to rebuild certain parts of the codebase (rather than all of it). (looking at a clock, make walking over the project takes a little under 1 minute).

I am using MSVC for Windows builds though, partly as IME it does seem to be a fair bit faster at compiling stuff than GCC is. (though I still use 'make' with MSVC, and tend to use Visual Studio more as a debugger). (partly this is historical: originally I was mostly using MinGW and came to MSVC mostly originally via the Platform SDK...).

going and testing '/MP' in a few places... yeah, that makes things a fair bit faster... (tried putting it on a few parts of the engine that often take a while to rebuild, such as the 3D renderer...).

FWIW: 15 minutes for a full rebuild is still a little steep though.

Given the size of the code base, it really isn't...



FWIW: 15 minutes for a full rebuild is still a little steep though.

Given the size of the code base, it really isn't...
granted, I am not sure the size of the codebase in question.


but, I more meant in terms of absolute time, not the time relative to the codebase size.
15 minutes is a bit of a while to wait for something to compile.

much like rebuilding Doom3 from source, which IME took long enough to recompile (around 30 minutes) to mostly kill my motivation to mess with it.
my line counters showed the Doom3 codebase at around 900 kloc (note: raw kloc).

there is only about a 22% difference in code-size (from my project), but a much greater difference in terms of build-times.

This topic is closed to new replies.
