How Can I Reduce My Compile Times?

20 comments, last by TSlappy 13 years ago
Precompiled Headers just let multiple C++ files share a common set of header files so the compiler doesn't have to waste time reading, parsing, and compiling that code over and over and over.

Other approaches to reduce build times are to:

1) thread the build process and have machines with lots of cores do the building (this is all good until linking however)

2) Merge more cpp files together (Unreal's build tool does this, via temp files and #include'ing the cpps)

3) Redesign various subsystems to use a PIMPL design so the dependency on the subsystem can be reduced to an opaque pointer of some sort. This is conceptually very similar to using virtual functions in a class, except that instead of making the functions virtual, PIMPL makes the data virtual. As a bonus, the functions can be non-virtual if you don't need inheritance, so the penalty for the data abstraction is quite small. This approach really helps avoid full rebuilds when changing the implementation, as only interface changes will cause larger rebuilds. You can also think of it as 'make a C interface and use that, with a class wrapper helper to avoid polluting the global namespace'. The C interface for FILE via fopen/fclose is a PIMPL design, for instance. In addition, making a C interface can help with other features in other systems (Lua binding, etc.)

4) Keep template classes bare-bones or simplistic, since highly specialized templates (say, a file system wrapper or something else that could have a lot of platform-specific code inside) require pulling insane amounts of dependencies into the header in order to be able to do everything, and this hurts build times a lot.

5) Not including windows.h in files that don't need it can really help, but this can be difficult to accomplish at times, especially if you need the thread sync primitives (CRITICAL_SECTION, Events, etc.). It's possible to do a PIMPL design with classes that wrap the sync primitives and allocate them all from a heap or a pool of sync primitives, but you will lose out on some features (difficulty in getting arrays of Events for use with WaitForMultipleObjects, for instance).

6) Implement a scripting language, so you don't have to rebuild the C++ code for every little change.

7) Make as much of the codebase data driven as possible, again so you don't have to rebuild the C++ code for every little change.

8) If link times are long you could break the project up into DLLs for development, though inlined things like math libraries are still going to ripple out and rebuild everything. A game engine is far better off as a monolithic app, though; the DLL split could just be the difference between a shipping build and a development one.
http://www.gearboxsoftware.com/
6) Implement a scripting language, so you don't have to rebuild the C++ code for every little change.

I would recommend using AngelScript for that; it has forums here on gamedev.net :D. Awesome scripting language and easy to integrate.
If you say "pls", because it is shorter than "please", I will say "no", because it is shorter than "yes"
http://nightlight2d.de/
Use the PIMPL idiom. You code a public class and a private class. You keep all your private functions in the private class and expose only the public/protected parts in the public one. By doing this you minimise the headers in the public class, and using forward declarations will also help minimise the header lookup.

But also remember PIMPL is only useful for libraries; it's useless in main applications like exes.
Another simple and obvious one I used to do too often: including iostream in nearly every file. I actually rarely use it; mostly I just include string, vector, and map where needed, and I try to avoid windows.h, defining WIN32_LEAN_AND_MEAN when I can't.
more megahurtz.

Not a fan of precompiled headers. Often it is a dump of various headers and, if you are not careful, can lead to poor practices.

But usually: reduce dependencies in code (curbing the number of headers, especially the ones involving templates or nasty buggers like windows.h, and reducing their dependencies by using forward declarations, etc.). Being disciplined about it and not going 'I'll fix that later' helps as well, as it often just creeps in unnoticed (until you start slowing down again).

If you have big monolithic classes, that sometimes incurs a profusion of headers, and probably warrants a rethink of what that class is supposed to do. I can't remember the name of the tool, but there are some that help you track down dependencies and eliminate them. Sometimes it's not possible, and if something seems out of place, then your original design could be wrong in the first place.

Splitting a program into libraries is a way to force you to partition your code into logical units and reduce dependencies between modules. However, doing that after the fact is immensely painful. There is also a lot of time consumed maintaining the discipline, and it is easy to overdo it.

Everything is better with Metal.

You can minimise header parsing by creating one or more special cpp files that #include all the other cpps, plus a special macro so that those cpps compile to nothing when built on their own and everything is compiled through the combined file. Ok, example:

-- project.cpp
#define COMPILE_SPEEDUP
#include "a.cpp"
#include "b.cpp"

-- a.cpp
#ifdef COMPILE_SPEEDUP
...
#endif

-- b.cpp
#ifdef COMPILE_SPEEDUP
...
#endif

As a result, project.cpp compiles everything, and all headers are parsed just once. It certainly helped us.

System programmer at 2K Games

The fastest builds I've ever seen have been with unity builds.

http://buffered.io/2007/12/10/the-magic-of-unity-builds/

wickedly fast.

fastest builds I've ever seen have been with unity builds. [...] wickedly fast.

If this speeds up the builds, then your bottleneck is includes and include parsing. Precompiled headers would achieve a similar effect.

If your bottleneck is the linker, the above will have less of an impact.

fastest builds I've ever seen have been with unity builds. [...] wickedly fast.

Fast, yes. But it does come with side-effects.
-- gekko
I've heard about one more method to speed up compiling and linking. If you have a lot of source/header files and it's disk operations that are slowing you down, you could create a RAM disk and copy your project to it. Compiling from the RAM disk would eliminate the cost of disk read operations. I've never tried this myself, but I've heard it can shorten compile times quite a lot in some cases.
