C++ compile times

Started by
16 comments, last by Matt-D 11 years, 8 months ago
Hi

I only looked into this briefly, and it didn't seem like something only I am dealing with; hopefully it isn't.

I have noticed that with C++ projects (even CLR ones) build times increase as the project gets bigger, which makes sense I guess. This is in comparison to C#, where build times are exceptionally quick regardless of how big a project gets. For example, I am currently working on a project that's approximately 95% C# with multiple class libraries, control libraries and unit tests, and these all compile very quickly.

However, C++ projects take a while even when they contain very little code, regardless of whether I have pre-compiled headers on or not. Now, I understand that the more external dependencies I have, the longer it will take to compile (or at least that's what I thought the reason was), but I can't actually work out what makes C++ compilation slower. I always would have assumed C++ would compile faster than managed languages.

Assuming this is fairly normal and standard, what exactly causes C++ to compile noticeably slower than C#, regardless of whether I'm compiling a full project or an individual class? I remember having a similar issue with F# in VS 2010, but that isn't so much the case in 2012.

Oh, this is also in VS 2010/2012 with default project settings; not sure if that matters or not.
Are you using lots of templates yourself, or boost libraries that are essentially template libraries (i.e. all in header files)?

Template instantiation can be slow. If your own code uses lots of templates, try using explicit instantiation. Otherwise, if you're using something like boost::spirit et al., then you just have to expect long compile times. There are things you can do to mitigate it, but anything doing extensive template meta-programming is just really slow to build.
Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow. It will always be slow due to the design of the language.

For our C++ code base a full optimized rebuild takes about 3 hours; fortunately the app is segmented into libraries and such, so building a single library only takes about 4 minutes.

[quote name='frob']Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow. It will always be slow due to the design of the language.[/quote]


I'm not sure why you say that. There's work to add support for modules to C++.
As above, C++ is just designed badly with regards to compilation/linking when compared to newer languages.

Books like "Large-Scale C++ Software Design" will teach you how to mitigate long compile times with sensible use of the language.

MSVC in particular has a wonderful "incremental linking" option, which can drastically reduce your link times. However, it's extremely hard to convince VS to actually perform this task, even after enabling it (it usually just does nothing!). You've actually got to get rid of all your libraries and add all your source files into the main EXE's project, OR use the little-documented "Use library dependency inputs" option, which links against your library's input OBJ files instead of its resulting LIB file.
Check out this, and the previous entries: http://www.altdevblo...the-holy-grail/
[quote name='frob']For our C++ code base a full optimized rebuild takes about 3 hours[/quote]
Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH.
A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.
[quote]I'm not sure why you say that. There's work to add support for modules to C++[/quote]
Don't hold your breath. Agreement on the C++11 spec was only 10 years late...

[quote name='frob' timestamp='1344444404' post='4967434']For our C++ code base a full optimized rebuild takes about 3 hours
Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH.
A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.
[/quote]
No, our build times are quite reasonable for the games in our studio.

Since almost all development is done in the smaller modules, a rebuild of the module only takes about 4 minutes, which fits your coffee-break description. Most small edits take under a minute on the PC build, and when Edit and Continue is working, changes take effect instantly (yay Microsoft for Edit and Continue!). We also employ tools like Incredibuild so build times aren't painful for the programmers.

The full rebuild on the build servers is the worst-case scenario, and yes, it really does take about three hours. Each platform is done in parallel on the build servers; it gets run directly and doesn't use Incredibuild when built on the build machines. Optimizations on those final builds are turned up to maximum, including a bunch of incredibly slow program-wide optimizations.
Thanks for the replies. I just came across this http://stackoverflow.com/questions/318398/why-does-c-compilation-take-so-long#318440 which, oddly, I didn't spot when I was searching earlier.

I will look into "Large-Scale C++ Software Design" too.

At the moment, while compile times are noticeably slower, they are manageable; I would say about 20-30 seconds max, which isn't extreme compared to some of the build times I have read about.

[quote]I'm not sure why you say that. There's work to add support for modules to C++[/quote]

While that would be nifty, one of two things needs to happen if module support is added:

  1. Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.
  2. Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.

And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.

So yes, I suppose that it's possible in some far flung future for C++ to get module support that makes compilation times shorter. But what do you think its competition will be doing during that time?
In C++, each translation unit (typically one .cpp file) has to go through all of the standard's phases of translation (nine of them) over the .cpp file itself and all of the headers it includes. #including <iostream> results in roughly 40,000 lines of code that have to be processed.

Due to things like operator overloading, function overloading and templates, the syntax and semantics of a construct depend entirely on all of the code that comes before it, so there aren't many shortcuts a compiler can take.

Each time a file is #included it can result in different code being generated, so every include file has to be processed independently for every translation unit, and if it is included more than once in a translation unit, some of the processing has to be done multiple times for that translation unit even if you use #ifndef protection.

A 1000 line C++ program can easily result in millions of lines of code actually being processed by the compiler.

And then there are the various compiler-specific optimizations that may have to process the resulting data structures thousands of times.

While there are some things that could be done to the language to make it inherently faster to compile, I don't think it's a bad design. From the very beginning, C++ was meant to be a language that emphasizes runtime performance and flexibility over compile speed. Languages like C# make specific trade-offs of language features to improve build times. Neither is right or wrong; they're different tools for different jobs.

[quote]While that would be nifty, one of two things needs to happen if module support is added:

  1. Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.
  2. Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.

And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.[/quote]

I believe they've decided on the first option, supporting legacy compilation. The legacy issue is difficult, but I'm not sure there's a better way. There could be a compiler option, for those working on newer projects, that provides the faster build times.

