
C++ compile times


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

17 replies to this topic

#1 Memories are Better   Prime Members   -  Reputation: 769


Posted 08 August 2012 - 10:27 AM

Hi

I only briefly looked into this, and it didn't seem like something only I am dealing with; hopefully it isn't.

I have noticed that with C++ projects (even CLR ones) build times increase as the project gets bigger, which makes sense I guess. This is in comparison to C#, where build times are exceptionally quick regardless of how big a project gets. For example, I am currently working on a project that's approx 95% C# with multiple class libraries, control libraries and unit tests, and these all compile very quickly.

However, C++ projects take a while even when they contain very little, regardless of whether I have pre-compiled headers on or not. Now, I understand that the more external dependencies I have, the longer it will take to compile (or at least that's what I thought the reason was), but I can't actually work out what makes C++ compilation slower; I always would have assumed C++ to be faster to compile than managed languages.

Assuming this is fairly normal and standard, what exactly is it that causes C++ to compile noticeably slower than C#, regardless of whether I'm compiling a full project or an individual class? I remember having a similar issue with F# in VS 2010, but that isn't so much the case in 2012.

Oh, this is also in VS 2010/2012 with default project settings; not sure if that matters or not.


#2 jwezorek   Crossbones+   -  Reputation: 1798


Posted 08 August 2012 - 10:31 AM

Are you using lots of templates yourself or boost libraries that are essentially template (i.e. all in header files) libraries?

Template instantiation can be slow. If your own code is using lots of templates, try using explicit instantiation. Otherwise, if you're using something like boost::spirit et al., then you just have to expect long compile times. I mean, there are things you can do to mitigate it, but anything that does extensive template meta-programming is just really slow to build.

Edited by jwezorek, 08 August 2012 - 10:35 AM.


#3 frob   Moderators   -  Reputation: 20462


Posted 08 August 2012 - 10:46 AM

Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow, and they will always be slow due to the design of the language.

For our C++ code base a full optimized rebuild takes about 3 hours; fortunately the app is segmented into libraries and such, and building a single library only takes about 4 minutes.

Edited by frob, 08 August 2012 - 10:48 AM.

Check out my personal indie blog at bryanwagstaff.com.

#4 wood_brian   Banned   -  Reputation: 197


Posted 08 August 2012 - 11:00 AM

Many books have been written about improving C++ build times. Exactly which of the thousands of slow patterns your code uses is something we cannot tell. Many powerful and useful design patterns have slow compile times in C++ compilers.

C++ compile times are slow, and they will always be slow due to the design of the language.


I'm not sure why you say that. There's work to add support for modules to C++


#5 Hodgman   Moderators   -  Reputation: 29671


Posted 08 August 2012 - 11:02 AM

As above, C++ is just designed badly with regard to compilation/linking when compared to newer languages.

Books like "Large-Scale C++ Software Design" will teach you how to mitigate long compile times with sensible use of the language.

MSVC in particular has a wonderful "incremental linking" option, which can drastically reduce your link times; however, it's extremely hard to convince VS to actually perform this task, even after enabling it (it usually just does nothing!). You've actually got to either get rid of all your libraries and add all your source files into the main EXE's project, OR use the little-documented "Use library dependency inputs" option, which links against your libraries' input OBJ files instead of their resulting LIB files.
Check out this, and the previous entries: http://www.altdevblo...the-holy-grail/

For our C++ code base a full optimized rebuild takes about 3 hours

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH.
A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

I'm not sure why you say that. There's work to add support for modules to C++

Don't hold your breath. Agreement on the C++11 spec was only 10 years late...

Edited by Hodgman, 08 August 2012 - 11:08 AM.


#6 frob   Moderators   -  Reputation: 20462


Posted 08 August 2012 - 11:32 AM

For our C++ code base a full optimized rebuild takes about 3 hours

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH.
A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

No, our build times are quite reasonable for the games in our studio.

Since almost all development is done in the smaller modules, a rebuild of the module only takes about 4 minutes, which fits your coffee break description. Most small edits take under a minute on the PC build, or when Edit and Continue is working it takes effect instantly (Yay Microsoft for Edit and Continue!) We also employ tools like Incredibuild so build times aren't painful for the programmers.

The full rebuild on the build servers is the worst-case scenario, and yes, it really does take about three hours. Each platform is built in parallel on the build servers; the compiler is run directly, without Incredibuild, on those machines. Optimizations on those final builds are turned up to maximum, including a bunch of incredibly slow program-wide optimizations.
Check out my personal indie blog at bryanwagstaff.com.

#7 Memories are Better   Prime Members   -  Reputation: 769


Posted 08 August 2012 - 11:48 AM

Thanks for the replies. I just came across this http://stackoverflow.com/questions/318398/why-does-c-compilation-take-so-long#318440 which, oddly, I didn't spot when I was searching earlier.

I will look into "Large-Scale C++ Software Design" too.

Atm, while compile times are noticeably slower, they are manageable; I would say about 20-30 seconds max, which isn't extreme compared to some of the build times I have read about.

#8 Telastyn   Crossbones+   -  Reputation: 3726


Posted 08 August 2012 - 12:31 PM

I'm not sure why you say that. There's work to add support for modules to C++


While that would be nifty, one of two things needs to happen if module support is added:
  • Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.
  • Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.
And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.

So yes, I suppose that it's possible in some far flung future for C++ to get module support that makes compilation times shorter. But what do you think its competition will be doing during that time?

#9 krippy2k8   Members   -  Reputation: 646


Posted 08 August 2012 - 04:10 PM

In C++, each translation unit (typically one .cpp file) has to go through several phases of processing over the .cpp file itself and all of the headers that are included. #including iostream results in about 40,000 lines of code that have to be processed.

Due to things like operator and function overloading and templates, the syntax and semantics of a construct depend entirely on all of the code that comes before it, so there aren't many shortcuts a compiler can take.

Each time a file is #included it can result in different code being generated, so every include file has to be processed independently for every translation unit; and if it is included more than once in a translation unit, some of the processing has to be done multiple times for that translation unit even if you use #ifndef protection.

A 1000 line C++ program can easily result in millions of lines of code actually being processed by the compiler.

And then there are the various compiler-specific optimizations that may have to process the resulting data structures thousands of times.

While there are some things that could be done with the language to make it inherently faster, I don't think it's a bad design. From the very beginning, C++ was meant to be a language that emphasizes runtime performance and flexibility over compiler performance. Languages like C# make specific trade-offs of language features to improve build times. Neither is right or wrong; they're different tools for different jobs.

#10 wood_brian   Banned   -  Reputation: 197


Posted 08 August 2012 - 08:54 PM

While that would be nifty, one of two things needs to happen if module support is added:

  • Legacy compilation is supported. Then you're still in largely the same boat. The compiler can't make shortcuts because it needs to deal with the legacy gotchas that make it horrible today.
  • Legacy compilation is not supported. Then the expansive pile of existing code that is one of C++'s few strengths is useless. Then you get to wait N years for all of the library writers to modularize their code.
And that's after all of the spec writers settle on a good way to bolt modules onto the language (while still supporting template behaviors? good luck), and after all of the compiler writers actually implement the behavior in something vaguely resembling a standard way.


I believe they've decided on the first option in terms of supporting legacy compilation. The legacy issue is difficult, but I'm not sure there's a better way. There could be an option in the compiler, if you're working on a newer project, to provide the faster build times.

Edited by rip-off, 09 August 2012 - 01:24 PM.
Removed off topic remarks.


#11 NightCreature83   Crossbones+   -  Reputation: 2752


Posted 09 August 2012 - 01:50 AM


For our C++ code base a full optimized rebuild takes about 3 hours

Is that debug/development/shipping builds for 3 different platforms? That sounds ridiculous TBH.
A rebuild for a single target shouldn't grow longer than a coffee break ;) if it does, your lead needs a good prodding to fix things.

No, our build times are quite reasonable for the games in our studio.

Since almost all development is done in the smaller modules, a rebuild of the module only takes about 4 minutes, which fits your coffee break description. Most small edits take under a minute on the PC build, or when Edit and Continue is working it takes effect instantly (Yay Microsoft for Edit and Continue!) We also employ tools like Incredibuild so build times aren't painful for the programmers.

The full rebuild on the build servers is the worst-case scenario, and yes, it really does take about three hours. Each platform is built in parallel on the build servers; the compiler is run directly, without Incredibuild, on those machines. Optimizations on those final builds are turned up to maximum, including a bunch of incredibly slow program-wide optimizations.

What's the reason for not using Incredibuild on those machines? Building a release or close-to-disc version of your game should be part of your build monitoring process.

Languages like C# make specific trade-offs of language features to improve build times. Neither is right or wrong, they're different tools for different jobs.

Also, most of the managed languages that need to be compiled actually do this whilst you are writing the code. This is what makes C# compilation so blindingly fast.

Edited by NightCreature83, 09 August 2012 - 01:54 AM.

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#12 krippy2k8   Members   -  Reputation: 646


Posted 09 August 2012 - 04:31 AM

Also, most of the managed languages that need to be compiled actually do this whilst you are writing the code. This is what makes C# compilation so blindingly fast.


C# compilation is still very fast without this. I use an embedded C# compiler in my game engine to build cs files on launch and I can build a whole directory full of dozens of cs files in less than 20 seconds.

(OT) XCode 4 builds C++ code while you're writing it, and it's really annoying, especially since they decided this feature was so great that there was no need to be able to initiate a compile of an individual source file in a project any more :/ I had to buy a new top of the line Mac Pro just so I wouldn't have to wait 10 minutes every time I tweaked a template method to make sure it wasn't going to break my build.

Edited by krippy2k8, 09 August 2012 - 04:33 AM.


#13 eppo   Crossbones+   -  Reputation: 2406


Posted 09 August 2012 - 04:45 AM

The /MP 'build with multiple processes' switch can speed compilation up dramatically. It doesn't work with incremental rebuilds, though.

Edited by eppo, 09 August 2012 - 04:45 AM.


#14 Nitage   Members   -  Reputation: 810


Posted 09 August 2012 - 05:20 AM

The dominant reason for slow build times is the preprocessor, no question. The preprocessor's #include directive can mean that compiling a 100-line file requires processing several orders of magnitude more code, and #if... directives make it very hard to cache the results of processing a file.

There are also other issues; C++ compilers do more than C# ones. Some parts of C# compilation are deferred until runtime (JIT), and the Microsoft C# compiler doesn't optimize nearly as aggressively as their C++ compiler or g++. For example, it never performs tail-call optimization (there's an IL 'tail' instruction, which the F# compiler emits but the C# one doesn't). This is minor in comparison to the preprocessor issue, though.

#15 Telastyn   Crossbones+   -  Reputation: 3726


Posted 09 August 2012 - 06:32 AM

The dominant reason for slow build times is the preprocessor, no question.


C# has a preprocessor with #if as well. It's limited, but it does not have a _huge_ impact on compile times.

#include semantics are a larger culprit.

#16 Hodgman   Moderators   -  Reputation: 29671


Posted 09 August 2012 - 06:47 AM

C# has pre-processor #if as well. It's limited, but does not have a _huge_ impact on compile times.

Huh, I didn't know that. Limited compared to C, but still useful. Learn something new every day!

#include semantics are a larger culprit.

Yep, which is why forward declarations, PIMPL-type idioms, and tools like Header Hero are extremely important on large projects.

It's all too easy for your preprocessed .cpp files to blow out to 100,000 lines of code. However, sensible use of the language can stop this from happening.

#17 dougbinks   Members   -  Reputation: 484


Posted 09 August 2012 - 10:06 AM

A good number of methods for speeding up compile times have already been mentioned above. Pre-compiled headers, unity builds, and virtual functions can also help, along with distributed compilation using distcc or Incredibuild. Although I've seen full builds of several hours at a few game studios, we've usually been able to get this down to 20 minutes or so, with the majority of changes only needing a few minutes.

The shameless plug here is that compile times are one of the reasons some friends and I developed the Runtime Compiled C++ open source framework.

#18 Matt-D   Crossbones+   -  Reputation: 1451


Posted 09 August 2012 - 01:49 PM

Take a look at the "Modules in C++" proposal by Daveed Vandevoorde (quoted below); perhaps it will be able to address this issue in the future (it also addresses the backward-compatibility issues and a smooth transition from the #include world): http://www.open-std..../2012/n3347.pdf

Regarding the timing:
"There's a lot of new things to look forward to in the upcoming versions of the C++ standard. Here's a high-level overview of what transpired in Kona and what I personally think would be nice to see in the next version(s) of C++. I mention versions because the committee has decided that we'll be working on getting a short-term update to the standard tentatively shooting for 2017 (C++1y or C++17) and a long-term update aiming for 2022 (C++22).
. . .
Modules -- not dynamic libraries (although those would be nice to have too) but more on logical grouping of symbols for replacing header files and #include. There are two different proposals on Modules for C++ and there's a Study Group formed to address this particular issue. This looks to be one of the things that might be making it into C++17."

http://www.cplusplus...-libraries.html


4.1 Improved (scalable) build times


Build times on typical evolving C++ projects are not significantly improving as hardware and compiler performance have made strides forward. To a large extent, this can be attributed to the increasing total size of header files and the increased complexity of the code it contains. (An internal project at Intel has been tracking the ratio of C++ code in ".cpp" files to the amount of code in header files: in the early nineties, header files only contained about 10% of all that project's code; a decade later, well over half the code resided in header files.) Since header files are typically included in many other files, the growth in build cycles is generally superlinear with respect to the total amount of source code. If the issue is not addressed, it is likely to become worse as the use of templates increases and more powerful declarative facilities (like concepts, contract programming, etc.) are added to the language.

Modules address this issue by replacing the textual inclusion mechanism (whose processing time is roughly proportional to the amount of code included) by a precompiled module attachment mechanism (whose processing time, when properly implemented, is roughly proportional to the number of imported declarations). The property that client translation units need not be recompiled when private module definitions change can be retained.

Experience with similar mechanisms in other languages suggests that modules therefore effectively solve the issue of excessive build times.


See also:
http://stackoverflow.../modules-in-c0x
http://permalink.gma...st.devel/228346
http://herbsutter.co...ndards-meeting/

Edited by Matt-D, 09 August 2012 - 02:03 PM.




