40 seconds to compile a .cpp file - is it totally normal?


Finally, if you still feel you're suffering from long compile times, you can modularize your project into standalone modules. A log system, for example, is a good candidate to be excluded from the main project.

Separate them into their own static library projects and then just use their public interface and let the linker do the rest of the work. Static libraries don't produce additional overhead because they are statically linked together with the rest of your code, and Visual Studio is good at deciding what you actually used, so it only links the code used in your final assembly.

This has the advantage, if you turn incremental builds on, that you only recompile the projects with changes and keep everything else from the previous compiler run. So if you work on your game, for example, only your game gets recompiled while the Log module remains the same, saving the compile time for that code. I expect the linker to be faster than the compiler because it doesn't have to struggle with macros, templates and so on.
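A minimal sketch of what such a split might look like (the module name, file names and API here are invented for illustration, not taken from any real project):

```cpp
// ---- Log/Log.h : the static library's public interface ----
// Consumers only ever #include this header, so it stays small
// and cheap to parse.
#pragma once
#include <string>

namespace log_module {
    void Write(const std::string& message);
}

// ---- Log/Log.cpp : compiled once into the static library ----
// Heavy includes (<fstream> here) live only in the .cpp,
// never in the public header.
#include <fstream>

void log_module::Write(const std::string& message) {
    std::ofstream file("game.log", std::ios::app);
    file << message << '\n';
}
```

The game project then links against the resulting Log.lib; editing Log.cpp triggers a rebuild of that one library plus a relink, never a recompile of the game code.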

14 hours ago, lawnjelly said:

Probably made the mistake of #including windows.h... :D 

One of the tricks I use for non-template external libraries / functions that have 'expensive' includes is to wrap them in your own .h / .cpp file. That way you only include your own .h to use the functions, rather than the third-party header. Yours can be clean and fast to compile, even if the external one is rubbish.
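For example, a sketch of the wrapper pattern (file and function names made up for illustration):

```cpp
// ---- win_wrap.h : the only header the rest of the project sees ----
// No <windows.h> in here, so including it is cheap.
#pragma once
#include <string>

void ShowErrorBox(const std::string& text);

// ---- win_wrap.cpp : the one file that pays for <windows.h> ----
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

void ShowErrorBox(const std::string& text) {
    MessageBoxA(nullptr, text.c_str(), "Error", MB_OK | MB_ICONERROR);
}
```

Every other .cpp in the project includes only win_wrap.h and never sees the expensive header.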

Putting windows.h in the precompiled header also helps :)
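A precompiled header for this might look something like the sketch below (contents illustrative; in Visual Studio the file is conventionally pch.h or stdafx.h, built once with /Yc and reused with /Yu):

```cpp
// pch.h -- compiled once, then reused by every .cpp that includes it first.
#pragma once

#define WIN32_LEAN_AND_MEAN   // trim the bulk of windows.h
#define NOMINMAX              // avoid the min/max macro clashes
#include <windows.h>

// Large, rarely-changing standard headers also belong here.
#include <string>
#include <vector>
#include <unordered_map>
```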

52 minutes ago, Shaarigan said:

Finally if you still feel to suffer in compile time, you can modularize your project into standalone modules. Log-System for example is a good candidate to be excluded from the main project.

 

I would actually recommend going the other direction: making just a single .cpp file and putting everything else in headers.

The time required to fully compile a program is roughly proportional to (number of compilation modules)×(number of lines of code per compilation module), where a "compilation module" is a .cpp file plus all of the headers it includes.  There are two possible strategies for reducing compile times: reducing the number of compilation modules and reducing the size of the compilation modules.

There's a lot you can do to reduce the number of lines of code per compilation module, but it's ultimately a losing battle.  Just including a few headers from the standard library is going to add hundreds of thousands of lines to each compilation module.  Any template you use is going to be replicated across every compilation module that uses it.
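To put some illustrative numbers on that (mine, not measured from any real project): 50 .cpp files that each expand to roughly 300,000 lines after preprocessing mean the compiler front end processes about 50 × 300,000 = 15,000,000 lines per full build, while a single combined module would process those 300,000 lines only once.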

On the other hand, reducing the number of compilation modules is easy, straightforward, and effective.  If there's just one compilation module, then every header file that your project uses is only going to be included once, and every template is going to be instantiated just once.

Just don't try to mix and match these two approaches.  When trying to minimize an A×B problem, it is most effective to get either A or B as low as it can possibly go, and more or less ignore the other one.  If you have just two compilation modules instead of just one, you're going to have compile times about twice as long as if you went all the way.

9 minutes ago, a light breeze said:

I would actually recommend going the other direction: making just a single .cpp file and putting everything else in headers.

This is how unity builds work, and they are awesome (in fact you can have lots of .cpp files, you just include them all from one (or a few) .cpp file which is passed to the compiler; the effect is the same). It can still be a good idea to use a few different modules with unity builds though, so you get most of the benefits of both approaches.
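Concretely, the file handed to the compiler can be as simple as this (file names made up for illustration):

```cpp
// unity_core.cpp -- the only .cpp in this module that the build
// actually compiles. Every header gets parsed once and every
// template instantiated once, instead of once per source file.
#include "audio.cpp"
#include "input.cpp"
#include "physics.cpp"
#include "renderer.cpp"
```

The individual .cpp files are excluded from the build themselves, so nothing is compiled twice.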

Even with unity builds, even though the headers may only be included once, they can still take a while if you are impatient like myself, so having wrappers can still be a good thing for third-party stuff.

As a project gets bigger, the link time becomes more of an issue. My current preference is to have a bunch of modules and dynamically link them for fast iteration, and statically link for the final release. That way, if you touch a file in 1 module out of 10, it only needs to link 1/10 of the codebase. I suspect it may be even less than that in terms of linking time (link time might not scale linearly with project size).

As an example, I recently experimented with making unity builds of Godot, which decreased the full build time:

https://github.com/lawnjelly/godot_SCU

Unfortunately, for everyday developer iteration the link time is more of a rate-limiting step because it is all built as one executable. I.e. you touch a file and it can compile super fast, but it still takes 30 seconds to link. With dynamic linking this could be reduced to e.g. a couple of seconds.

Quote

Why would you parameterize that with a macro, when you can just isolate that implementation detail into its own .CPP?

Your implementation looks quite nice, although really all I was trying to show was an example of wrapping an external library, rather than selecting between libraries; perhaps I shouldn't have used windows.h as an example(!).

Having the macro in the header file is to ensure that the call into the third-party library is in the same translation unit. Compilers used to be rubbish at inlining anything that wasn't in a header file, but with link time optimization that is less of an issue now. I still prefer it out of habit (plus link time optimization is SLOOWWW :D ).
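A hedged reconstruction of the idea (not lawnjelly's actual code; the macro and function names are invented): the macro selects the backend, and because the forwarding function is defined in the header, the call into the backend is compiled in the caller's translation unit and can be inlined without LTO.

```cpp
// timer.h -- illustrative sketch only.
#pragma once

#ifdef TIMER_USE_CHRONO
    #include <chrono>
    // Defined in the header, so the call sits in the same
    // translation unit as the caller and inlines without LTO.
    inline long long TimeNowMicroseconds() {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            steady_clock::now().time_since_epoch()).count();
    }
#else
    // A platform backend (e.g. QueryPerformanceCounter) would go
    // here; stubbed out to keep the sketch self-contained.
    inline long long TimeNowMicroseconds() { return 0; }
#endif
```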

This is what my statement is about: you can put everything into a unity file, but that also means rebuilding your whole architecture every time you change just a single line of code. If you can modularize your project, setting up the Log System, Resource Manager or whatever once and never touching them again, why would you want to put them into your unity file too? Instead, compile them once and use a clever tool like Visual Studio that knows when to recompile. No need to process things that were never touched.

@lawnjelly's suggestion of using DLLs/SOs during development and static libraries for release is a good addition.

There's a handy Visual Studio compiler flag called /d2cgsummary which can help diagnose where the compile time is going:

https://aras-p.info/blog/2017/10/23/Best-unknown-MSVC-flag-d2cgsummary/

Wow, a lot of people helped, many thanks.  I'll also follow the links and valuable advice many of you provided.  :)

Now my compile+link time is not intolerable anymore (improved by 30-50%), probably because I:

  1. cleanly reinstalled Windows 7
  2. used Visual Studio 2019 (instead of 2017)
  3. disabled clang-tidy
  4. disabled the TortoiseHg menu server (because I noticed it used too much memory)
  5. tried not to use Firefox to browse any RAM-hungry site (e.g. gamedev?) while compiling

 

I think it is totally normal (it depends a lot on the include files), but it should not grow rapidly as you add more code to the project.

Anyway, if your system happens to have an SSD, try moving the project and the Visual Studio / Windows SDK installation onto that drive. It should help.

http://9tawan.net/en/

16 hours ago, mr_tawan said:

Anyway, if your system happens to have an SSD, try moving the project and the Visual Studio / Windows SDK installation onto that drive. It should help.

Thanks for teaching me how to use an SSD.  I will consider buying one too.  :)

I did switch to SSD for the same reason, but in my case it did not help with compile times.

This topic is closed to new replies.
