Article on improving MSVC compile time speed

7 comments, last by lawnjelly 6 years, 11 months ago

After 17 years of working in games, I've recently written an article on some tests to improve C++ compilation speed in MSVC 2015.

I managed to get compile times down from 13.1 to 2.1 seconds and have documented the steps.

Let me know what you think. Happy to answer questions either here or on the blog there.

http://weaseltrongames.blogspot.co.uk/2017/05/compiler-investigations.html

Cheers!

Adrian

www.weaseltron.com
@WeaseltronGames
https://twitter.com/WeaseltronGames
http://weaseltrongames.blogspot.co.uk/


tl;dr I managed to get my compile times down from around 13.1 seconds to 2.1 seconds. That's an 84% reduction!


I would love to have compile times of 13.1 seconds in a C++ codebase. Most C++ codebases I've worked with have had compile times of more than 13 minutes. I've worked with a couple that took over an hour to compile from a full rebuild, and one where link times were upwards of 7 minutes on average.

*Potentially disable Windows Defender. You (probably) shouldn't be looking at porn at work anyway.


I'll add that it's possible to have Defender still running, but prevent it from checking certain folders.

*Stop #including massive header files from your header files. You're being silly and upsetting your coworkers.


This is a good one to call out. Include only what you use, forward declare everything you can. Precompiled headers are more trouble than they're worth on large projects.
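
To make that concrete, a minimal sketch (file and class names invented for the example) - the header keeps only forward declarations, and the full #includes move into the .cpp:

```cpp
// renderer.h - forward declarations are enough for pointer
// and reference parameters, so no heavy #includes here
#pragma once

class Mesh;
class Texture;

class Renderer
{
public:
    void Draw(const Mesh& mesh, Texture* texture);
};
```

```cpp
// renderer.cpp - the complete types are only needed here
#include "renderer.h"
#include "mesh.h"
#include "texture.h"

void Renderer::Draw(const Mesh& mesh, Texture* texture)
{
    // ... implementation uses the full definitions
}
```

Every .cpp that includes renderer.h now avoids parsing mesh.h and texture.h.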

On IncrediBuild and build parallelization - I've found empirically that performance tends to be I/O bound, especially when linking. Going to IncrediBuild is actually counter-productive if your hard disk can't handle the load. Having your code on an SSD seems like a must if you're going for a really heavy parallel build.

Hi and thanks for the comments.

Yeah, I'm used to much longer compile times, but it's impractical to recompile so many times on a project that takes minutes or hours to build. For this I was most interested in finding the potential impact of various changes. It's true, though, that the bottlenecks in a larger codebase will differ from those in a smaller project. I find the link times the most galling on large projects, but that's the subject of another post.

Good point about having excluded directories for Defender. I'll run some more tests and update - as obviously not all the files it checks will be in the project directory (.exe, .dlls, registry, whatever else).

I strongly suspect, for the reasons you mention and the reasons in the post, that #includes in headers make a massive difference in larger codebases. A single include in the wrong place could badly hurt build performance, though this didn't show up so much in my project.

Precompiled headers - I wanted to just test having the system includes in there. IIRC MSVC won't let you maintain multiple precompiled headers in a single project ... which is just silly.
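
For reference, something along these lines is all I had in mind - one precompiled header holding just the stable system includes (contents illustrative; in MSVC, /Yc"stdafx.h" on one .cpp creates the PCH and /Yu"stdafx.h" on the rest reuses it):

```cpp
// stdafx.h - hypothetical precompiled header: system/third-party
// includes only, nothing that changes during development
#pragma once

#include <windows.h>
#include <string>
#include <vector>
#include <memory>
```

Each .cpp then has to start with #include "stdafx.h" for MSVC to pick it up.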

Code would always be on an SSD for me - it's just historically been on the HDD, as a compile time of 13s is hardly a chore!

I've found unity builds to provide the most improvement to build times, since they tackle the biggest time sink (I/O) head-on by simply eliminating those redundant disk operations. Of course an SSD helps too, but the absence of I/O is always fastest ;)
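
For anyone unfamiliar, a unity build is nothing more exotic than a single translation unit that #includes your .cpp files (file names below are made up), so shared headers are parsed once instead of once per file:

```cpp
// unity.cpp - the only file handed to the compiler; the .cpp files
// it includes are excluded from the build themselves
#include "renderer.cpp"
#include "physics.cpp"
#include "audio.cpp"
#include "gameplay.cpp"
```

The usual caveat: file-local names (statics, anonymous namespaces) from different .cpp files now share one translation unit and can collide.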

IncrediBuild also helps a bit, but since it addresses build times from the CPU processing side, it really relies on having lots of idle agents sitting around to handle your requests. Even then, it maxes out somewhere around 20 cores per build (IIRC). However, *combining* it with unity builds is definitely a winning combination. We were once able to take a codebase that built in 90 minutes (vanilla Visual Studio build) down to 8 minutes. As a matter of fact, the full rebuild was sometimes faster than the incremental build...

How many lines of code are you testing this with? Do you use templates?

For reference, my game's hitting around 2mins full rebuild with 150k LOC.

@Zipster: To be clear, I didn't see any evidence that I was I/O bound, though I'd expect that to make more of a difference on large (game-sized) projects during the linking stage.

My experience is typically with AAA games, and I don't see a reason not to have your code on an SSD if you can (perhaps using a junction point to keep the game data, where performance often matters less, on the HDD). The biggest win for an SSD is with a cold cache, which is of course relevant.

IME, larger projects often have very long link times, and neither unity builds nor IncrediBuild help with those. On some projects I've compiled nearly as quickly on a many-core PC as on a large build farm.

I'll check LOC (when I find a consistent way of doing that). I don't make massive use of complex templates but there's an Array class that gets used extensively.
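
If that Array class ends up instantiated with the same handful of types in many translation units, explicit instantiation declarations (C++11's extern template) can cut the repeated work - a sketch, assuming Array is an ordinary header-only class template:

```cpp
// array.h - hypothetical stand-in for the Array class mentioned above
template <typename T>
class Array
{
public:
    void push_back(const T& value) { /* ... */ }
    // ...
};

// Tell every includer NOT to instantiate the common cases themselves
extern template class Array<int>;
extern template class Array<float>;
```

```cpp
// array_instantiations.cpp - instantiate the common cases exactly once
#include "array.h"

template class Array<int>;
template class Array<float>;
```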

How many lines of code are you testing this with? Do you use templates?

For reference, my game's hitting around 2mins full rebuild with 150k LOC.

It looks to be around 47k lines across 571 files, using the trick here: https://yomotherboard.com/how-to-get-total-line-count-in-visual-studio-2013/ to count the lines of code...

I'll add that it's possible to have Defender still running, but prevent it from checking certain folders.

I just updated the article with this. It saves some, but not most, of the time. On one test the compile time went from 12.2s (base) to 11.7s (saves 0.5s) with the project directory excluded, vs 9.7s (saves 2.5s) with Defender off entirely.

I'd be interested to see which virus checker makes the least difference.

Aside from being careful with headers, nested includes etc., I too found the biggest win using unity builds - it changed my life. :lol:

For my current little game I'm getting 15 seconds for a full rebuild at ~150K LOC (according to the Visual Studio trick). About 2 seconds of that is my code; the rest is third-party stuff I couldn't get into the unity build, like Lua, plus linking.

One thing on linking: I normally favour statically linking the runtime libraries, but I found that linking against the dynamic runtime was faster, so I often do that for debug builds and statically link for release.

If you can, building the parts of your code that are least likely to change as DLLs for development builds might help quite a bit in very large projects, in terms of speeding up iteration.
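
A minimal sketch of what that looks like in MSVC terms (the macro and function names here are invented for illustration):

```cpp
// engine_api.h - hypothetical export header for a stable "engine" module
// built as a DLL; the DLL project defines ENGINE_EXPORTS, consumers don't.
#pragma once

#ifdef ENGINE_EXPORTS
    #define ENGINE_API __declspec(dllexport)
#else
    #define ENGINE_API __declspec(dllimport)
#endif

ENGINE_API void EngineInit();
ENGINE_API void EngineShutdown();
```

Game code then links against the import library, so iterating on gameplay code no longer recompiles or relinks the engine module.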

