Finalspace

Unreal Engine 4 has been compiling for an hour...


Finalspace    1146

This is insane! I have been compiling Unreal Engine 4 in Visual Studio 2017 (NVIDIA FleX for 4.16) for over an hour now and it's still not done.

Sure, it's not the fastest computer in the world (i5 2500K running at 3.4 GHz with 4 cores, 16 GB RAM, GTX 1060 6 GB).

 

I don't understand why modern applications are not built in a way where you compile one translation unit per library/application and that's it - just include the .cpp files directly and use an "implementation" scheme.

As far as I can see from the compiler output, it is this slow even with parallel compilation and include file caching -.-

 

Seriously, C++ is a great language - but the way C++ sources are composed (header and source splitting) is totally broken in every way.

 

Look at the compile output. It's absolutely nuts, including the fact that the output is larger than Pastebin allows -.-

http://root.xenorate.com/final/ue4_16_flex_first_compile_insane.txt

Done:

48>Total build time: 52,91 seconds (Local executor: 51,45 seconds)

Insane... nothing more to say.

 

Just to see, I will compile it on my i7 rig (4 GHz, 8 cores, 16 GB RAM, GTX 970) too.


Scouting Ninja    3968

Strange - I used to build my own Unreal 4 from source and it compiled fast, and I was using my laptop at the time. It's much weaker than an i5, with an AMD FX 8120 8-core at 3.10 GHz.

Maybe there is another factor?

 

It was, however, slow when you first started it up. I stopped building after six updates; I had to make changes to my addons every time I updated. So now I have one old Unreal that is self-built and one that is installed. The download size is about the same, give or take a GB, so I switched to using plugins instead; I still have to update them each release.

Finalspace    1146

I cannot edit the initial post, so here is the actual question:

 

Why is it so slow?

 

My answer:

- There are just too many C++ files (11213 .cpp and 36520 .h files in the Unreal Engine 4 FleX edition), resulting in too many translation units.

- I am not even sure whether .obj file caching works; I see a lot of files being compiled multiple times...

- The Visual Studio IDE slows down the compiler :D

- The media center I compiled it on is very slow (i5, source stored on a non-SSD drive).


Archduke    178

Google around, there are flags that you can set to speed up compile times (at the cost of space for caching files). My build times are consistently under a minute though, even when I'm building from a clean project.

 

Edit: Are you talking about compiling the engine itself from source? Of course it takes forever, engines are big and complicated. You shouldn't have to do it more than once in a long while, though.
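For example, something along these lines - a rough sketch from memory, since the exact option names and the location of UnrealBuildTool's BuildConfiguration.xml differ between engine versions, so double-check against the docs for 4.16:

<?xml version="1.0" encoding="utf-8" ?>
<Configuration xmlns="https://www.unrealengine.com/BuildConfiguration">
  <BuildConfiguration>
    <!-- group many .cpp files into larger "unity" translation units -->
    <bUseUnityBuild>true</bUseUnityBuild>
    <!-- keep precompiled headers enabled (costs disk space for the cached files) -->
    <bUsePCHFiles>true</bUsePCHFiles>
  </BuildConfiguration>
</Configuration>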


Finalspace    1146
25 minutes ago, Archduke said:

Google around, there are flags that you can set to speed up compile times (at the cost of space for caching files). My build times are consistently under a minute though, even when I'm building from a clean project.

 

Edit: Are you talking about compiling the engine itself from source? Of course it takes forever, engines are big and complicated. You shouldn't have to do it more than once in a long while, though.

I am talking about compiling the entire engine, of course, which is required when you want to use NVIDIA FleX or other NVIDIA stuff in it.

Sure, it's a complicated thing, but any application which requires more than ~3-5 minutes for a full compile is a no-go. Complexity is no excuse for this.

 

The only thing I would accept as a reason for increased compile times is some kind of asset preprocessing going on, but when I look at this compilation output - it's just .cpp files everywhere.

Archduke    178
11 hours ago, Finalspace said:

I am talking about compiling the entire engine, of course, which is required when you want to use NVIDIA FleX or other NVIDIA stuff in it.

Sure, it's a complicated thing, but any application which requires more than ~3-5 minutes for a full compile is a no-go. Complexity is no excuse for this.

 

The only thing I would accept as a reason for increased compile times is some kind of asset preprocessing going on, but when I look at this compilation output - it's just .cpp files everywhere.

I have noticed some odd speed issues while (supposedly) compiling small cpp files in my own project. The build output will say that it's compiling a 20-line file with few includes, but it will hang for 10-20 seconds, then speed through the rest of the build. Either things are being processed that it isn't outputting status on, or there are improvements to be made.

cgrant    1826
On 6/18/2017 at 9:10 AM, Finalspace said:

Seriously, C++ is a great language - but the way C++ sources are composed (header and source splitting) is totally broken in every way.

Source code organization is not enforced by the language. This is an organizational decision. Still doesn't answer your question, but one can only speculate why your compile time is so slow. In my experience I find that heavily templated code usually compiles much slower than the non-templated version. I have no experience with UE4 source code so I don't know if this applies.
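To make the template point concrete, here is a generic illustration (nothing to do with the UE4 code base, so take it as an assumption about how the templates are used): every translation unit that uses std::vector<Widget> instantiates it again and the linker throws the duplicates away, so an explicit instantiation lets you pay that cost only once.

// ---- widget.h ----
#pragma once
#include <vector>

struct Widget { int id; float weight; };

// Explicit instantiation declaration: every includer is told NOT to
// instantiate std::vector<Widget> itself.
extern template class std::vector<Widget>;

// ---- widget.cpp ----
#include "widget.h"

// Explicit instantiation definition: the one place where the template
// is actually instantiated and compiled.
template class std::vector<Widget>;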

Finalspace    1146
20 hours ago, cgrant said:

Source code organization is not enforced by the language. This is an organizational decision. Still doesn't answer your question, but one can only speculate why your compile time is so slow. In my experience I find that heavily templated code usually compiles much slower than the non-templated version. I have no experience with UE4 source code so I don't know if this applies.

Yes, it is true that the language does not force you into a particular organization scheme, but 99% of all C++ applications are composed of thousands of small .cpp files which all get compiled into translation units separately. This process is much, much slower than compiling just a couple of translation units. Compiling one giant translation unit that includes tons of other .cpp files directly is much faster than compiling each file separately.

 

It's the same as when you upload thousands of small image files to your web storage - it's painfully slow, even when you upload 3-4 images at once. But uploading a single compressed archive containing all the image files is a lot quicker.


The only reason I can think of to avoid large translation units is some size limitation of the compiler itself, but I am not sure about that.

 

I am pretty confident that you can build applications much, much faster when you have just one translation unit per library/executable.

 

- Guard every .cpp file with an #ifndef block like this:

#include "physics.h"

// Include guard on the .cpp file itself, so its implementation is compiled
// only once even if several files pull it in.
#ifndef PHYSICS_IMPLEMENTATION
#define PHYSICS_IMPLEMENTATION

// ... implementation of everything declared in physics.h ...

#endif //PHYSICS_IMPLEMENTATION

 

- In the main translation unit for the executable or library:

// All source files are included directly into this translation unit exactly once.
// The order is important: if physics, for example, uses rendering, you have to include rendering first.
// If rendering requires physics, you have to add another layer between rendering and physics.
#include "rendering.cpp"
#include "physics.cpp"
#include "audio.cpp"

// stb_truetype does not include its implementation automatically; you have to
// define this constant before including the header file.
#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"

#include "assets.cpp"

// ...

 

- Set up your IDE/editor so that it compiles the main translation unit only. In Visual Studio, you change the item type of every other .cpp file to "C/C++ header".
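Roughly what that ends up looking like inside the .vcxproj (file names here are just placeholders): the main translation unit stays a ClCompile item, everything else becomes a ClInclude item, so only main.cpp is ever handed to the compiler.

<ItemGroup>
  <!-- the only file the compiler is invoked on -->
  <ClCompile Include="main.cpp" />
</ItemGroup>
<ItemGroup>
  <!-- item type switched to "C/C++ header": still part of the project, never compiled on its own -->
  <ClInclude Include="rendering.cpp" />
  <ClInclude Include="physics.cpp" />
  <ClInclude Include="audio.cpp" />
  <ClInclude Include="assets.cpp" />
</ItemGroup>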

 

That is all you need to get compilation done much faster. Try it out.

The only downside of this method is that you have to keep the include order and must not include .cpp files into other .cpp files directly - except in the main translation unit.

 

And yes, making heavy use of templates also increases compile time drastically. That's the reason why I use them very rarely - mostly for containers like pools and hash tables, or to replace nasty macros.
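A trivial example of what I mean by replacing a nasty macro (generic illustration only):

// The macro has no type checking and can evaluate an argument twice:
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// The template version is type-safe and evaluates each argument exactly once:
template <typename T>
T Max(T a, T b) {
    return a > b ? a : b;
}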


jpetrie    13149
1 hour ago, Finalspace said:

I am pretty confident that you can build applications much, much faster when you have just one translation unit per library/executable.

 

This is a terrible idea on any well-architected project of any scale, though. If you only compile a single TU which is effectively token-pasting (via the preprocessor) every other source file, then you're effectively recompiling the entire code base on any change. In other words, you're throwing away one of the very few advantages one can capitalize on in the C++ compilation model: separate compilation and linking. On modern platforms, discarding separation of TU compilation also tends to discard the ability to distribute compilation, further increasing compilation times.
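As a generic illustration (names made up, nothing UE4-specific): with a conventional header/source split, editing one .cpp recompiles only that translation unit and then relinks, because every other .obj file is still valid.

// math_utils.h - declarations only; other TUs depend on this interface.
#pragma once
float FastInvSqrt(float x);

// math_utils.cpp - editing this body recompiles math_utils.obj and relinks, nothing else.
#include "math_utils.h"
#include <cmath>
float FastInvSqrt(float x) { return 1.0f / std::sqrt(x); }

// main.cpp - untouched by the edit above, so its existing .obj is reused.
#include <cstdio>
#include "math_utils.h"
int main() { std::printf("%f\n", FastInvSqrt(2.0f)); return 0; }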

The single-TU approach is reasonable for small projects that want a simple (source-only) distribution mechanism to avoid binary file format distribution problems (a description that fits most of Sean Barrett's libraries, which you are referencing). Once a project grows beyond the point where any change should cause a full recompile, it starts to become exponentially less of a great idea.

Finalspace    1146
2 hours ago, jpetrie said:

This is a terrible idea on any well-architected project of any scale, though. If you only compile a single TU which is effectively token-pasting (via the preprocessor) every other source file, then you're effectively recompiling the entire code base on any change. In other words, you're throwing away one of the very few advantages one can capitalize on in the C++ compilation model: separate compilation and linking. On modern platforms, discarding separation of TU compilation also tends to discard the ability to distribute compilation, further increasing compilation times.

The single-TU approach is reasonable for small projects that want a simple (source-only) distribution mechanism to avoid binary file format distribution problems (a description that fits most of Sean Barrett's libraries, which you are referencing). Once a project grows beyond the point where any change should cause a full recompile, it starts to become exponentially less of a great idea.

Yes, it's true that it may only be usable for small to medium-sized projects, because it only works with full recompiles.

Also, while developing your software architecture you may end up recompiling everything anyway - especially when you don't know the full architecture yet and are fiddling with the entire code base.

I agree that proven, stable code which does not change a lot can live in its own translation units.

 

But I still think that a full compilation taking one and a half hours on a modern computer is unacceptable - even for large projects like Unreal Engine 4.


jpetrie    13149

Unreal isn't a very good example of C++ that is well-designed for efficient compilation. The techniques that they use to alleviate compilation speeds are more band-aids over legacy code and architecture than they are examples of good techniques to start from.

There are a ton of knobs and switches you can fiddle with in Unreal to tweak compilation models, which ones have you looked at? Normally I use Incredibuild with UE to distribute the compiles, but even when that is down for some reason, a full rebuild doesn't take me an hour and a half.

What unity build settings are you using?

frob    44971

For reference, a "unity build" is different from the Unity game engine.

It is one of the techniques to speed up builds, often giving dramatic performance increases when doing large code rebuilds.

It extends the idea of precompiled headers. With a precompiled header, instead of processing hundreds or possibly thousands of header files that are included through a seemingly endless chain of references and then re-processing those same files for the next compilation unit, all the macro expansion, file loading, preprocessing, parse trees, and other work is generated once, saved, and reused.
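A minimal generic sketch of that idea (not from any particular engine): one header gathers the heavy, rarely-changing includes, the compiler processes and saves it once, and every .cpp in the module starts with it so the saved state is reused.

// pch.h - the expensive, rarely-changing includes live here.
#pragma once
#include <vector>
#include <string>
#include <unordered_map>
#include <algorithm>

// any_file.cpp - every source file in the module starts with the PCH,
// so the compiler reuses the saved preprocessing and parsing work instead of redoing it.
#include "pch.h"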

With a unity build, you've got one file that has a #include for a bunch of .cpp files. This turns all those files into one enormous compilation unit. Incremental builds generally take much longer, because a change to any one file touches far more code than a regular compilation unit would, but for large builds and engine rebuilds it means fewer files are touched overall, there is less parsing and processing done in aggregate, and all the complicated structures and grammar rules stay in memory while a few hundred .cpp files are processed at once in a single process, instead of the compiler being launched for each .cpp file individually and all that state being dumped and re-processed every time.
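A sketch of what such a generated file looks like (file names made up for illustration):

// Module.unity.1.cpp - generated; the build system compiles this one file
// instead of launching the compiler once per .cpp below.
#include "AnimationSystem.cpp"
#include "CollisionQueries.cpp"
#include "NavMesh.cpp"
#include "ParticleEmitter.cpp"
// ... a dozen or so more .cpp files ...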

 

A full rebuild of the last project I was on, an AAA title, took just over five hours to rebuild the code for the optimized debug build when not distributed with IncrediBuild. The final gold build (with all optimization settings cranked up) took about a day when run on a single machine.

Finalspace    1146
12 hours ago, jpetrie said:

Unreal isn't a very good example of C++ that is well-designed for efficient compilation. The techniques that they use to alleviate compilation speeds are more band-aids over legacy code and architecture than they are examples of good techniques to start from.

There are a ton of knobs and switches you can fiddle with in Unreal to tweak compilation models, which ones have you looked at? Normally I use Incredibuild with UE to distribute the compiles, but even when that is down for some reason, a full rebuild doesn't take me an hour and a half.

What unity build settings are you using?

No settings; I was just building the target "Development Editor" - so you can run the editor directly from the IDE.

And I most likely won't use the UE4 source again, except for looking things up. I was just trying out the NVIDIA FleX thing.

 

43 minutes ago, frob said:

For reference, a "unity build" is different from the Unity game engine.

It is one of the techniques to speed up builds, often giving dramatic performance increases when doing large code rebuilds.

It extends the idea of precompiled headers. With a precompiled header, instead of processing hundreds or possibly thousands of header files that are included through a seemingly endless chain of references and then re-processing those same files for the next compilation unit, all the macro expansion, file loading, preprocessing, parse trees, and other work is generated once, saved, and reused.

With a unity build, you've got one file that has a #include for a bunch of .cpp files. This turns all those files into one enormous compilation unit. Incremental builds generally take much longer, because a change to any one file touches far more code than a regular compilation unit would, but for large builds and engine rebuilds it means fewer files are touched overall, there is less parsing and processing done in aggregate, and all the complicated structures and grammar rules stay in memory while a few hundred .cpp files are processed at once in a single process, instead of the compiler being launched for each .cpp file individually and all that state being dumped and re-processed every time.

 

A full rebuild of the last project I was on, an AAA title, took just over five hours to rebuild the code for the optimized debug build when not distributed with IncrediBuild. The final gold build (with all optimization settings cranked up) took about a day when run on a single machine.


Looks like a unity build is exactly what I am talking about: putting multiple source files into bigger translation units and compiling those instead. Next time I am working on a bigger project, I will definitely look into this.

 

Oh my gosh, one day for a fully optimized build? Insanity. But I am not a professional game developer, so maybe that's normal these days?

frob    44971

It all depends on the project.  

Games range from a single developer working alone on hobby titles, to teams of 5, 10, 30, or even 50 for more common games. Some reach into the hundreds of developers. For that game there were hundreds of developers spread across multiple studios, developing the product for several years.

And as I wrote, that type of non-distributed rebuild done on an individual workstation almost never happened. People would pull down intermediate files from build servers so they didn't need to build them locally, and when they did need to build parts, tools like IncrediBuild automatically distributed the build across several machines.

jpetrie    13149
9 hours ago, Finalspace said:

Looks like a unity build is exactly what I am talking about: putting multiple source files into bigger translation units and compiling those instead. Next time I am working on a bigger project, I will definitely look into this.

 

You usually don't put _all_ files into one TU, like you originally suggested; it's more like 10-15 source files, often grouped to achieve a uniform overall TU size, or something. Correctly choosing which files go into a unity build unit can have a huge impact on how successful the technique is.

In my experience, unity builds never outperform well-curated non-unity builds in terms of build performance in real-life, day-to-day operational scenarios. However, it costs time and effort to ensure you curate the TU dependencies of a non-unity build correctly (only include the minimum set of headers, forward declare whenever possible, et cetera) which is harder to maintain as a project goes on and especially when there are more than a handful of people working on it. I don't generally recommend single developers or small teams try to leverage unity builds, as they can usually shoulder the burden of optimal physical design of the code.
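A generic example of the forward-declaration part of that curation (file and type names made up): the header only needs to know that Mesh and Texture are class types, so it declares them instead of including their headers, and every TU that includes renderer.h avoids paying for those includes.

// renderer.h - no heavy includes: pointers and references only need the
// types to be declared, not fully defined.
#pragma once

class Mesh;      // forward declarations instead of
class Texture;   // #include "mesh.h" / #include "texture.h"

class Renderer {
public:
    void Draw(const Mesh& mesh, const Texture* overrideTexture = nullptr);
};

// renderer.cpp - only this file includes the full definitions.
#include "renderer.h"
#include "mesh.h"
#include "texture.h"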

For an alternative perspective, consider the physical design of the Our Machinery engine, which in theory produces extremely fast compilation for typical scenarios, but is pretty "out there."

swiftcoder    18437

"Kids these days..." :)

Back when I worked on Android, clean builds took 4 hours on a beefy developer desktop. Everybody was careful to work such that they only touched a few packages at a time, and could make do with incremental builds. If you had to do a clean build, you kicked it off as you left work, and hoped it succeeded by morning.

One of the rare joys of working up the stack (in JVM land) these days, is that dependencies are all delivered as compiled jars, and you never, ever have to perform a clean build from the ground up.

