Unreal Engine 4 has been compiling for an hour...

Started by Finalspace. 16 comments, last by swiftcoder 6 years, 10 months ago

This is insane! I have been compiling Unreal Engine 4 (NVIDIA FleX for 4.16) in Visual Studio 2017 for over an hour and it's still not done.

Sure, it's not the fastest computer in the world (i5 2500K running at 3.4 GHz with 4 cores, 16 GB RAM, GTX 1060 6 GB).

 

I don't understand why modern applications are not built in a way where you compile one translation unit per library/application and that's it: just include the .cpp files directly and use an "implementation" scheme.

As far as I can see from the compile output, it is this slow even with parallel compilation and include-file caching -.-

 

Seriously, C++ is a great language, but the way C++ sources are composed (header and source splitting) is totally broken in every way.

 

Look at the compile output. It's absolutely nuts, including the fact that the output is larger than Pastebin allows -.-

http://root.xenorate.com/final/ue4_16_flex_first_compile_insane.txt

Done:


48>Total build time: 52,91 seconds (Local executor: 51,45 seconds)

Insane... nothing more to say.

 

Just to see, I will compile it on my i7 rig (4 GHz, 8 cores, 16 GB RAM, GTX 970) too.


Strange. I used to build Unreal 4 from source myself and it compiled fast, and I was using my laptop at the time, which is much weaker than an i5: an AMD FX 8120 8-core at 3.10 GHz.

Maybe there is another factor?

 

It was, however, slow to start up at first. I stopped building it myself after six updates; I had to make changes to my add-ons every time I updated. So now I have one old self-built Unreal and one that is installed. The download size is about the same, give or take a GB, so I switched to using plugins instead; I still have to update them each release.

I cannot edit the initial post, so here is the actual question:

 

Why is it so slow?

 

My answer:

- There are just too many C++ files (11213 .cpp and 36520 .h files in the Unreal Engine 4 FleX edition), resulting in too many translation units.

- I am not even sure whether .obj file caching works; I see a lot of files compiled multiple times...

- The Visual Studio IDE slows down the compiler :D

- The media center I compiled it on is very slow (i5, source stored on a non-SSD drive).

Hmm, that's weird. I compiled it on my dev rig too, but it was only 50% faster. Also, the 28.72 seconds are wrong... it's minutes!

 

34>Total build time: 28,72 seconds (Local executor: 27,69 seconds)

Google around; there are flags that you can set to speed up compile times (at the cost of disk space for caching files). My build times are consistently under a minute though, even when I'm building from a clean project.

 

Edit: Are you talking about compiling the engine itself from source? Of course it takes forever; engines are big and complicated. You shouldn't have to do it more than once in a long while, though.
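
For UE4 specifically, UnrealBuildTool reads a BuildConfiguration.xml. A minimal sketch of the kind of flags I mean; treat the exact element names and the file's location as something to verify against your engine version:


<?xml version="1.0" encoding="utf-8" ?>
<Configuration xmlns="https://www.unrealengine.com/BuildConfiguration">
  <BuildConfiguration>
    <!-- Merge many .cpp files into larger unity files per module -->
    <bUseUnityBuild>true</bUseUnityBuild>
    <!-- Use precompiled headers (trades disk space for speed) -->
    <bUsePCHFiles>true</bUsePCHFiles>
  </BuildConfiguration>
</Configuration>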

25 minutes ago, Archduke said:

Google around; there are flags that you can set to speed up compile times (at the cost of disk space for caching files). My build times are consistently under a minute though, even when I'm building from a clean project.

 

Edit: Are you talking about compiling the engine itself from source? Of course it takes forever; engines are big and complicated. You shouldn't have to do it more than once in a long while, though.

I am talking about compiling the entire engine, of course, which is required when you want to use NVIDIA FleX or other NVIDIA tech in it.

Sure, it's a complicated thing, but any application that requires more than ~3-5 minutes for a full compile is a no-go. Complexity is no excuse for this.

 

The only thing I would accept as a reason for increased compile times is some kind of asset preprocessing, but when I look at this compilation output, it's just .cpp files everywhere.

11 hours ago, Finalspace said:

I am talking about compiling the entire engine, of course, which is required when you want to use NVIDIA FleX or other NVIDIA tech in it.

Sure, it's a complicated thing, but any application that requires more than ~3-5 minutes for a full compile is a no-go. Complexity is no excuse for this.

 

The only thing I would accept as a reason for increased compile times is some kind of asset preprocessing, but when I look at this compilation output, it's just .cpp files everywhere.

I have noticed some odd speed issues while (supposedly) compiling small .cpp files in my own project. The build output will say that it's compiling a 20-line file with few includes, but it will hang for 10-20 seconds, then speed through the rest of the build. Either things are being processed that it isn't reporting, or there are improvements to be made.

On 6/18/2017 at 9:10 AM, Finalspace said:

Seriously, C++ is a great language, but the way C++ sources are composed (header and source splitting) is totally broken in every way.

Source code organization is not enforced by the language; it is an organizational decision. That still doesn't answer your question, but one can only speculate about why your compile time is so slow. In my experience, heavily templated code usually compiles much slower than the non-templated version. I have no experience with the UE4 source code, so I don't know whether this applies.

20 hours ago, cgrant said:

Source code organization is not enforced by the language; it is an organizational decision. That still doesn't answer your question, but one can only speculate about why your compile time is so slow. In my experience, heavily templated code usually compiles much slower than the non-templated version. I have no experience with the UE4 source code, so I don't know whether this applies.

Yes, it is true that the language does not force you into a particular organization scheme, but 99% of all C++ applications are composed of thousands of small .cpp files which all get compiled as separate translation units. That is much, much slower than compiling just a couple of translation units. Compiling one giant translation unit that includes tons of other .cpp files directly is much faster than compiling each file separately.

 

It's the same as uploading thousands of small image files to your web storage: it's painfully slow, even when you upload 3-4 images at once, but uploading a single compressed archive containing all the image files is a lot quicker.


The only reason to avoid large translation units that I can think of is some size limitation in the compiler itself, but I am not sure about that.

 

I am pretty confident that you can build applications much, much faster when you have just one translation unit for each library/executable.

 

- Guard all .cpp files with an #ifndef block like this:


#include "physics.h"


#ifndef PHYSICS_IMPLEMENTATION
#define PHYSICS_IMPLEMENTATION
  
// ...
  
#endif //PHYSICS_IMPLEMENTATION

 

- In the main translation unit for the executable or library:


// All source files are included directly in this translation unit, exactly once.
// The order is important: if physics, for example, uses rendering, you have to include rendering first.
// If rendering also requires physics, you have to add another layer between rendering and physics.
#include "rendering.cpp"
#include "physics.cpp"
#include "audio.cpp"

// stb_truetype does not include the implementation automatically; you have to
// define this constant before including the header file.
#define STB_TRUETYPE_IMPLEMENTATION
#include "stb_truetype.h"

#include "assets.cpp"

// ...

 

- Set up your IDE/editor so that it compiles only the main translation unit. In Visual Studio you change the item type of every other .cpp file to C/C++ Header (see the hypothetical project fragment below).
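
In .vcxproj terms this roughly corresponds to the fragment below (hypothetical file names; MSBuild only hands ClCompile items to the compiler):


<ItemGroup>
  <!-- Only the main unit is compiled directly... -->
  <ClCompile Include="main_unit.cpp" />
  <!-- ...every other .cpp is marked as a header and never compiled on its own -->
  <ClInclude Include="rendering.cpp" />
  <ClInclude Include="physics.cpp" />
  <ClInclude Include="audio.cpp" />
</ItemGroup>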

 

That is all you may need to get compilation done much faster. Try it out.

The only downside of this method: you have to keep the include order straight, and you must not include .cpp files in other .cpp files directly, except in the main translation unit.

 

And yes, making heavy use of templates also increases compile time drastically. That's the reason why I use them very rarely: mostly for containers like pools and hash tables, or to replace nasty macros.
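
When a template really is needed in many translation units, one mitigation I know of is explicit instantiation with extern template (C++11). A minimal sketch (HashTable is a made-up example type):


// hash_table.h (hypothetical container template)
template <typename T>
struct HashTable {
    void insert(const T &value) { /* ... */ }
};

// Normally every .cpp that uses HashTable<int> instantiates it again and the
// linker throws the duplicates away. This declaration suppresses that:
extern template struct HashTable<int>;

// hash_table.cpp then contains the single explicit instantiation:
// template struct HashTable<int>;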

1 hour ago, Finalspace said:

I am pretty confident that you can build applications much, much faster when you have just one translation unit for each library/executable.

 

This is a terrible idea on any well-architected project of any scale, though. If you only compile a single TU which is effectively token-pasting (via the preprocessor) every other source file, then you're effectively recompiling the entire code base on any change. In other words, you're throwing away one of the very few advantages one can capitalize on in the C++ compilation model: separate compilation and linking. On modern platforms, discarding separation of TU compilation also tends to discard the ability to distribute compilation, further increasing compilation times.
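
To make that concrete, here is a minimal hypothetical makefile: touch physics.cpp and only one compile step re-runs before the link, while the single-TU scheme recompiles every source file:


# Hypothetical makefile; file names match the earlier example.
OBJS = rendering.o physics.o audio.o

game: $(OBJS)
	$(CXX) -o game $(OBJS)     # editing physics.cpp: relink only...

%.o: %.cpp
	$(CXX) -c $< -o $@         # ...after recompiling just physics.o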

This is a reasonable approach for small projects that want a simple (source-only) distribution mechanism to avoid binary file format distribution problems (a description that fits most of Sean Barrett's libraries, which you are referencing). Once a project grows beyond the point where any change should cause a full recompile, it starts to become exponentially less of a great idea.

