
Maik Klein

Member Since 14 Feb 2012
Offline Last Active Jul 29 2016 03:04 PM

Posts I've Made

In Topic: Performance questions regarding an entity component system

12 February 2016 - 12:35 PM

Does it do what you want within acceptable performance?
If this is the case then you could stick with it without impact.

I am probably overthinking it; there is no way to know what really works at this stage. You are right, I should just stick with it.

In Topic: Performance questions regarding an entity component system

12 February 2016 - 08:04 AM


Any tips are greatly appreciated.

There weren't really any questions there, so my only tip is to profile it.

Profile before, during, and after any changes. Make absolutely certain you are changing things that really matter, and verify you measurably changed them for the better.

Have results from your profiling tools that say something that was cache inefficient is now cache efficient, or have a specific reduction in microseconds (or milliseconds if that's your problem). Always measure before and after, at the least; measure more if you can.


It is hard to do this because the designs are so different. I would have to test multiple implementations, which would just cost too much time. But I have some microbenchmarks, like linear iteration vs. linear iteration plus a few jumps. It is just not a very good indicator because I have no idea what the data will look like in a real game.




The line with "jumps" jumps randomly forward somewhere in between [0, N) in memory, and the horizontal axis goes from N = 0 to 100.


The bigger the jumps, the bigger the difference. This is with a data structure of size 24 bytes.


With 24 bytes, 1,000,000 iterations, and random jumps in between [0, 100), the linear iteration will be 2.25 times faster.



That gives me close to perfect cache efficiency.

What about situations where a routine needs access to two or more components? What about where the relationship between the two components isn't predictable -- e.g. each component of class A is linked to a random instance of class B (many to 1 and/or 1 to 1)?


Accessing two or more components is basically free. If the relationship isn't predictable, there are two cases: if the system holds the pointers, you can still iterate cache-efficiently, but if the components contain the pointers, then you get random jumps in memory.


But I don't think that the latter case will happen that often.
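The two cases above can be sketched like this. The component types (`Position`, `Velocity`) and function names are illustrative, not taken from any real ECS: in the first case the system pairs the arrays by index and both are streamed linearly; in the second, each component carries a pointer to its partner, so the partner reads can land anywhere.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical component types for illustration.
struct Position { float x, y, z; };
struct Velocity { float dx, dy, dz; };

// Case 1: the system owns the pairing. Both arrays are walked front to
// back, so every access is a predictable linear read.
void integrate_linear(std::vector<Position>& pos,
                      const std::vector<Velocity>& vel) {
    for (std::size_t i = 0; i < pos.size(); ++i) {
        pos[i].x += vel[i].dx;
        pos[i].y += vel[i].dy;
        pos[i].z += vel[i].dz;
    }
}

// Case 2: each component carries a pointer to its partner. The outer
// array is still walked linearly, but the Velocity reads follow pointers
// that may land anywhere, producing the random jumps discussed above.
struct LinkedPosition {
    float x, y, z;
    const Velocity* vel; // points at an arbitrary Velocity instance
};

void integrate_linked(std::vector<LinkedPosition>& pos) {
    for (auto& p : pos) {
        p.x += p.vel->dx;
        p.y += p.vel->dy;
        p.z += p.vel->dz;
    }
}
```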

In Topic: Am I crazy for wanting to switch from Ue4 to Unity?

28 April 2015 - 12:16 PM


C++ compile times are longer than they ought to be (longer than comparably complex code in more modern languages by orders of magnitude). This is unfortunate, but there are steps you can take to reduce compile times.

Much of that cost is also the source of the language's strengths. I don't think they are "longer than they ought to be". Yes, it takes more time, but that is a cost paid for an actual benefit.


The performance killers are well documented. Opening up every #include is time consuming, and features like template expansion cause compilation time to grow rapidly. However, the benefits from those compilation costs can be amazing.


One of the strongest features of the language is that the compiler digs deeply into every function, searching for things to elide and eliminate, nested calls to inline, and code to relocate and merge, evaluating an enormous number of potential substitutions in order to save a few nanoseconds at runtime. The deeply nested call trees that ultimately resolve to a single data read, and the logic that in some builds vanishes completely, are the opposite of what more modern languages offer. It is possible to get some of that benefit with JIT compilation and hotspot analysis done at run time, but that moves the cost to a different time, which is often unacceptable in games. Sure, it is fine if one of the load-balanced business servers in a server rack slows down for a moment for a hotspot optimization; not so much if the game stutters.


The compilation model used by C and C++ and several other older languages includes features that are completely at odds with features in more modern languages. You cannot perform the heavy optimizations (the heavy stripping of dead code, extracting logic from common classes or from deeply inlined calls, the complete removal of unused logic and functions) while at the same time pulling in features like complete reflection of classes.




So yes, the compilation times can be much faster in modern languages.  For fast iteration that is a good thing.  But since games are still pushing computers to their limits, there are many times when those extra compile times provide major benefits that cannot be recouped in modern languages.


It is a tradeoff. Use the right tools.



The same for switching from ue4 to unity. They are similar tools, but they have differences. If one is a better fit for the project at hand, use that one.


This might interest you http://blogs.unity3d.com/2014/05/20/the-future-of-scripting-in-unity/

In Topic: Am I crazy for wanting to switch from Ue4 to Unity?

25 April 2015 - 03:34 AM

Well I am following Ue4 guidelines. I started with the ShooterGame example which already had around 70 header files.


They create a file called something like MyGameClasses.h, which includes every header file from your project. Then they include this header in MyGame.h, and MyGame.h needs to be in every file that you create.


Note that MyGameClasses.h is auto-generated by their build tools. Then they use something called a unity build, where they put everything in one file and then hit compile.


I ripped everything apart and am now managing my headers manually, with proper forward declarations etc., and I turned the unity build off. They also have a tool that parses every header file to generate the necessary reflection code, which also needs some time to run. I have also moved most #include directives into the .cpp files so that I can avoid an additional header parse.


Now I am pretty sure I can optimize the build times a bit more, because it took me 3 hours to go over every file and fix the dependencies and I probably made a few mistakes, but I don't think there are any low-hanging fruits left.


As for the architecture, I don't think I have much freedom if I want to use Unreal's gameplay framework. But I still don't know why it has 27 dependencies; at least I cannot find that many. I think it has something to do with their build tool and the way they generate the reflection code. I should investigate further.


I also do not want to use Blueprint; I am not a big fan of visual scripting, and the Blueprint system comes with its own problems.


I am thinking about integrating D or C# but I am not sure how much work that would require and I really have no experience with this kind of stuff.

In Topic: Am I crazy for wanting to switch from Ue4 to Unity?

24 April 2015 - 07:14 PM


A simple change in my Character.h will result in a 100 sec build time.


Honestly, 100 seconds is nothing to complain about. Actually 100 seconds for a build is blazing fast compared to the stuff I have to deal with daily. You can barely get up, walk to the kitchen and pour yourself a cup of coffee in that time!


If you want to get stuff done stick with the tech you're familiar with. Switching your entire technology base just because you don't like to wait two minutes for a build is just insane.




Fun fact, while typing this post originally I was actually waiting for a build to complete. I can guarantee you it took longer than 100 seconds ;)


Just to be clear, we are talking about incremental builds, right? I usually program with some sort of REPL where I hit the compile button several times a minute. I probably have to lose that habit when I code in C++.


I've found programs (games) powered by Unreal to be much more responsive and less buggy than comparable games made with Unity.

I could be wrong - just a feeling.

However, as Radikalizm (cool name!) said: 2 minutes is nothing. You are blessed. ;)

Unity's build time is deceptive, since you're not really compiling to machine code at all. I'm not a .NET wizard, but it gets compiled at least once more at runtime.

But it's JIT'ed, so you won't notice the "second" compile time. And if you do a release build, I think Unity converts the IL code to C++. I think they call it IL2CPP; I'm not sure if it is already live, though.