# Performance questions regarding an entity component system


## Recommended Posts

I recently wrote a small article about an entity component system: https://maikklein.github.io/2016/01/14/Entity-Component-System/

The gist of it is that I lay out components into groups that are completely contiguous in memory, and then I filter those groups at compile time. That gives me close to perfect cache efficiency.

A problem with this approach was that compile times went up quite a bit, but I recently ported the ECS to D, which compiles in under a second.

Another problem is that adding components is a bit expensive. If I want to add a component at runtime, I have to copy all of the entity's components to a new group. For example, if an entity is inside `component group<A,B,C>` and I want to add another component, I need to copy its components into `component group<A,B,C,D>`.

This would make temporary components a bit expensive; at this stage I am not sure how big of a problem that would be.

Also, I currently implemented handles as "smart ptrs", which seemed like a nice idea at the time: it was super easy to implement and gives me great performance. But I just realized that this makes serialization a nightmare.

But switching to good old handles like

```cpp
struct Handle {
    int id;
    int counter;
};
```

would be a problem, because I move my components in memory to keep them contiguous. That is not a big issue with my "smart ptrs", because they automatically update and become null when the entity is deleted, but it doesn't really work with plain old handles like the struct posted above.

This makes me wonder whether I should switch to a different design altogether.

I think the most common ECS design is something like this:

Each system already knows which entities it needs to iterate over.

For example:

PrintSystem: requires NameComponent and PrintComponent => EntityId1, EntityId30, EntityId44, EntityId64

Then the system is probably a lot more flexible and easier to implement. The entity list is even contiguous in memory, but the system doesn't necessarily iterate over the components contiguously.

For example, the components of EntityId1 and EntityId30 could lie far apart in memory.

Any tips are greatly appreciated.

Edited by Maik Klein

##### Share on other sites

> Any tips are greatly appreciated.

There weren't really any questions there, so my only tip is to profile it.

Profile before, during, and after any changes. Make absolutely certain you are changing things that really matter, and verify you measurably changed them for the better.

Have results from your profiling tools that say something that was cache-inefficient is now cache-efficient, or have a specific reduction in microseconds (or milliseconds if that's your problem). Always measure before and after, at the least. Measure more if you can.

##### Share on other sites

> That gives me close to perfect cache efficiency.

What about situations where a routine needs access to two or more components? What about cases where the relationship between the two components isn't predictable, e.g. each component of class A is linked to a random instance of class B (many-to-one and/or one-to-one)?

Edited by Hodgman

##### Share on other sites

> Any tips are greatly appreciated.
>
> There weren't really any questions there, so my only tip is to profile it.
>
> Profile before, during, and after any changes. Make absolutely certain you are changing things that really matter, and verify you measurably changed them for the better.
>
> Have results from your profiling tools that say something that was cache-inefficient is now cache-efficient, or have a specific reduction in microseconds (or milliseconds if that's your problem). Always measure before and after, at the least. Measure more if you can.

It is hard to do this because the designs are so different; I would have to test multiple implementations, which would just cost too much time. But I have some microbenchmarks, like linear iteration vs. linear iteration plus a few jumps. It is just not a very good indicator, because I have no idea what the data will look like in a real game.

The line with "jumps" jumps randomly forward somewhere between [0, N) in memory, and the horizontal axis is N = 0 to 100.

The bigger the jumps, the bigger the difference. This is with a data structure of size 24 bytes.

With 24 bytes, 1,000,000 iterations, and random jumps between [0, 100), the linear iteration is 2.25 times faster.

> That gives me close to perfect cache efficiency.
>
> What about situations where a routine needs access to two or more components? What about cases where the relationship between the two components isn't predictable, e.g. each component of class A is linked to a random instance of class B (many-to-one and/or one-to-one)?

Accessing two or more components is basically free. If the relationship isn't predictable, there are two cases: if a system holds the pointers, you can still iterate cache-efficiently, but if the components themselves contain the pointers, you get random jumps in memory.

But I don't think the latter case will happen very often.

Edited by Maik Klein

##### Share on other sites
Does it do what you want within acceptable performance?
If that is the case, then you can stick with it without any downside.

##### Share on other sites

> Does it do what you want within acceptable performance?
> If that is the case, then you can stick with it without any downside.

I am probably overthinking it; there is no way to know what really works at this stage. You are right, I should just stick with it.

Edited by Maik Klein

##### Share on other sites

The ability to define new entity types from predefined component types without code access is the primary use for an ECS.

ECS as a data-oriented, cache-friendly optimization is a different matter. It is an optimization, only required when profiling or recent experience dictates, and it will usually come only after all possible clock cycles have already been wrung out of the render routines. To do so under other circumstances is most likely premature optimization.

> Any tips are greatly appreciated.

If you don't need to define new entity types without code access, you don't need an ECS.

If you don't need to define new entity types without code access, a simpler non-ECS AoS approach should be prototyped first, and discarded in favor of an ECS only if it is obviously too slow. Never over-engineer stuff: the K.I.S.S. principle.

If you haven't already built the game, optimized all of the rendering, and found that profiling says update is the next biggest bottleneck, you don't need an ECS as an optimization (yet).

If your last game was similar and required an ECS, you may need an ECS.

Hardware is an ever-shifting target. Don't guess; profile. Slap a timer on it and see what's really going on. "In profiling we trust, all others pay cash."

Edited by Norman Barrows