hyyou

C++ ECS : mega entity array VS many component arrays


I have recently read about T-machine's Entity Component System (ECS) data structure. (link)

Its "best" version (the 4th iteration) can be summarized in a single image (copied from T-machine's site):

[Image: T-machine's single "mega array" entity/component layout]

This approach also appears in a CppCon 2015 video, with slides and source code ( https://github.com/CppCon/CppCon2015/tree/master/Tutorials/Implementation of a component-based entity system in modern C++ ).

Many people (including me) believe that the above approach lacks data locality, because Position-1, Position-2, Position-3, ... are not stored in a contiguous array.

However, many attempts to make it contiguous in an elegant way have failed. (e.g. the Stack Overflow link)

  • I also realized that, to make it contiguous, the cost of an Entity <-> Component query is unavoidably and significantly higher. (I profiled it)
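To be clear, the kind of contiguous layout I mean is roughly this (a minimal sketch; the World struct and the component types are only illustrative, not my engine's code):

#include <vector>

// Illustrative components (not from my engine).
struct Position { float x, y; };
struct Velocity { float x, y; };

// "Many component arrays": each component type lives in its own contiguous array,
// so iterating over every Position touches one tightly packed block of memory.
struct World {
    std::vector<Position> positions;   // Position-1, Position-2, Position-3, ...
    std::vector<Velocity> velocities;  // Velocity-1, Velocity-2, ...
    // An extra mapping (entity id -> index in each array) is still needed for
    // Entity <-> Component queries, and that lookup is the cost I profiled above.
};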

I would like to hear opinions on this.

I know it depends on the data-access pattern (i.e. cache misses). I still want to hear more opinions/arguments about it, because I may need to rewrite a large part of my engine to support 30,000+ entities (personal greed).

Thanks.

 

 


Out of curiosity + stubbornness, I converted my engine to use T-machine's approach.

Today, I found that T-machine's technique is 2-3% slower for my game.

It is a little depressing, given that T-machine's approach is far more difficult to implement.

Kylotan is absolutely correct. 

IMHO, it is very hard for an ECS to reach 30,000 non-trivial entities (without multithreading) while maintaining 60 fps.


I was going to post some harsh criticism of the linked T-Machine article, but (along with the previous, well-known series) it speaks for itself: figuring out what's wrong with it is a valuable learning experience that everyone deserves to have.

Just to point out the biggest problem: if components aren't the same size, they cannot be stored in the same "mega array"; and if they are made the same size, they waste memory (if padded) and/or coherent access (if the array contains homogeneous "handles" to data stored elsewhere). There would have to be a very good reason to do that.


The biggest problem can (arguably) be solved by using an assemblage:      https://gamedev.stackexchange.com/questions/58693/grouping-entities-of-the-same-component-set-into-linear-memory/58701#58701
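Roughly, the assemblage idea groups entities that share the same component set into their own linear arrays, something like this (a minimal sketch with made-up type names):

#include <vector>

struct Position { float x, y; };
struct Velocity { float x, y; };
struct Sprite   { int textureId; };

// One "assemblage" per distinct component set; the components of all entities
// belonging to that set are stored in parallel, contiguous arrays.
struct MovingAssemblage {            // entities with { Position, Velocity }
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

struct DrawableAssemblage {          // entities with { Position, Sprite }
    std::vector<Position> positions;
    std::vector<Sprite>   sprites;
};
// Adding or removing a component means moving an entity to a different
// assemblage, which is where most of the disadvantages below come from.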

However, this approach introduces a lot of inherent disadvantages:

  • micro memory management is needed
  • it is harder to code
  • it is hard to remove a component
  • some data locality between components of the same type is lost
  • pooling is less efficient at the entity level
  • an entity's type must be established before it is created

I believed the most tempting performance advantage would be that every query within a single entity is very fast.      However, my profiling results suggest that I was wrong again.

Oh, it took me ~2 weeks to accept and believe this fact  (sad face)

Edited by hyyou

A proliferation of arrays of different aggregates of components is the opposite of a single array of heterogeneous components. Which of the two flawed approaches have you tried?

Imposing arbitrary constraints (for example, that components of entities with different component sets should be segregated) and then suffering the consequences (entities cannot gain or lose components, at least not in a neat way; visiting all components of a certain type is inefficient because of the noncontiguous layout and complicated because of the multiple arrays; etc.) without thinking of more sophisticated techniques is a typical stylistic trait of the T-Machine blog posts.

 

Quote

Which of the two flawed approaches have you tried?

In my flawed prototype, I mixed both of them.

Every entity is stored in a single array, and each entity must use less than 1024 bytes.

In the stress test, I reserved [Entity 1 = 1024 bytes][Entity 2 = 1024 bytes] .... as one contiguous block of 30,000 slots.

For example, in some bad cases it can look like this:

  • Entity 1 = [Position  8 bytes][Velocity  16 bytes][ 1000 bytes wasted ]
  • Entity 2 = [GraphicHuge 524 bytes][Position 8 bytes][ 492 bytes wasted ]  ....

There are ways to optimize this (e.g. a buddy allocator), but in a nutshell, that is how it works.
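A minimal sketch of one of those fixed-size slots (the numbers match the stress test above, but the helper and type names are just for illustration):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr std::size_t kSlotSize = 1024;   // every entity gets a fixed 1024-byte budget

// One slot of the "mega array"; components are packed at the front,
// the remaining bytes are wasted.
struct EntitySlot {
    std::uint8_t bytes[kSlotSize];
};

std::vector<EntitySlot> entities(30000);  // 30,000 contiguous fixed-size slots

// Illustrative helper: write a (trivially copyable) component into a slot at a
// byte offset. A real implementation would also keep a per-entity table of
// offsets and component types.
template <typename T>
void WriteComponent(EntitySlot& slot, std::size_t offset, const T& value) {
    std::memcpy(slot.bytes + offset, &value, sizeof(T));
}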


For the record, after optimizing my own ECS I can say the following:

Most performance is gained not by how you lay out components, but by how you optimize iteration. For example, if you want to iterate over physics + position at some point:

for(auto& entity : entities.WithComponents<Position, Physics>())
{
	// ... update the entity here using its Position and Physics components
}

A primitive approach might build a dynamic list (a vector) by checking all entities, return this, and then iterate over it again.
A better solution might involve iterators to save on the dynamic allocations.
An even better solution involves building the set of entities with those components on the first call, and then just invalidating or updating this cached set whenever entities are modified.

This step gave the most performance in my case, and should be pretty much mandatory if you want to handle 30,000 entities, which will likely have vastly different sets of components.
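A very rough sketch of that cached-set idea (the component mask and the entity map here are simplifications, not my actual interface):

#include <cstdint>
#include <unordered_map>
#include <vector>

using Entity = std::uint32_t;
using ComponentMask = std::uint64_t;   // one bit per component type (simplified)

// Cached query: the list of matching entities is built on the first call and
// reused afterwards; it is invalidated whenever entities are modified.
struct QueryCache {
    std::unordered_map<ComponentMask, std::vector<Entity>> cache;

    const std::vector<Entity>& EntitiesWith(
        ComponentMask mask,
        const std::unordered_map<Entity, ComponentMask>& allEntities)
    {
        auto found = cache.find(mask);
        if (found != cache.end())
            return found->second;              // cache hit: no scan, no allocation

        std::vector<Entity>& result = cache[mask];
        for (const auto& [entity, components] : allEntities)
            if ((components & mask) == mask)   // entity has at least the requested components
                result.push_back(entity);
        return result;
    }

    void OnEntitiesModified() { cache.clear(); }  // simplest possible invalidation
};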

 

Storing the components in a contiguous array actually reduced performance at first, as some have mentioned, due to the overhead of accessing them from different places. What I actually ended up doing was putting the pointers to the components inside my cache-set too, using a templated solution to generate a cache structure like this:

struct Cache // "generated" by template
{
	// one tightly packed tuple of component pointers per matching entity
	std::vector<std::tuple<Position*, Physics*>> components;
};

The components themselves are stored in a contiguous array and put into this cache, which is, I believe, the fastest you can get when iterating over distinct sets of components, as this cache-array is as tightly packed as possible under those circumstances and works with the least overhead. Note that the actual implementation is really complicated, especially the part about updating the cache (since I didn't want to rebuild it on every change), but the results sure paid off in terms of performance. I don't have the exact figures, but all optimizations together gave about a 40% decrease in frametime for a test set of 10,000 entities with animated/static sprites & collision.
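Iterating that cache then looks roughly like this (simplified; the function and the loop body are only illustrative):

// One linear sweep over the tightly packed (Position*, Physics*) tuples.
void UpdateMovement(Cache& cache)
{
	for (auto& [position, physics] : cache.components)
	{
		// ... read from *physics and write to *position here
	}
}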

You should decide for yourself whether it's worth the trouble; as others have said, getting the system to be usable should be the priority. Though if done right, you can apply all those optimizations in the background without affecting the system's interface, so once you see a need you can still resort to optimizing.


@Juliean

Thanks!  That is consistent with my profiler results.

I always thought that the profiling result was wrong, because I use a stack allocator to allocate this as the return result:

entities.WithComponents<Position>()

Copying an array shouldn't be very expensive, right?   I was wrong.

Thanks a lot, Juliean!!!   If you hadn't said it explicitly, I would have ignored it .... perhaps forever.    :)  :)  :)

Edited by hyyou

That's just another rewording of the unwieldy T-Machine stuff, and the answers below it tell you all you need to know:

  • "The architecture was more of a pain in the butt than naive object systems and had no measurable performance advantages over some of the more flexible designs I compared it to."
  • " What you've done is re-engineered C++ objects. The reason why this feels obvious is that if you replace the word "entity" with "class" and "component" with "member" this is a standard OOP design using mixins."

There's no silver bullet that will let you handle 30,000 heterogeneous entities without compromises. If a large number of those entities are homogeneous then you might be able to smash those, or the 'hot' data in them, into some contiguous arrays and see some significant benefit. But in the general case, what you want to do isn't really doable.

Remember there are simpler ways to get some cache-coherence benefits - store important properties inline instead of via pointers, move lesser-accessed data outside of the entities so that the 'hot' data is stored more compactly, allocate things from pools instead of via naive `new` calls, etc.
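For illustration, a tiny sketch of that kind of hot/cold split (the types and field names are invented for the example):

#include <memory>
#include <string>
#include <vector>

// Rarely accessed ("cold") data lives behind a pointer...
struct MonsterCold {
    std::string name;                    // debug/editor data, rarely touched per frame
    // ... other infrequently used fields ...
};

// ...so the per-frame loop only walks the compact "hot" part, stored inline.
struct MonsterHot {
    float x, y;                          // hot properties stored inline, not via pointers
    float vx, vy;
    std::unique_ptr<MonsterCold> cold;   // lesser-accessed data moved out of the hot path
};

std::vector<MonsterHot> monsters;        // hot data packed contiguously; allocate cold parts from a pool if needed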

