
New Changes in dustArtemis

Posted by TheChubu, 26 June 2014 · 529 views

dustArtemis artemis ecs java entity component system
Hi! In this entry I'm going to review the latest changes in dustArtemis, along with some thoughts on a potentially big performance issue.

dustArtemis


dustArtemis is a fork of Artemis Entity System, a small BSD-licensed Java framework for setting up Entities, Components and Systems.

All the changes!

Not terribly interesting, actually. Probably the only commit worth mentioning is the first one.

Are you really sure you want to process this system?

Vanilla Artemis had a kinda silly situation. For each world "tick", only the active systems are processed, more or less like this:
for ( System system : systems ) {
    if ( system.isActive() ) {
        system.process();
    }
}
Disclaimer: Using K&R for space reasons, not because I like it.

Now, inside the process method, this silly thing happened:
if ( checkProcessing() ) {
    reallyProcessSystem();
}
Immediately inside the system, another check happened. Basically, there were two "levels" at which the system could be active. The first one was defined by a simple "active" flag (actually, it was called "passive", but I digress...) that just told the World instance, "Hey dude, process me!".

Now, this second check wasn't defined by a simple flag but by an overridden method. So if you inherited from EntitySystem, you had to provide your own checkProcessing method, which just returned true in 95% of the cases.
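To make the point concrete, here is a sketch of the boilerplate the old scheme forced on every system. The EntitySystem shape below is approximated from the description above, not the exact Artemis API:

```java
// Approximated sketch of the old two-level scheme (names assumed, not the real API).
abstract class EntitySystem {
    // The second "level" of activity: an overridable check.
    protected abstract boolean checkProcessing();

    final void process() {
        if (checkProcessing()) {
            processEntities();
        }
    }

    protected abstract void processEntities();
}

// What 95% of systems ended up writing:
class MovementSystem extends EntitySystem {
    @Override protected boolean checkProcessing() { return true; }
    @Override protected void processEntities() { /* per-entity work */ }
}
```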

I understand it had a purpose. In IntervalEntitySystem, the "active" flag was just what it sounded like, but checkProcessing was the method that checked whether enough time had passed for the system to actually do something.

Still, it seemed like a shoehorned solution to a specific problem, so I just decided to get rid of the checkProcessing method. Moreover, that specific problem is already taken care of by Artemis: just use the begin method.

You're just going through a phase

EntitySystem class provides a few hooks for additional processing beyond the usual "for all entities: do something". The process method actually looks like this:
public final void process (){
	begin();
	processEntities( actives );
	end();
}
The default begin and end methods do nothing; you're free to override them. So I just added a new boolean flag to IntervalEntitySystem and made the begin method do the time interval calculation to see if it was time for the system to process its entities. Then I just needed to add an "if isTime: process entities" check to the processEntities method.
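The flag-plus-begin approach can be sketched like this. This is a minimal, self-contained approximation of the idea, not the actual dustArtemis code: the names (interval, acc, isTime, run) and the delta parameter are assumptions for illustration:

```java
// Sketch of an interval system where begin() does the time check (names assumed).
abstract class IntervalEntitySystem {
    private final float interval;
    private float acc;          // time accumulated since last run
    private boolean isTime;     // set by begin(), read by processEntities()

    IntervalEntitySystem(float interval) { this.interval = interval; }

    // Called once per world tick with the frame delta.
    final void process(float delta) {
        begin(delta);
        processEntities();
        end();
    }

    void begin(float delta) {
        acc += delta;
        if (acc >= interval) {
            acc -= interval;
            isTime = true;
        }
    }

    void processEntities() {
        if (isTime) {
            run();
            isTime = false;
        }
    }

    void end() {}

    abstract void run();
}
```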

So, about that performance problem...

That was quite long for a 3-line change in the codebase, right? Well, there is something a bit more interesting: entity removal and modification.

Adding, removing and changing entities entails the following procedures:
  • Notify World instance about the change.
  • Notify all Systems about the change.
  • Actually add/remove the Entity in a system's list of entities.
The second step involves a check method call on each system. It verifies whether the Entity has all the components the System requires to be interested in it. While the check itself is kinda lengthy, it's quite fast.
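That kind of check boils down to bit operations. Here's an illustrative sketch (the class and method names are assumptions, not the Artemis API): the system is interested in an entity only if the entity's component bits contain all of the system's required bits.

```java
import java.util.BitSet;

// Illustrative component-interest check: required AND components == required.
final class AspectCheck {
    static boolean isInterested(BitSet required, BitSet entityComponents) {
        BitSet tmp = (BitSet) required.clone();
        tmp.and(entityComponents);
        return tmp.equals(required);
    }
}
```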

Adding entities is quite fast too:
private final void insertToSystem ( final Entity e ){
	actives.add( e );
	e.systemBits.set( systemIndex );
	inserted( e );
}
Add to the list, set a bit in the entity's BitSet, then call the inserted method. In the worst case the actives backing array gets resized, but that will only happen a few times, mostly at level startup.

Now removal, that's the ugly one:
private final void removeFromSystem ( final Entity e ){
	actives.remove( e );
	e.systemBits.clear( systemIndex );
	removed( e );
}
That actives.remove( e ) call? Fucking. Linear. Search.

This means that if for some reason (say, removing a tag component) you change a bunch of entities so that a bunch of systems aren't interested in those entities anymore, they'll get removed from each of the systems' actives arrays by linearly searching for each entity you want to remove.

It works okay for at most a couple hundred entity changes if you're using a good CPU. Now, if you want to change a couple thousand, it won't work.

As a test case, I added 200k entities and changed a single component on all of them: it took eight seconds to remove them all from a single system on my Intel i5 2500. And you thought that 100ms spike was bad!

Gettin' solutions

Being reasonable, it won't be frequent to add 200k entities and change all of them in a single go, but you will have a couple dozen systems, and you can easily see how the cost would add up. Suddenly you have to think carefully about removing a component from an entity, never mind if you have to change lots of entities.

The idea behind ECS is that these changes should be possible; flexibility should be king, so there has to be a way to make this process more efficient. The essence of the actives Bag is that it's an unordered array, so iterating over actives is efficient.
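For reference, here's a minimal sketch of such an unordered Bag (an assumed implementation; the real dustArtemis Bag differs in details). Removal swaps the last element into the freed slot, which keeps iteration dense but changes indices:

```java
import java.util.Arrays;

// Minimal unordered Bag sketch: O(1) add, O(1) swap-remove by index, no ordering kept.
final class Bag<T> {
    private Object[] data = new Object[16];
    private int size;

    void add(T item) {
        if (size == data.length) data = Arrays.copyOf(data, data.length * 2);
        data[size++] = item;
    }

    // Remove by index: the last item takes the removed slot.
    @SuppressWarnings("unchecked")
    T remove(int index) {
        T removed = (T) data[index];
        data[index] = data[--size];
        data[size] = null;
        return removed;
    }

    @SuppressWarnings("unchecked")
    T get(int index) { return (T) data[index]; }

    int size() { return size; }
}
```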

Trying to keep the actives Bag as it is, I thought about a few additional structures that could solve the problem, or at least amortize it a bit:
  • System Knows Best
Simply put a HashMap in the System, mapping each Entity instance to its index. Removal would become this:
// Remove entity from the index map.
int i = indexMap.remove( e );
// Remove entity by index (swaps the last entity into slot i).
actives.remove( i );

// If an entity was moved into the removed slot.
if ( i < actives.size() ) {
	// Update the moved entity's index.
	Entity tmp = actives.get( i );
	indexMap.put( tmp, i );
}
// Clear system bit and call removed event.
e.systemBits.clear( systemIndex );
removed( e );
Bag retains no ordering: it implements removal by simply moving the last item in the array into the removed position. So if you remove an entity from the middle of the Bag, some other entity will have its index changed to the one you just used to remove the previous entity.

The pros of this: friggin' fast removal. 200k removals? IIRC, time went down to 200ms.

The cons of this: more than double the memory usage. HashMap uses Entry objects for the stuff it stores, so you'd have around 30 additional bytes per active entity on each system.

You will pay one hash computation per addition and two hash computations per removal. For all added/removed entities, for all systems. Always.
  • Entity Knows Best
There are two versions of this one. The first one that comes to mind would be to put the HashMap on the Entity instead, so all Entities would know their indices in all the systems they're active in. Problem is, the memory impact would be worse: if you have 100 systems and 100k entities, instead of having 100 HashMaps you'd have 100k HashMaps. Which is bad.
Second version still involves linear search but to a much lesser degree:

Each Entity would have a Bag of small [system, index] tuples, one for each system it's active in. So when a system removes an Entity, it would work like this (pseudocode):
// Remove the SystemIndex pair which has this system.
SystemIndexPair siPair = e.systems.remove( (pair) -> pair.system == this );
// Retrieve index.
int i = siPair.index;
// Remove entity by index (swaps the last entity into slot i).
actives.remove( i );

// If an entity was moved into the removed slot.
if ( i < actives.size() ) {
	// Find (don't remove) the moved entity's pair and fix its index.
	Entity tmp = actives.get( i );
	SystemIndexPair tmpPair = tmp.systems.find( (pair) -> pair.system == this );
	tmpPair.index = i;
}
// Clear system bit and call removed event.
e.systemBits.clear( systemIndex );
removed( e );
Pros: two very small linear searches per removal (only as big as the number of systems the entity is active in). No hash lookup for additions like in the previous solution, just a simple Bag.add call.

Cons: it might be as costly as a HashMap in memory terms (a new SystemIndexPair per entity, per system it's active in).
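To show the "System Knows Best" idea end to end, here's a self-contained sketch combining the index map with swap-removal. The class and method names are illustrative, not dustArtemis code; it gives O(1) add and O(1) remove at the cost of the hashing and Entry overhead discussed above:

```java
import java.util.ArrayList;
import java.util.HashMap;

// Sketch: unordered list plus index map, O(1) add and O(1) remove (names assumed).
final class IndexedSet<T> {
    private final ArrayList<T> actives = new ArrayList<>();
    private final HashMap<T, Integer> indexMap = new HashMap<>();

    void add(T e) {
        indexMap.put(e, actives.size()); // one hash computation per addition
        actives.add(e);
    }

    void remove(T e) {
        int i = indexMap.remove(e);                  // hash #1
        T last = actives.remove(actives.size() - 1); // swap-remove via last slot
        if (i < actives.size()) {
            actives.set(i, last);
            indexMap.put(last, i);                   // hash #2: fix moved index
        }
    }

    boolean contains(T e) { return indexMap.containsKey(e); }
    int size() { return actives.size(); }
}
```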

Conclusions

It seems to me that these solutions aren't an overall win for all cases; they're more like a bunch of tradeoffs:

It is possible that entity removal/addition/modification as it is will work best for a small number of entities, that the SystemIndexPair solution will work best for a bigger set of entities, and that the HashMap solution will work best for an even bigger set of entities.

Well, that's enough writing for today, see you in the next entry!



