dustArtemis ECS Framework

New World instance builder

Posted by , 03 April 2016 - - - - - - · 1,064 views
dustArtemis, java, ecs, artemis and 3 more...

In this update I'll talk about the new interface for creating World instances, the World.Builder!



dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.


Old World

World instances were created on the spot before, like this:

World world = new World();
// System, and 'enabled' flag.
world.addObserver(new PhysicsSystem(), true);
world.addObserver(new GraphicsSystem(), true);
world.addObserver(new SoundSystem(), true);

// Then process every frame:
while (true) {
    world.process();
}
Fairly straightforward.

Thing is, this class was quite fragile:
  • You could add and remove systems whenever you wanted, so you had to keep checking both the list of observers and the map of observer types inside.
  • You could add an observer of the same type as an existing observer, in which case the existing one had to be evicted before adding the new one.
  • World.initialize() was single-use only: you could add an observer, initialize the world, then add another observer and break it, since the last observer wouldn't get the related component mappers injected, nor would its 'init' method be called.
  • You couldn't control the iteration order. If you removed an observer then re-added it, it'd always get processed last.
In short: too many moving parts and easy to break.

New World

I decided that I wanted a very specific initialization step for the World instance. Moreover, I now had a very specific need: I wanted to initialize observers in one specific order, and process them in a different one. The issue presented itself when I wanted to initialize the renderer before a few systems that depended on it, but actually render after those systems were processed in the game loop.

I also decided World would be immutable. You configured it, got your instance, and that's it. No observer tracking inside nor any sort of checks at runtime, just one initialization step and you're done.

I ended up using the Builder pattern: an additional mutable object holds all the data the World needs, and implements a "build" step that creates your immutable World instance. The interface looks like this now:
World world = World.builder()
  .observer(new PhysicsSystem(), 1)
  .observer(new GraphicsSystem(), 3)
  .observer(new SoundSystem(), 2)
  .build();
Enabled/disabled state is handled by the observer itself, and now you can specify an order number.

You get two flags, 'initializeByOrder' and 'processByOrder'. If one is set to true, your observers get initialized/processed by the provided order respectively; if set to false, they get initialized/processed in order of appearance. That way you can have one order for processing and a different order for initialization. Hooray!
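To make the sorting behavior concrete, here's a tiny self-contained sketch (illustrative names, not dustArtemis code) of what "by provided order" versus "by order of appearance" boils down to:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Objects;

public class WorldBuildSketch {
    // Stand-in for an (observer, order) pair collected by the builder.
    static class ObserverEntry {
        final String observer;
        final int order;
        ObserverEntry(String observer, int order) {
            this.observer = Objects.requireNonNull(observer); // builder rejects nulls early
            this.order = order;
        }
    }

    /** Returns observer names either sorted by 'order' or in order of appearance. */
    static List<String> sorted(List<ObserverEntry> entries, boolean byOrder) {
        List<ObserverEntry> copy = new ArrayList<>(entries);
        if (byOrder) {
            copy.sort(Comparator.comparingInt((ObserverEntry e) -> e.order));
        }
        List<String> result = new ArrayList<>();
        for (ObserverEntry e : copy) result.add(e.observer);
        return result;
    }

    public static void main(String[] args) {
        List<ObserverEntry> entries = List.of(
            new ObserverEntry("physics", 1),
            new ObserverEntry("graphics", 3),
            new ObserverEntry("sound", 2));
        System.out.println(sorted(entries, true));  // by provided order
        System.out.println(sorted(entries, false)); // by order of appearance
    }
}
```

The same entry list yields two different sequences, which is exactly why one flag can govern initialization and the other processing.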

The 'build' step just sorts the observers as needed, initializes them, then creates the World instance, passing the sorted observer array that dictates the processing order. The builder makes sure no observer passed in is null, to avoid any further checks later. The obtained World instance is immutable, i.e., you can't add/remove observers, nor set the 'data' field, which brings me to...

Data passing

Another feature I wanted is arbitrary data passing. Each system has a World instance, and you *can* extend that World instance to add whatever you need, but it becomes annoying when you have to downcast it like "((WorldSubClass)this.world).myMethod()" every time you use it inside an EntityObserver, since observers only know about "World", not any of its subclasses. So instead I added an additional (nullable) field:
World.builder().data(new SharedWorldData());
'data' is an Object field, so you can put whatever you want in it. This also deprecates the old "delta" field in World; if you want to keep track of delta times, make your own data object that does it.

You can use it like this:
// Inside some EntityObserver
SharedWorldData data = this.world.data();
float delta = data.delta;
WindowSettings settings = data.windowSettings();
'data()' is a generic method; it casts to "T" inside. So in theory you could do "SomeOtherClass data = this.world.data()" and it would compile, but you'd get a ClassCastException at runtime if data isn't of SomeOtherClass type. It's a fair trade-off, I think.
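For reference, the trick behind a generic accessor like 'data()' can be sketched like this (illustrative code, not the actual dustArtemis source):

```java
// Minimal sketch of a generic, unchecked-cast accessor like World.data().
public class DataSketch {
    private final Object data;

    DataSketch(Object data) { this.data = data; }

    @SuppressWarnings("unchecked")
    <T> T data() {
        // Unchecked cast: the caller's declared type decides what T is,
        // and the JVM only checks it at the assignment site.
        return (T) data;
    }

    public static void main(String[] args) {
        DataSketch world = new DataSketch("shared state");
        String s = world.data();      // fine: data really is a String
        System.out.println(s);
        // Integer i = world.data(); // compiles, but would throw
        //                           // ClassCastException at runtime
    }
}
```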

Another nice thing is that the 'data' object can be used as a context to, say, share data among systems, create your own event system, whatever you need. I currently use it to share the resource manager, a window object reference, and the time deltas between frames, for example.

Well, that's the new World Builder. I'll describe the new Injector later, cya!

Shuffling class responsibilities

Posted by , 24 January 2016 - - - - - - · 1,237 views
dustArtemis, java, ecs, artemis and 3 more...

In this update I'll talk about the latest release of dustArtemis, which changes component handling a bit.




dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.


ComponentManager and the old ways


This class had quite a few things bolted on as time passed. As a recap, remember that dustArtemis needs a couple of things to link an entity to a component:

  • The entity id.
  • The component collection where the component will be set.
  • The entity's component bit set, which marks what component types the entity has.
You had two ways to add a component to an entity:


// Manager reference.
ComponentManager cm = world.componentManager();
// Spatial mapper reference.
ComponentMapper<Spatial> spatials = world.getMapperFor(Spatial.class);
// New entity id.
int id = world.createEntity();
// We want to add this component to the entity.
Spatial s = new Spatial();
// First way:
cm.addComponent(id, s);
// Second way:
cm.addComponent(id, s, spatials);


The issues here aren't obvious. The first way of adding a component internally does a HashMap lookup to fetch the index of the component's type. It works something like this:


// Hash lookup Class → index.
int mapperIndex = indexOf(component.getClass());
// Add the component to the entity.
componentMappers[mapperIndex].add(entityId, component);
// Now flip the appropriate bit in the entity's component bit set.


Large volumes of these lookups end up showing in the profiler. You can see the worst case in some ECS frameworks that use hash maps to link components with entities, which means a hash lookup on every component access. So we want these to be as fast as possible.


The second way is the fastest, since the hash lookup happens when calling world.getMapperFor(type), which can be done only once at initialization, storing the mapper in a field. We also need a ComponentManager reference, so internally the ComponentMapper does something like this:


// Add the component to the entity, mapper knows its index.
componentMappers[mapper.index].add(entityId, component);
// Now flip the appropriate bit in the entity's component bit set.


No hash lookup, but the user needs a reference to both the mapper and the component manager for it to work.
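To make the difference concrete, here's a tiny self-contained sketch (illustrative names, not the dustArtemis API) of a per-call hash lookup versus a stored index:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: why a stored mapper index beats a per-call HashMap lookup.
public class MapperIndexSketch {
    static final Map<Class<?>, Integer> typeIndices = new HashMap<>();
    // Components laid out as [type index][entity id].
    static final Object[][] componentsByType = new Object[8][16];

    static int indexOf(Class<?> type) {
        return typeIndices.computeIfAbsent(type, t -> typeIndices.size());
    }

    // Slow path: hash lookup on every single call.
    static void addComponent(int entityId, Object component) {
        componentsByType[indexOf(component.getClass())][entityId] = component;
    }

    // Fast path: index resolved once (e.g. at system init) and reused.
    static void addComponent(int entityId, Object component, int mapperIndex) {
        componentsByType[mapperIndex][entityId] = component;
    }

    public static void main(String[] args) {
        int spatialIndex = indexOf(String.class); // "mapper" fetched once
        addComponent(0, "spatial-0", spatialIndex);
        addComponent(1, "spatial-1", spatialIndex);
        System.out.println(componentsByType[spatialIndex][1]);
    }
}
```

The fast path is a plain double array indexing, which is what the mapper-based call reduces to.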


The new way


What we want is to have only one way to do this, so we push some load onto the ComponentMapper until it becomes the new and shiny ComponentHandler (terrible names, I know).


We know that each system will have the mappers of the components it deals with. These get "injected" in an initialization step, automagically, by dustArtemis; you just have to declare a field like this:


// Field in some EntityObserver.
private ComponentMapper<Spatial> spatials;


A proper instance gets injected into that field in your EntityObserver and everything works. But the issue was that we also needed to fetch the ComponentManager.


What I did is simply push a few things onto the ComponentMapper, renaming it to ComponentHandler, since it doesn't only do mapping anymore. With our ComponentHandler we can do the following:


// Gets injected at runtime.
private ComponentHandler<Spatial> spatials;
// … then later in some entity observer method.
int id = world.createEntity();
Spatial s = new Spatial();
// Just add the component using the handler itself.
spatials.add(id, s);


The handler internally does something like this:



this.data[id] = component;


It knows both its index and the component manager that “owns” it. It has to access the component bits through the manager instead of having an array reference so the manager internally can resize and initialize the bitsets as needed.
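A self-contained sketch of what such a handler boils down to (field and method names are assumptions, not the actual dustArtemis API):

```java
// Sketch of a ComponentHandler-like class: it owns the component array and
// knows its type index, but flips bits in storage the "manager" owns.
public class HandlerSketch {
    static class ComponentHandler<T> {
        final Object[] data = new Object[64]; // components indexed by entity id
        final long[] componentBits;           // manager-owned, shared with handlers
        final int typeIndex;

        ComponentHandler(long[] componentBits, int typeIndex) {
            this.componentBits = componentBits;
            this.typeIndex = typeIndex;
        }

        void add(int entityId, T component) {
            data[entityId] = component;                 // this.data[id] = component
            componentBits[entityId] |= 1L << typeIndex; // flip the entity's bit
        }

        @SuppressWarnings("unchecked")
        T get(int entityId) { return (T) data[entityId]; }
    }

    public static void main(String[] args) {
        long[] bits = new long[64]; // one bit word per entity, manager-owned
        ComponentHandler<String> spatials = new ComponentHandler<>(bits, 3);
        spatials.add(7, "spatial");
        System.out.println(spatials.get(7));
        System.out.println((bits[7] & (1L << 3)) != 0); // entity 7 "has" type 3
    }
}
```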


The only way to get a ComponentHandler is to call world.getHandler(type), so there is no way to create stray instances. And thanks to the Injector class, they'll almost certainly be placed everywhere they're needed.


The *other* new ways


dustArtemis 1.1.0 has a couple of other things I'd like to talk about (the new World builder, the new Injector class, etc.) but that will be in another entry, cya later!



Entities can be a plain integer too!

Posted by , 22 February 2015 - - - - - - · 1,508 views
dustArtemis, ecs, artemis, entity and 3 more...
In this update I'll talk about the latest features of dustArtemis: int-based entities and component pooling.


dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.

Entity Objects

Artemis used the Entity class as a sort of abstraction; it gave the impression that entities were objects rather than plain IDs. Entity objects were kinda heavyweight, to be fair: they held a reference to a ComponentManager, a World instance, a UUID instance, a bit set instance for keeping track of components, and a long ID.

The first two were there to provide convenience methods, say, entity.addComponent(cmp) instead of world.getComponentManager().addComponent(entity, cmp), and the UUID was probably there for serialization purposes.

This makes creating Entity instances kinda cumbersome: whatever is in charge of creating one needs a World instance and a ComponentManager instance, and the Entity will create a UUID instance and a bit set instance by itself.

The first step I took was narrowing its scope: getting rid of the UUID (you can always add it as a component if you want) and using a plain 'int' ID. So now we're left with the World and ComponentManager instances.

This situation makes Entity instances ideal for pooling. At least in my mind, dustArtemis should provide fast and hopefully garbage-free ways to make new entities, associate entities with components, and process them.

Complementing Features

I implemented Entity instance pooling in the framework. I also added a nifty thing: a sort of ID allocator that guarantees that every time you need a new Entity, it will have the lowest free ID available.

This is great, since ever-increasing IDs are awful for the backing component arrays (they'd keep getting bigger and bigger to hold ever-larger entity IDs).

Integer Entities

The next step in my train of thought was: if the only things in an Entity instance are the World and ComponentManager references, and they're only there to provide convenience methods, then Entity is behaving just like a Java Integer instance, i.e., a crappy boxed int.

What if I just remove such conveniences and just use plain 'int' IDs for representing entities?
// This turns this code:
Entity e = world.createEntity();
e.addComponent( new Position() );
// Into this:
int eid = world.createEntity();
world.componentManager().addComponent( eid, new Position() );
More verbose? Yeah, kinda; you should totally hang onto that ComponentManager reference, it's easier that way. But it creates no garbage and needs no pointer indirection to fetch the ID.

Moreover, it removes the need for entity pooling; the ID allocator now does it for free, since that's what it was doing in the first place: managing unused ranges of IDs.

One thing I sidestepped was the bit set instance; that was actually useful data, needed to keep track of each entity's components. So I made it a ComponentManager detail: it holds an array of bit sets, and by indexing it with the entity ID it can find out what the entity's components are.

This makes a round trip through the ComponentManager necessary when creating entities, since it needs to create the bit set instance for that entity. Inside that World.createEntity call, the World notifies the ComponentManager with the new entity ID, and the manager does a null check to see if the entity has a bit set at that index; if it doesn't, it creates a new one. Not very pretty, but straightforward.
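The round trip can be sketched like this (illustrative, not dustArtemis source; names are made up):

```java
import java.util.Arrays;

// Sketch of lazy bit set creation during createEntity.
public class CreateEntitySketch {
    // Component bit words per entity id, created on demand.
    static long[][] bitsPerEntity = new long[4][];
    static int nextId = 0;

    static int createEntity() {
        int id = nextId++;
        if (id >= bitsPerEntity.length) { // grow the backing array as needed
            bitsPerEntity = Arrays.copyOf(bitsPerEntity, bitsPerEntity.length * 2);
        }
        if (bitsPerEntity[id] == null) {  // the null check: create bits lazily
            bitsPerEntity[id] = new long[2]; // e.g. room for 128 component types
        }
        return id;
    }

    public static void main(String[] args) {
        int a = createEntity();
        int b = createEntity();
        System.out.println(a + " " + b);              // ids are sequential
        System.out.println(bitsPerEntity[b] != null); // bit set exists now
    }
}
```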

Pooled Components

There are some components that might be handy if they were pooled. This is the interface I came up with for these cases:
// Registering a pooled component.
world.registerPoolable( Position.class, Position::new, Position::resetPosition );
// Adding a pooled component to an entity.
int eid = world.createEntity();
world.componentManager().addPooledComponent( Position.class );
Most definitely not in “fluent style”, but it works well enough.

registerPoolable needs a way to create new components of that type; that's why the second parameter is a Supplier<T> of that component, which can be a static factory method, a reference to Position's constructor like in the example, an object that implements the Supplier<T> interface, and so on. Pretty flexible.

The third parameter is optional. It's a Consumer<T> that can manipulate the component when the ComponentManager fetches an existing one from the pool; in the example it's a reference to a static method in Position. You might want to, say, reset the position component to 0 before it gets used by another entity, or you might deem that an unnecessary cost, in which case you can just avoid providing a resetter.
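The whole Supplier/Consumer arrangement can be sketched with a minimal stand-alone pool (this is not the dustArtemis implementation, just the idea):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Minimal component pool in the spirit of registerPoolable.
public class PoolSketch<T> {
    private final Supplier<T> factory;
    private final Consumer<T> resetter; // optional: may be null
    private final Deque<T> pool = new ArrayDeque<>();

    PoolSketch(Supplier<T> factory, Consumer<T> resetter) {
        this.factory = factory;
        this.resetter = resetter;
    }

    T obtain() {
        T c = pool.poll();
        if (c == null) return factory.get();      // pool empty: make a new one
        if (resetter != null) resetter.accept(c); // reset before reuse
        return c;
    }

    void free(T component) { pool.push(component); }

    static class Position {
        float x, y;
        static void reset(Position p) { p.x = 0; p.y = 0; }
    }

    public static void main(String[] args) {
        PoolSketch<Position> positions =
            new PoolSketch<>(Position::new, Position::reset);
        Position p = positions.obtain();
        p.x = 5;
        positions.free(p);
        Position q = positions.obtain(); // same instance, reset before reuse
        System.out.println(q == p);      // → true
        System.out.println(q.x);         // → 0.0
    }
}
```

Omitting the resetter just means the component comes back from the pool in whatever state it was left in.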

This has an impact on the ComponentManager, since now you can't just iterate over an entity's bits and remove all of its components the same way; some of them might be pooled. So entity “cleaning” (which happens when an entity is deleted) is now a two-step process: first pooled components are removed and returned to the pool, then regular components are removed.

This became rather easy using the fixed bit sets introduced in the last entry: just copy the entity's bits, AND them with the pooled bits, iterate, and return the components to the pool.
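In plain 'long' words, that cleanup looks something like this (illustrative sketch, single-word bit sets assumed):

```java
// Sketch of two-step entity cleanup: AND the entity's bits with the
// "pooled types" bits to find which components go back to a pool.
public class CleanupSketch {
    public static void main(String[] args) {
        long entityBits = 0b1011L; // entity has component types 0, 1 and 3
        long pooledBits = 0b1010L; // types 1 and 3 are registered as pooled

        long toPool = entityBits & pooledBits; // step 1: return these to pools
        while (toPool != 0) {
            int type = Long.numberOfTrailingZeros(toPool);
            System.out.println("return type " + type + " to its pool");
            toPool &= toPool - 1; // clear the lowest set bit
        }

        long toDrop = entityBits & ~pooledBits; // step 2: plain removals
        System.out.println("plain removals: " + Long.bitCount(toDrop)); // → 1
    }
}
```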

Imagine the Possibilities!

Together, the fixed bit sets, plain int entities, and the way ComponentManager handles components and mappers allowed a few tweaks around the framework, plus a few shortcuts I added to do some operations more efficiently (for example, adding/removing components without hash lookups).

That will be for another entry though, cya!

Entities and tracking their components

Posted by , 29 November 2014 - - - - - - · 1,309 views
dustArtemis, artemis, entity and 4 more...
In this update I'll talk about the latest feature of dustArtemis: fixed-length bit sets for component tracking!


dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.

Component-Entity relationship

dustArtemis still uses the original Artemis way of dealing with the relationship between entities and components. Each entity contains a bit set, and each component type has an index. When an entity contains a component, that component type's index is set in the entity's component bit set.

For example, say that Position's component index is 3; then bit 3 will be "turned on" in all entities that have a Position component. The highest index you will ever use amounts to how many different component types you have. If you have 30 different component types, only the first 30 bits of these bit sets will ever be used. Of course, you can instance as many of each component type as you need, since the arrays of components are unbounded.
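In code, the index-to-bit bookkeeping boils down to this (stand-alone sketch, single-word bit sets assumed for brevity):

```java
// Sketch of component tracking via bit indices, plus the aspect-style
// "is this system interested?" check done with a mask comparison.
public class ComponentBitsSketch {
    public static void main(String[] args) {
        final int POSITION = 3; // Position's component type index
        final int VELOCITY = 5; // another made-up component type index

        long entityBits = 0L;
        entityBits |= 1L << POSITION; // entity gains a Position

        // A system interested in both Position and Velocity:
        long aspect = (1L << POSITION) | (1L << VELOCITY);
        System.out.println((entityBits & aspect) == aspect); // → false

        entityBits |= 1L << VELOCITY; // entity gains a Velocity too
        System.out.println((entityBits & aspect) == aspect); // → true
    }
}
```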

This works very well because it's a compact way of tracking which entity has which component. It's also very practical for detecting when an EntitySystem is interested in an entity. An EntitySystem uses an Aspect as a matcher for the component bit sets of entities; if an entity's bit set compares favorably against the Aspect's bit sets, the EntitySystem adds the entity to its 'actives' entity list.

These comparisons are quite fast, albeit making the logic of handling each entity insertion/removal a bit branchy. In dustArtemis I left this part up to the JVM: insertion/removal checking is done with 2 bit flags and a switch block. HotSpot can profile them at runtime and decide whether it will use a jump table or a series of comparisons, and in which order they'll be made, given which one is most likely to be taken.

Enter OpenBitSet

The original Artemis used the JDK's standard BitSet class. This imposed a few restrictions. For example, you couldn't retrieve the backing 'long' array of the BitSet for easy iteration. Most operations modified the bit set they were called on, i.e., you couldn't do AND/OR checks along the bits of an array and 'break' on some condition; you had to use a temporary BitSet, copy the contents of one BitSet into it, AND/OR it with the other BitSet, then check the results.

I decided to take HPPC's route (a "primitive collections" library) and fork Apache Lucene's bit set, OpenBitSet (that's why there is now an org.apache package in dustArtemis sources).

OpenBitSet provided many operations that didn't modify the bit set, and exposed the backing 'long' array if you wanted to use it directly. It was designed for very, very big sets (e.g., it uses 'long' to index bits), so I trimmed down its implementation to make it more compact.

This was quite cool because now I could implement a simple iterator over the backing arrays directly (used, for example, to keep track of active entities in a system) and use non-mutating operations in Aspect checks.

Now the downsides...

While this was great for the few places where big bit sets were used, it wasn't so nice for the entities' component bit sets.

How many different kinds of components will you have? 20? 50? 100? Say that you have between 64 and 128 components, that means that you need 128 bit long bit sets, ie 2 longs, 16 bytes.

If your idealized cost of tracking what component types an entity has is 16 bytes (per Entity instance), how does practical usage measure up, given Java's lack of value types?

What the government doesn't want you to know about the JVM's costs

Bear in mind we're using a 64-bit JVM as reference here, and that the JVM does 8-byte alignment regardless of architecture.

Cost for an OpenBitSet instance is:
  • 12 bytes for object header.
  • 4 bytes for "int wlen" field.
  • 8 bytes for "long[] bits" reference.
24 bytes for an OpenBitSet instance.

Now we have to measure that "long[] bits" field, since it's a whole other object!
  • 12 bytes for object header.
  • 4 bytes for standard "int length" field of arrays.
  • 16 bytes for our 'long' data (8 bytes per 'long' * 2 'long's we're using for this example).
That's 32 bytes.

32 bytes + 24 bytes = 56 bytes in total for an OpenBitSet instance, compared to the 16 bytes of data we actually want. Never mind the additional indirection of fetching the 'bits' array instance from the bit set instance.

Enter FixedBitSet

This is an experimental implementation, so it might not be the best possible way to fix this issue, but it's a start!

You get 4 types of FixedBitSet: 64 bits, 128 bits, 192 bits and 256 bits. Obviously you're nuts if you have more than 256 component types.

The inheritance order is:
  • FixedBitSet
  • FixedBitSet64
  • FixedBitSet128
  • FixedBitSet192
  • FixedBitSet256
Each inherits the implementation of the previous one (except FixedBitSet64, since FixedBitSet is mostly abstract, so there isn't much to inherit), and only has to add the methods that deal with the new, longer bit set (i.e., there's no need to implement ANDing between two 64-bit sets in FixedBitSet128, since FixedBitSet64 already implements it).

Now, based on the cool implementation of OpenBitSet, we can do a lil' bit better by implementing fixed-length bit sets. There are two places where we use small bit sets: Entity instances and Aspect instances; both are used for tracking component indices.

The issue is, these bit sets are created without external help; you shouldn't have to know the bit sets exist at all. So we need a little help from DAConstants here. Instead of having the "APPROX_COMPONENT_TYPES" constant, you'll have to provide an actual, accurate count of component types, which will be loaded into a static constant used throughout dustArtemis to decide the exact size of bit set to use.

Inside the framework, the problem still exists. Everywhere except where they're initialized, code can't know if it's dealing with a FixedBitSet64 or a FixedBitSet256; we only know we're using a FixedBitSet that has a common set of operations (pop count, and, or, andNot, etc.).

This is essentially a problem that multiple dispatch could solve, this:
FixedBitSet a = new FixedBitSet64();
FixedBitSet b = new FixedBitSet192();
a.and(b);

should only AND 'word0' of both bit sets, which is essentially a "FixedBitSet64#and(FixedBitSet64)" call. But how does 'a' know that the FixedBitSet 'b' is of type FixedBitSet192? Short answer: it doesn't. Java uses single dispatch, so it can't choose which method to call based on the runtime type of the passed parameter.

A reasonable middle ground is that we know for a fact there won't be different types of FixedBitSet mixed together: if you specify that you will use 64 or fewer component types, there will only be FixedBitSet64 instances in the framework.

So this implementation becomes reasonable:
public void and ( FixedBitSet bits ) {
    this.and( (FixedBitSet64) bits ); // Calls the 64 bit implementation.
}
Then 'and' is overridden in each subclass of FixedBitSet so it casts to its own type.

The obvious issue here is that if you make that call passing a smaller bit set than the bit set you're ANDing, you get an exception (i.e., ANDing a 128-bit set with a 64-bit set tries to cast the latter to the 128-bit implementation and BAM, ClassCastException).

So instead I decided to add some indirection to get around the issue. On paper it's a combinatorial explosion, but it's lessened by the fact that each bit set is an extension of the previous one, so you don't need both the "64-bit set OR 128-bit set" implementation and its mirror image. Just one side is enough.
// Inside FixedBitSet64.
public void and ( FixedBitSet bits ) {
    bits.andThis( this ); // We know 'this' is FixedBitSet64, but not 'bits' type.
}
// Now inside FixedBitSet128.
public void andThis ( FixedBitSet64 bits ) {
    // Here we know 'this' is FixedBitSet128, so the proper
    // FixedBitSet64.and(FixedBitSet128) call can be made.
    bits.and( this );
}
This turns a simple call into two layers of indirection, but HotSpot knows very well how to inline these, given that we're always using the same subtype of FixedBitSet everywhere, so it becomes a call to the concrete implementation of the 'and' method we want.
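Here's a stand-alone, runnable sketch of the trick; the class names are shortened stand-ins (Set64/Set128), not the dustArtemis source:

```java
// Double dispatch via a callback: the callee reveals its own runtime type.
public class DoubleDispatchSketch {
    static abstract class FixedBitSet {
        abstract void and(FixedBitSet bits);
        abstract void andThis(Set64 caller);
    }

    static class Set64 extends FixedBitSet {
        long word0;
        void and(Set64 bits) { word0 &= bits.word0; } // concrete 64-bit AND
        @Override void and(FixedBitSet bits) {
            bits.andThis(this); // we know 'this' is Set64, but not 'bits' type
        }
        @Override void andThis(Set64 caller) { caller.and(this); }
    }

    static class Set128 extends Set64 {
        long word1;
        void and(Set128 bits) { and((Set64) bits); word1 &= bits.word1; }
        @Override void andThis(Set64 caller) {
            // Here both types are known: a 64-bit caller ANDing a 128-bit
            // set only touches word0.
            caller.and((Set64) this);
        }
    }

    public static void main(String[] args) {
        Set64 a = new Set64();
        a.word0 = 0b1100L;
        Set128 big = new Set128();
        big.word0 = 0b1010L;
        big.word1 = -1L; // word1 must be ignored by the 64-bit caller
        FixedBitSet b = big;
        a.and(b); // two hops, ends up in the concrete and(Set64)
        System.out.println(Long.toBinaryString(a.word0)); // → 1000
    }
}
```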

Yeah, and what did we gain from this again?

First and foremost, spaaaaaaace! Remember that an OpenBitSet capable of holding 128 bits had a size of 56 bytes? Let's see how much FixedBitSet128 uses:
  • 12 bytes for object header.
  • 8 bytes for 'word0'
  • 8 bytes for 'word1'
28 bytes, padded to 32 bytes. That's it. Moreover, the additional 'bits' array pointer indirection? Not a problem anymore.

FixedBitSet256, the biggest one implemented, is 48 bytes, still smaller than the 56 bytes needed for an OpenBitSet instance of 128 bits.

Never mind that the operations (AND, OR, pop count, etc.) in all FixedBitSets don't even use for-loops anymore, since they're implemented for the given size of the bit set. So there won't be any ugly size checking or branching in the code that gets executed.

What's the catch?

Hopefully, none at all! The only thing the user has to do is simply put in their dustArtemis config file how many component types they have (if left unspecified, it defaults to 64).

Well, that's all for this entry, as always you can check out dustArtemis repository, see y'all later!

Entity pooling and EntityObservers

Posted by , 18 October 2014 - - - - - - · 1,292 views
dustArtemis, Artemis, java, ecs and 3 more...
In this update: entity pooling and refactoring EntityObservers.


dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.

Entity pooling

Well, in an ECS framework, the first thing that comes to mind when dealing with pools is pooling the entities.

One of the changes in dustArtemis compared to the original Artemis is the IdAllocator. The IdAllocator is sort of a free-list memory allocator, but in charge of ID numbers for entities. It keeps track of the range of IDs (the positive 'int' space by default) that can be assigned to entities, and each time you "allocate" an ID, it returns the lowest free ID available.

This is very important, since IDs are used for indexing into component arrays; while BoundedBag gets rid of some of the hassle, if the provided IDs are sparse, the component bags will be too.
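A minimal sketch of a lowest-free-ID allocator (the real IdAllocator tracks ID ranges; this illustrative version just uses a sorted set of freed IDs):

```java
import java.util.TreeSet;

// Sketch in the spirit of IdAllocator: allocations always return the
// lowest free id, so freed ids get recycled before new ones are minted.
public class IdAllocatorSketch {
    private final TreeSet<Integer> freed = new TreeSet<>(); // recycled ids, sorted
    private int next = 0;                                   // never-used frontier

    int alloc() {
        Integer recycled = freed.pollFirst(); // lowest freed id wins
        return recycled != null ? recycled : next++;
    }

    void free(int id) { freed.add(id); }

    public static void main(String[] args) {
        IdAllocatorSketch ids = new IdAllocatorSketch();
        int a = ids.alloc(), b = ids.alloc(), c = ids.alloc(); // 0, 1, 2
        ids.free(b);
        System.out.println(ids.alloc()); // → 1 (lowest free id, not 3)
    }
}
```

Keeping IDs dense like this is what keeps the backing component arrays from growing past the live entity count.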

EntityManager is in charge of creating new Entity instances: it allocates a new ID from its IdAllocator and returns a new Entity instance. When entities are deleted, it frees their IDs.

The new PooledEntityManager pools Entity instances. For this, the Entity id had to be made mutable inside the framework, so the manager can assign it the lowest ID number possible and return it as if it were a new Entity.

Now, this is configurable thanks to two new constants in DAConstants: POOL_ENTITIES, a boolean which indicates if Entity instances are to be pooled or not, and MAX_POOLED_ENTITIES, an int value that indicates the max amount of entities that will be held in the entity pool.

If you set the limit to, say, 2000 entities, the manager won't store more than 2000 entities that are deleted. If you delete more than that, it will discard them.

By default it's set to Integer.MAX_VALUE, i.e., effectively unlimited. This means the pool will peak at more or less the number of entities you have alive over its entire run time. That value might be okay for you, or it might not, which is why you can configure it.

But you shouldn't pool objects!

It depends on your requirements. In a videogame, where a few milliseconds of GC is the most annoying thing ever, yes, you do. This is not because Entity objects are particularly heavy; it's about reducing GC pauses.

For example, on my system there is no apparent difference in performance between pooling entities and not pooling them at all. Allocation is fast in the JVM, really fast. The issue is when those entities are collected. If you go crazy (each bullet is an entity, each particle is an entity), you'll have lots of GC pauses. That's why pooling might be necessary.

Now, this is a double-edged sword. Pooling objects means they'll get promoted to the old generation, since they're long-lived. This is an issue because GC runs in the eden space are quite fast, while old-generation GC pauses are longer. So you'll be increasing the work the JVM has to do in the old-generation space.

As with everything: Profile, VisualVM is your friend.

Managers and Systems

This is something I thought was kinda silly. EntityObserver defines the various "events" all systems/managers can respond to: added entities, removed entities, changed entities, etc.

From there two "branches" sprout: one in the form of EntitySystems, the other in the form of Managers. So far so good.

Thing is, the only difference between the EntitySystem and Manager classes was that EntitySystems get "processed" and can be "active" or not. For the rest, both implemented EntityObserver, and both defined their own World field. Another difference was that World processed managers first, then entity systems. Also, EntityManager was a Manager, thus an EntityObserver, so it was notified when entities were added/deleted.

So I just moved the responsibilities up! Now all EntityObservers hold a World instance, and all EntityObservers can be processed, initialized and disposed of. EntityManager gets initialized in the World constructor and is added first in the observer update list.

This removes the need for two separate HashMaps/Bags of "systems" and "managers" in the World instance. Now you just add observers to the world, regardless of whether they are "managers" or "systems", essentially unifying the way World operates on them and simplifying the code.

Now, the future!

I still have more things in mind. First, component pooling. This one is much trickier: components are defined by the user, so they can be anything from GPU buffers to sound files, or just a simple 3-float array for positions. Some of these make sense to pool, some don't. The user needs to be able to configure which components get pooled and how.

Also there is the other thing I have on the "Issues" page in the repo: Maven integration. Not that I use Maven, but it will be necessary for integrating dustArtemis with junkdog's entity system benchmarks. Plenty of people use Maven, so I suppose it will be a good learning experience.

That's it for this entry, cya later.

Moved to GitHub!

Posted by , 28 September 2014 - - - - - - · 1,069 views
dustArtemis, artemis, java, ecs and 3 more...
Small update: I moved the repo to GitHub, and added config files.


dustArtemis is a fork of Artemis Entity System, which is a BSD-licensed small Java framework for setting up Entities, Components and Systems.


Well, I moved the repository to GitHub; it's pretty nice for public projects. I'm still getting used to git/GitHub particulars, like releases, tags, slightly different branch management, etc.

Configuration file

If you go to the 'configfile-exp' branch, there are two commits that add the possibility of setting some of the constants used throughout dustArtemis from a configuration file, without changing the sources.

There are plenty of things that are very specific to each project. For example, if you plan to have around 1k live entities in your World instance, does it make sense to initialize entity bags with a size of 16? Not really. You're pretty much guaranteeing that plenty of arrays will be trashed at application startup.

There are also things like the Bag container's fixed threshold for switching from growing at 2 times its capacity to 1.5 times its capacity (previously hardcoded at 2048).

Probably that's not good enough for everyone; maybe you don't mind 2x growth, so you want to set the threshold higher. Now it's possible to specify this in the configuration file.

Hands on...

An example of the contents of a configuration file would be:
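Something along these lines, with hypothetical property names (the original example was a plain key=value file; check the DAConstants sources for the real keys):

```properties
# Illustration only: property names here are made up.
BAG_DEFAULT_CAPACITY=64
BAG_GROW_RATE_THRESHOLD=4096
COMPONENT_TYPES_COUNT=48
```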
That raises the default capacity of subclasses of ImmutableBag to 64 elements from the default 16, raises the grow rate threshold to 4096 from the default 2048, and lowers the component types that will be present to 48 from the default 64.

All constants have minimums (so you can't set one to -2 for example), and reasonable defaults in the case you don't specify them in your config file or just plain don't use a config file.

Config files are stored as plain text, and their relative path (from the application's point of view) has to be set in the "dustArtemis.cfgpath" system property. This uses Java's properties API, so you'd point to your config file like this:
System.getProperties().put("dustArtemis.cfgpath", "myFolder/myConfigFile.cfg");
Remember that dustArtemis constants get loaded when the DAConstants class is loaded. So if you use any other part of dustArtemis before adding that property, you might trigger DAConstants class loading before you intended to (constants will take the default values instead of the ones in your file).

So before doing anything in your project, put that property in Java's system properties if you want dustArtemis to load the constants from your config file.
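As a runnable sketch of that ordering requirement (plain Java, no dustArtemis needed; the config path is just an example value):

```java
public class ConfigPathDemo {
    public static void main ( String[] args ) {
        // Set the config path FIRST, before touching any dustArtemis class,
        // so the DAConstants static initializer sees it when it runs.
        System.getProperties().put( "dustArtemis.cfgpath", "myFolder/myConfigFile.cfg" );
        // DAConstants would read it the same way:
        System.out.println( System.getProperty( "dustArtemis.cfgpath" ) );
        // Only now would you create your World instance and systems.
    }
}
```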

Bear in mind that most of these values aren't limits. Nothing bad will happen if you specify "30" as the component types count and then add 31 different component types to entities. These are just starting values for the most part, so you can avoid annoying array reallocations at application startup.

As always, you can check the sources to see where specifically each constant is used in the framework.


That's all. See ya later!

Updates! Puppies!

Posted by , 20 September 2014 - - - - - - · 915 views
Updates! Puppies! It's been a while since I updated this journal, not because I didn't have anything to write about, though. So, a recap on updates it is.




I kinda lied, no puppies sadly. I got several updates to share though:

Ordered Iteration

The principle of the 'Bag' collection is that it disregards item order. If you remove an item, it doesn't shift elements to fill the open slot; rather, it just puts the last element of the array there. This has the tendency of simply mixing up the element ordering on removals.

There is a good reason for this: copying one element is much less work than shifting whatever's left of the backing array one position to the left to fill the gap.
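To make the tradeoff concrete, here's a minimal sketch of that swap-with-last removal (hypothetical TinyBag class, not the actual dustArtemis Bag):

```java
import java.util.Arrays;

// Minimal sketch of Bag-style unordered removal: fill the hole with the
// last element instead of shifting everything to the left.
final class TinyBag<T> {
    private Object[] data = new Object[16];
    private int size;

    void add ( T item ) {
        if ( size == data.length ) {
            data = Arrays.copyOf( data, data.length * 2 );
        }
        data[size++] = item;
    }

    @SuppressWarnings( "unchecked" )
    T remove ( int index ) {
        final T removed = (T) data[index];
        data[index] = data[--size]; // Move the last element into the hole.
        data[size] = null;          // Drop the dangling reference.
        return removed;
    }

    @SuppressWarnings( "unchecked" )
    T get ( int index ) { return (T) data[index]; }

    int size () { return size; }
}
```

Removal is O(1) at the cost of scrambling element order, which is exactly the behavior described above.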

My solution to entity removal times in original Artemis (basically, a linear search through the entire entity 'Bag' in a system) was simply reducing the linear search. Entities stored the position they were located at in each system, and on removal, the system would just linearly search for itself in the entity's array to grab the index of the entity. This reduces the linear search from possibly thousands of elements to just a few dozen at most.

This still mixes up the contents of the entity 'Bag', so to preserve ordering I decided to implement ArrayList-like behavior in 'Bag' and just use binary search on the entities. Turns out that even with the whole array shifting, ordered iteration of entities is actually faster than whatever gains I had on entity removals with the previous method.
private final void removeFromSystem ( final Entity e ) {
    // Binary search for index.
    final int ei = searchFor( e );
    // Remove entity, shift items to left.
    actives.eraseUnsafe( ei );
    // fastClear, new OpenBitSet method!
    e.systemBits.fastClear( index );
    removed( e );
}

private final void insertToSystem ( final Entity e ) {
    // Binary search for insert position.
    final int ei = -(searchFor( e ) + 1);
    // Make sure actives can hold the entity.
    actives.ensureCapacity( actives.size() + 1 );
    // Insert entity at found index.
    actives.insertUnsafe( ei, e );
    // Update system bits for entity.
    e.systemBits.set( index );
    inserted( e );
}
I wasn't totally convinced of the benefits of ordered iteration at first, simply because the JVM deals with references. So even if I'm retrieving components and entities in order, there is no guarantee they're close in memory. The only way to guarantee locality is to use native buffers.

Say hi to OpenBitSet

BitSets are quite important in Artemis: they tell what component types an Entity has, and what kind of entities an Aspect is interested in. Sadly, the standard JDK BitSet has most of its operations implemented by mutating the BitSet's state. So, if you wanted to check if an Entity had all the components an Aspect wanted, instead of just ANDing the bit sets, you'd need to check bit by bit in an ugly for loop.

Imagine this Aspect check (which used to live in EntitySystem; I moved it to Aspect) being done for all entities, for all systems, for all component changes. That's a lot of checks! And the 'allSet' check is the one most often used.
/*
 * Check if the entity possesses ALL of the components defined in the
 * aspect.
 */
if ( hasAll ) {
    for ( int i = allSet.nextSetBit( 0 ); i >= 0; i = allSet.nextSetBit( i + 1 ) ) {
        if ( componentBits.get( i ) ) {
            // Entity system is still interested, continue checking.
            continue;
        }
        // Aspect is not interested.
        return false;
    }
}
// Aspect is interested.
return true;
So ugly! That "nextSetBit" call isn't much prettier either: it has an inner loop, a bunch of other checks, and it's done for all the components in the Aspect.

I decided to take HPPC's route and retarget a popular bit set implementation to my purposes. The Apache Lucene project has a bunch of pretty general purpose utility classes in its sources; one of them is OpenBitSet, a bit set implementation that exposes the backing long array and also has a bunch of union and intersection tests that don't mutate the bit set's state. Perfect!

Some refactoring was needed to reduce dependencies on other classes, and I was left with just BitUtils and OpenBitSet. I also removed most of the functionality that allowed you to have big-ass bit sets with OpenBitSet. I don't need a 16GB bit set, so plain 'int' indices instead of 'long' indices are fine for dustArtemis.

Now see the new implementation:

/*
 * Check if the entity possesses ALL of the components defined in the
 * aspect.
 */
if ( hasAll ) {
    final OpenBitSet all = allSet;
    /*
     * Intersection bit count between allSet and cmpBits should be the same
     * if the Entity possesses all the components. Otherwise the Aspect
     * isn't interested.
     */
    return all.cardinality() == OpenBitSet.intersectionCount( all, cmpBits );
}
So pretty! Two very simple loops are run: one for the 'cardinality' call and one for 'intersectionCount'. Both use Long.bitCount, which is a very fast JVM intrinsic. And no mutated state at all; allSet and cmpBits are kept as they are.
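For illustration, an intersection count over two backing long arrays boils down to something like this sketch (assuming both bit sets expose word-aligned long arrays; this is not Lucene's exact code):

```java
public final class BitSketch {
    // Count the bits set in both a and b, word by word.
    public static int intersectionCount ( long[] a, long[] b ) {
        int count = 0;
        final int n = Math.min( a.length, b.length );
        for ( int i = 0; i < n; ++i ) {
            // Long.bitCount is a JVM intrinsic (POPCNT on modern x86).
            count += Long.bitCount( a[i] & b[i] );
        }
        return count;
    }
}
```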

General optimizations

There are a bunch of little tweaks here and there. For example, all entities are created through EntityManager, but EntityManager was notified when an Entity was added like any other system/manager. Which meant that when the entityManager.added(entity) method was called, inside there was a check whether the manager could hold the entity; otherwise grow the backing array of entities, copy everything, then add the entity to the manager. Totally unneeded!

Now 'ensureCapacity' is called on the entity Bag when an entity is created, so the Bag can hold the Entity when it's inevitably incorporated into the manager. No more size checks.

There are quite a few places where this idiom of "ensureCapacity first, then do the operations" can save a few checks down the road. Nothing groundbreaking, but it can simplify code quite a bit in some places.

Also, all EntityObservers (ie, managers, systems) had these "added(entity)" or "removed(entity)" methods, which were essentially events. When an entity was added or removed from the world, those methods were called on all systems/managers to see if they had to add/remove the entity. They get notified of any change in the World instance like that.

Now, say that you added 100 entities. For all entity systems and managers, you'd issue an 'added' call for each entity you added. Say that you have 5 managers and 20 systems. That's 2500 'added' calls. In itself this isn't an issue, but it lands on a very special place for the JIT: those are all megamorphic calls. There are 25 different implementations of the 'added' method, so there is no way for the JIT not to make them go through a vtable.

This was a very silly thing really. There are Bags of entities for all these "events": 'added' entities get pushed to the 'added' Bag, 'removed' entities get pushed to the 'removed' Bag, and so on. These all reside in the World instance. What's preventing the World instance from simply passing down the Bag to the EntityObservers? Absolutely nothing. So now World just passes the 'added' Bag to the EntityObservers and they do whatever they want with it inside that call.

private final <T extends EntityObserver> void notifyObservers ( final Bag<T> observers ) {
    final int size = observers.size();
    final T[] obs = observers.data();
    // Call all the event methods on all observers.
    // added, changed, disabled, etc, are all Bags containing entities.
    for ( int i = 0; i < size; ++i ) {
        final T o = obs[i];
        // Only one megamorphic call per event.
        o.added( added );
        o.changed( changed );
        o.disabled( disabled );
        o.enabled( enabled );
        o.deleted( deleted );
    }
}
For our example case from before, instead of 2500 'added(entity)' calls, you'd only have 25 calls to 'added(addedEntities)'. Much better!

And that's all for now...

You can always check the sources out from dustArtemis repository, I try to comment all the commits so you can find short descriptions of all changes in the commit list. Cya!

ComponentManager v ComponentMapper: Dawn of Components

Posted by , 25 July 2014 - - - - - - · 1,142 views
ComponentManager v ComponentMapper: Dawn of Components Do you know in which results page dustArtemis appears if you search for “artemis framework” in Google? Absolutely in none of them! So we're going to celebrate by talking about the ComponentManager class.



(to be fair, if you google "artemis fork" it's there around the eighth result, but I doubt anyone Googles for forks exclusively; spoons on the other hand...)

Components, components, components!

ComponentManager is what it sounds like: it manages components, in a way that isn't very memory efficient but is quite fast for adding, removing and retrieving components.

It holds a Bag for each Component type there is. That means that in a single vector(ish) collection you have all the components of that type.

This makes it rather fast and easy to access components IF you know their index, and it happens that their index into the Bag is exactly the ID number of the Entity that owns the Component. How convenient!

The protagonists

Artemis organizes it in two classes really:

ComponentMapper is what EntitySystems use to access components when iterating over entities. All that is needed is a simple get() to have your component ready to use.
void process ( Entity e ) {
    SomeComponent cmp = someComponentMapper.get( e.id );
    // ...use the component...
}
There, direct index lookup, all the O(1) glory you can muster.

Now, ComponentMapper is just a thin veil over a particular bag of components that, as I described, actually originates from ComponentManager. Since ComponentMapper is used for access, ComponentManager's purpose is mainly the addition and removal of components.
protected void addComponent ( final Entity e, final Component component ) {
    final int cmpIndex = ClassIndexer.getIndexFor( component.getClass(), Component.class );
    initIfAbsent( cmpIndex ).add( e.id, component );
    e.componentBits.set( cmpIndex );
}

protected void removeComponent ( final Entity e, final Class<? extends Component> type ) {
    final int cmpIndex = ClassIndexer.getIndexFor( type, Component.class );
    final BitSet componentBits = e.componentBits;
    // If entity has such component.
    if ( componentBits.get( cmpIndex ) ) {
        componentsByType.getUnsafe( cmpIndex ).removeUnsafe( e.id );
        componentBits.clear( cmpIndex );
    }
}
Adding and removing components is a tiny bit more complex, since what you know at that moment is the component's class, not its index.

All the indices!

Here comes my own addition: ClassIndexer. ClassIndexer does one rather simple thing: given a superclass, it incrementally indexes the provided subclass. So each subclass of Component gets its own incremental index. Since those indices are the ones we use to find the corresponding Component Bag, they can't be just any value, so they start from 0.

This process involves a hash lookup, and in case you use it for something else, ClassIndexer is prepared for multithreaded access (the original idea was to be able to create Entity instances on different threads, but that idea fell through once I started to see all the side effects I'd have to take care of).
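The gist of it can be sketched like this (a hypothetical implementation, not the actual ClassIndexer sources): hand out incremental indices, starting at 0, per superclass "namespace".

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of what ClassIndexer does: each superclass gets its
// own counter, and each subclass gets the next index from that counter.
final class ClassIndexerSketch {
    private static final ConcurrentHashMap<Class<?>, AtomicInteger> COUNTERS =
            new ConcurrentHashMap<>();
    private static final ConcurrentHashMap<Class<?>, Integer> INDICES =
            new ConcurrentHashMap<>();

    static int getIndexFor ( Class<?> type, Class<?> superType ) {
        // First caller for a type claims the next index of its superType.
        return INDICES.computeIfAbsent( type, t ->
                COUNTERS.computeIfAbsent( superType, s -> new AtomicInteger() )
                        .getAndIncrement() );
    }
}
```

Repeated lookups hit the cached index, so only the first call per type pays for the increment.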

It was the same cost in original Artemis (well, a HashMap lookup instead of a ConcurrentHashMap lookup), except Artemis had two separate ways of dealing with indexing for EntitySystems and Components (a specialized inner class and ComponentType, respectively). I just removed that code and made a single point where both get their indices.

I could probably jiggle things around a bit and get rid of it for this particular case (off the top of my head, a HashMap&lt;Class&lt;? extends Component&gt;, Bag&lt;Component&gt;&gt;), but there are a few more places where indices per Component type are used.

After that, it's smooth sailing: access directly by index, clear or set the corresponding component type bit in the Entity on removal/addition respectively (so EntitySystems can know if they're still interested in the Entity), and you're good to go.

Pre-emptive initialization

There is also another small addition of mine here: while fixing a few null pointer exceptions in this class, I chose to eagerly initialize the bags in 'componentsByType' so as not to do null checks every single time.
private final BoundedBag<Component> initIfAbsent ( final int cmpIndex ) {
    final int prevCap = componentsByType.capacity();
    // If type bag can't hold this component type.
    if ( cmpIndex >= prevCap ) {
        componentsByType.ensureCapacity( cmpIndex );
        // Init all the missing bags.
        for ( int i = componentsByType.capacity(); i-- > prevCap; ) {
            componentsByType.setUnsafe( i, new BoundedBag<>( Component.class, 4 ) );
        }
    }
    return componentsByType.getUnsafe( cmpIndex );
}
Those branches are taken very few times, so HotSpot can "prune" them when JITting the methods.

The End... Or is it?

Well, that's it for now. Next entry I'll discuss a bit the issues I've encountered with the "index by Entity ID" approach. Cya!

ComponentManager and its Impact on Modern Society

Posted by , 30 June 2014 - - - - - - · 1,296 views
ComponentManager and its Impact on Modern Society Hi! In this entry I'm going to describe the ComponentManager class and the changes I made to it in dustArtemis, including a few bug fixes.



In the beginning...

Well, we're going to do this method by method. I'll show you the original Artemis ComponentManager method, and then I'll show you the dustArtemis one.

First of all, the general structure of the class remains the same; it only has two fields:

Bag&lt;Bag&lt;Component&gt;&gt; componentsByType, which is a Bag of component Bags. Entities have an ID, and you can retrieve an entity's component by doing componentsByType.get(componentIndex).get(entity.id). That is, component bags are indexed by the component's indices, and in those bags, components are set at the owner Entity's ID.

Does this sound like a huge memory waste to you? It is!

Say that you have one entity in your game that represents the camera, so it has a single instance of a Camera component, Camera components have an index of 5. You only have one camera in your game, and for some reason, the ID of the camera entity is 8002.

This happens: There exists a Bag<Component> in componentByType Bag, at index 5 (Camera's component index), and inside of it, there are all the Camera components in the world. Now, since you have only one Camera component and that one is attached to an Entity with the ID 8002, you'll have a Bag<Component> with enough space to hold at least 8003 components, so the ComponentManager can set the Camera component to the 8002 index.

I haven't solved this, but I'm pretty sure I could work something out by adding HPPC (High Performance Primitive Collections) and using an int -&gt; Object hashmap as a pseudo sparse array. It won't be as fast as direct array indexing, but the current approach is quite wasteful, so eff it.
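That pseudo sparse array idea could be sketched with the JDK's HashMap (HPPC's int -&gt; Object maps would avoid boxing the keys; class and method names here are made up):

```java
import java.util.HashMap;

// Sketch of a component store keyed by entity id instead of a dense array:
// memory scales with the number of components, not with the highest id.
final class SparseComponentStore<T> {
    private final HashMap<Integer, T> byEntityId = new HashMap<>();

    void set ( int entityId, T component ) {
        byEntityId.put( entityId, component );
    }

    T get ( int entityId ) {
        // Null when the entity has no such component; no 8003-slot array needed.
        return byEntityId.get( entityId );
    }

    void remove ( int entityId ) {
        byEntityId.remove( entityId );
    }
}
```

A lone Camera component on entity 8002 then costs one map entry instead of an 8003-element array.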

The good thing is that Entity IDs are recycled, so you can be quite sure you'll only have arrays as big as you have entities alive in your World instance.

To the methods!

Disclaimer: I'll write the examples in K&R style for space reasons; the actual code in the repository is in beautiful Allman style.
  • initIfAbsent( cmpIndex )
final int prevCap = componentsByType.capacity();
// If type bag can't hold this component type.
if ( cmpIndex >= prevCap ) {
    componentsByType.ensureCapacity( cmpIndex );
    // Init all the missing bags.
    for ( int i = componentsByType.capacity(); i-- != prevCap; ) {
        componentsByType.setUnsafe( i, new Bag<>( Component.class, 4 ) );
    }
}
return componentsByType.getUnsafe( cmpIndex );
And That's All You Get!

Yup, that's because the editor sucks donkey balls and decided to erase 80% of the entry when I saved it as a draft. What you saw is what the editor decided to save; the rest is gone. Ironically, when I saved this last part as a draft it worked fine. See you next time I'm in the mood to waste 2 hours of my life again...

EDIT: Great, it also ignores spacing after the code box, so the "And that's all you get!" title is too near the box above it.

New Changes in dustArtemis

Posted by , 26 June 2014 - - - - - - · 1,270 views
New Changes in dustArtemis Hi! In this entry I'm going to review the latest changes in dustArtemis and some thoughts on a potentially big performance issue.



All the changes!

Not terribly interesting, actually. Probably the only commit worth mentioning is the first one.

Are you really sure you want to process this system?

Vanilla Artemis had a kinda silly situation. For each world "tick", only the active systems are processed, more or less like this:
for ( EntitySystem system : systems ) {
    if ( system.isActive() ) {
        system.process();
    }
}
Disclaimer: Using K&R for space reasons, not because I like it.

Now inside the process method, this silly thing happened:
if ( checkProcessing() ) {
    // ...actually process the system...
}
Immediately inside the system, another check happened. Basically, there were two "levels" at which the system could be active. The first one was defined by a simple "active" flag (actually, it was called "passive", but I digress...) that just told the World instance, "Hey dude, process me!".

Now this second check wasn't defined by a simple flag but by an overridden method. So if you inherited from EntitySystem, you had to provide your own checkProcessing method, which just returned true in 95% of the cases.

I understand it had a purpose: in IntervalEntitySystem, the "active" flag was just what it sounded like, but the checkProcessing method was the one that checked if enough time had passed for the system to actually do something.

It seemed like a kinda shoehorned solution to a specific problem, so I just decided to get rid of the checkProcessing method. Moreover, that specific problem is already taken care of by Artemis; just use the begin method.

You're just going through a phase

EntitySystem class provides a few hooks for additional processing beyond the usual "for all entities: do something". The process method actually looks like this:
public final void process (){
	begin();
	processEntities( actives );
	end();
}
Default begin and end methods do nothing; you're free to override them. So, I just added a new boolean flag to IntervalEntitySystem and made the begin method do the time interval calculation to see if it was time for the system to process the entities. Then I just needed to add "if isTime: process entities" to the processEntities method.
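That begin-based interval check might look roughly like this (hypothetical field names, not the actual dustArtemis code):

```java
// Sketch of an interval check done in begin(): accumulate delta time and
// raise a flag when a full interval has elapsed.
class IntervalCheckSketch {
    private float acc;            // Accumulated delta time.
    private final float interval; // How often the system should run.
    boolean isTime;               // Set by begin(), read by processEntities().

    IntervalCheckSketch ( float interval ) {
        this.interval = interval;
    }

    // Called once at the start of process().
    void begin ( float delta ) {
        acc += delta;
        isTime = acc >= interval;
        if ( isTime ) {
            acc -= interval; // Keep the remainder for the next tick.
        }
    }
}
```

processEntities would then just bail out early when isTime is false.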

So, about that performance problem...

That was quite long for a 3 line change in the codebase, right? Well, there is something a bit more interesting: entity removal and modification.

Adding, removing and changing entities entails the following procedures:
  • Notify World instance about the change.
  • Notify all Systems about the change.
  • Actually add/remove the Entity in a system's list of entities.
The second step involves a check method call in all systems. It verifies whether the Entity has all the required components for the System to be interested in it. While the check itself is kinda lengthy, it's quite fast.

Adding entities is quite fast too:
private final void insertToSystem ( final Entity e ){
	actives.add( e );
	e.systemBits.set( systemIndex );
	inserted( e );
}
Add to the list, set a bit in the entity's BitSet, then call the inserted method. In the worst case, the actives backing array gets resized, but that will only happen a few times, mostly at level startup.

Now removal, that's the ugly one:
private final void removeFromSystem ( final Entity e ){
	actives.remove( e );
	e.systemBits.clear( systemIndex );
	removed( e );
}
That actives.remove( e ) call? Fucking. Linear. Search.

This means that if for some reason (say, removing a tag component) you change a bunch of entities so a bunch of systems won't be interested in those entities anymore, they'll get removed from each of the systems' actives arrays by linearly searching for each entity you want to remove.

It works okay for at most a couple hundred entity changes if you're using a good CPU. Now, if you want to change a couple thousand, it won't work.

Test case: I added 200k entities and changed a single component on all of them. It took eight seconds to remove them all from a single system on my Intel i5 2500. And you thought that 100ms spike was bad enough!

Gettin' solutions

Being reasonable, it won't be frequent to add 200k entities and change all of them in a single go, but you will have a couple dozen systems, and you can easily see how the cost would add up. Suddenly, you have to think carefully about removing a component from an entity, never mind if you have to change lots of entities.

The idea behind ECS is that these changes should be possible; flexibility should be king. So there has to be a way for this process to be more efficient. The essence of the actives Bag is that it's an unordered array, so iterating over actives is efficient.

While trying to keep the actives Bag as it is, I thought about a few additional structures that could solve the problem, or at least amortize it a bit:
  • System Knows Best
Simply put a HashMap in System, and map each Entity instance to an index. Insertion and removal would become this:
// Remove entity from the index map.
int i = indexMap.remove( e );
// Remove entity by index (moves the last element into slot i).
actives.remove( i );

// If an entity got moved into slot i.
if ( i < actives.size() ){
	// Update moved entity's index.
	Entity tmp = actives.get( i );
	indexMap.put( tmp, i );
}
// Clear system bit and call removed event.
e.systemBits.clear( systemIndex );
removed( e );
Bag retains no ordering; it implements removal by simply replacing the removed position with the last item in the array. So, if you remove an entity from the middle of the Bag, some other entity will have its index changed to the one you just used to remove the previous entity.

The pros of this: friggin' fast removal. 200k removals? IIRC, time went down to 200ms.

The cons of this: more than double the memory usage. HashMap uses Entry objects for the stuff it stores, so you'd have around 30 additional bytes per active entity in each system.

You will pay one hash computation per addition and two hash computations per removal. For all added/removed entities, for all systems. Always.
  • Entity Knows Best
There are two versions of this one. The first that comes to mind would be to put the HashMap on the Entity instead, so all Entities would know their indices in all the systems they're active in. Problem is, the memory impact would be worse: if you have 100 systems and 100k entities, instead of having 100 HashMaps you'd have 100k HashMaps. Which is bad.
The second version still involves linear search, but to a much lesser degree:

Each Entity would have a Bag of a small [system, index] tuple for each system they're active on. So, when a system removes an Entity, it would work like this:
// Remove the SystemIndexPair which has this system.
SystemIndexPair siPair = e.systems.remove( (pair) -> pair.system == this );
// Retrieve index.
int i = siPair.index;
// Remove entity by index (moves the last element into slot i).
actives.remove( i );

// If an entity got moved into slot i.
if ( i < actives.size() ){
	// Update moved entity's index (find, not remove, so the
	// pair stays in the moved entity's bag).
	Entity tmp = actives.get( i );
	SystemIndexPair tmpPair = tmp.systems.find( (pair) -> pair.system == this );
	tmpPair.index = i;
}
// Clear system bit and call removed event.
e.systemBits.clear( systemIndex );
removed( e );
Pros: Two very small linear searches per removal (as big as the number of systems the entity is active in). No hash lookup for addition like in the previous solution, just a simple Bag.add call.

Cons: It might be as costly as a HashMap in memory terms (a new SystemIndexPair per system each entity is active in).


It seems to me that these solutions aren't an overall win for all cases, more like a bunch of tradeoffs:

It is possible that Entity removal/addition/modification as it is will work best for a small number of entities, the SystemIndexPair solution would work best for a bigger set of entities, and the HashMap solution would work best for an even bigger set.

Well, that's enough writing for today, see you in the next entry!
