Assembling entities in an ECS? Scripting languages?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

11 replies to this topic

#1 TheChubu   Crossbones+   -  Reputation: 7026


Posted 06 April 2014 - 02:37 AM

Hi!

 

I'm working on a little 3D project of mine using an Entity Component System framework (specifically, a modified version of Artemis). It's working fairly well for me, but I'm having some issues with regular "entity assembling".

 

As a note, it's a very "textbook" implementation of an ECS: Entities are just IDs with Components, Components hold just data, and Systems iterate over all Entities that have the Components they're interested in. Everything is held by a "World" instance, which is where you hook up your Systems.

 

So, Entities are assembled out of Components. The example below creates a simple quad with a texture:

Entity e = world.createEntity()
    .addComponent( new Orientation() )
    .addComponent( new WorldTransform() )
    .addComponent( new MVPTransform() )
    .addComponent( new RenderState() )
    .addComponent( quadGeometry )
    .addComponent( new AABSquare( OpsMesh.getMinAABS( quadGeometry.vertexBuffer.view ) ) )
    .addComponent( resourceManager.get( Texture.class, "testTexture.dds" ) )
    .addComponent( Material.TEXTURED_NO_SHADING )
    .addComponent( Tags.Static.VALUE )
    .addComponent( Tags.Textured.VALUE );
e.addToWorld();

After those lines, the entity is already in the world and it will be processed by the systems that are interested in it.

 
Now, this is a very, very simple entity. It's literally a quad floating on the screen, and you can already see that the initialization is quite cumbersome. For example, if the mesh were rendered using indexed drawing, I'd need to add a GeometryIndices component and change Tags.Textured.VALUE to Tags.TexturedIndexed.VALUE. Quite annoying.
 
I was going to ask: how would I manage all of this in a more pleasant way? I already have a static "EntityFactory" with methods like "genSpatial()" which return an Entity with some Components used in spatials. But it seems like a band-aid rather than a proper solution, and it gets quite big after a few variants of spatials are added (dynamic spatial or static spatial? is it textured or not? and so on...).
 
I thought about maybe creating some sort of "Bundle" of components? I'm not sure it would really help.
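For what it's worth, a "Bundle" could be as little as a reusable recipe that adds a fixed group of components, and recipes can be composed so the variants (textured/untextured, indexed/not) don't multiply into factory methods. A minimal sketch, with all names hypothetical (this is not Artemis API; components are stand-in strings just to keep it self-contained):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in entity: just an id-less bag of components for this sketch.
class Entity {
    final List<Object> components = new ArrayList<>();
    Entity add(Object c) { components.add(c); return this; }
}

final class Bundles {
    // Common spatial components every renderable entity needs.
    static final Consumer<Entity> SPATIAL = e -> e
            .add("Orientation")
            .add("WorldTransform")
            .add("MVPTransform");

    // Bundles compose: a textured spatial is a spatial plus texture bits.
    static Consumer<Entity> textured(String texture) {
        return SPATIAL.andThen(e -> e.add("Texture:" + texture));
    }
}

public class BundleDemo {
    public static Entity assemble(Consumer<Entity> bundle) {
        Entity e = new Entity();
        bundle.accept(e);
        return e;
    }

    public static void main(String[] args) {
        Entity quad = assemble(Bundles.textured("testTexture.dds"));
        System.out.println(quad.components.size()); // 3 spatial components + 1 texture
    }
}
```

The upside over a static factory is that each variant is one composed function instead of one more `genXxxSpatial()` method.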
 
After thinking about it, it occurred to me that maybe this kind of work is done in a higher-level language, so adding a scripting language hook and dealing with this there might be a better idea. I have no experience with scripting languages though, so I'm not sure how to get that working (I'm using Java, so I have various choices: Python 2.7, Lua, JavaScript, etc. I'm only barely familiar with Python, though).
 
This also ties in with something I haven't gotten around to yet: scene management. This would involve having some way to define a scene, with all its Entities and what those Entities are composed of, outside the application, so I can load/unload them. Before starting on that though, I'd prefer to have some clean way to assemble Entities first...

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator



#2 Juliean   GDNet+   -  Reputation: 4237


Posted 06 April 2014 - 06:13 AM

One way to reduce the number of components is to rely more on external classes. It seems to me that you are doing an ungodly amount of work in those components - assembling materials & mesh geometry, textures, render state management, etc. Why is all of this a separate component? I used to implement it so that I had a model class, which combined geometry/mesh with material - then there was only a "Model" component, holding exactly one model instance and being responsible for rendering it. Assembly of this - what geometry it has, what textures, etc. - can happen at another point in the program, for example in a config file while loading the level. This already eliminates most of your components. The same could probably be applied to the Orientation/WorldTransform/MVPTransform.

 

To sum up, I wouldn't recommend splitting components too finely; you will end up with a quadrillion of them eventually (think about e.g. a particle system - there would be 100 components due to all the different properties). Create one component per aspect of the game (rendering, physics, ...) and allow customization based on some parameters of that component. Also, don't try to fit everything into your ECS - entities and components are a great way of tying stuff together to form a game, but remember you can, and IMHO should, always rely on external classes for encapsulation and for building reliable background systems, instead of cluttering your systems' logic with low-level rendering stuff, to give an example.
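To illustrate the shape of this (a sketch, not the poster's actual classes - all names here are hypothetical): the externally-assembled object is plain OOP, and the one component the ECS sees just references it plus a couple of per-entity parameters.

```java
// External, plain-OOP rendering object, assembled at load time
// (e.g. from a config file), entirely outside the ECS.
class Model {
    final String geometry;
    final String texture;   // may be null for untextured models
    final boolean indexed;
    Model(String geometry, String texture, boolean indexed) {
        this.geometry = geometry;
        this.texture = texture;
        this.indexed = indexed;
    }
}

// The one component the ECS sees: pure data, a reference plus parameters.
class ModelComponent {
    final Model model;
    String material; // customization lives here, not in extra tag components
    ModelComponent(Model model, String material) {
        this.model = model;
        this.material = material;
    }
}

public class ModelComponentDemo {
    public static ModelComponent texturedQuad() {
        Model quad = new Model("quadGeometry", "testTexture.dds", false);
        // One component replaces Geometry + Texture + Material + Tags.* components.
        return new ModelComponent(quad, "TEXTURED_NO_SHADING");
    }

    public static void main(String[] args) {
        System.out.println(texturedQuad().model.texture);
    }
}
```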



#3 phil_t   Crossbones+   -  Reputation: 5811


Posted 06 April 2014 - 11:59 AM

I agree with Juliean. You've factored your components down too small, and it seems you have duplicate data. I mean, Orientation is part of WorldTransform, and WorldTransform is part of MVPTransform. "View" and "Projection" actually shouldn't exist in a component anywhere, other than one attached to a camera entity. (What if an entity is being rendered from multiple viewpoints, such as if you were rendering a shadow map?) RenderState also doesn't sound like something that should exist in a component, although I'm not sure what it is.

 

Most of the rest of your components could be combined into a "Model" component that has geometry, material, texture references, etc... And since those tend to be the same, the Model component might just point to a Model class that defines all those things (separate from the ECS).

 

In short, you could probably get away with just two components for the scenario you listed: Transform and Model.



#4 TheChubu   Crossbones+   -  Reputation: 7026


Posted 06 April 2014 - 08:11 PM


RenderState also doesn't sound like something that should exist in a component, although I'm not sure what it is.

Oops! That one isn't used anymore. :D I had it to differentiate Entities that had been submitted for rendering (it had two booleans, uploaded and visible), but I take care of that differently now.

 

 

It seems I should have explained what those components do (or rather, what kind of data they hold). I'll try to explain:

 


The same could probably be applied to the Orientation/WorldTransform/MVPTransform.


 

Orientation is part of WorldTransform, and WorldTransform is part of MVPTransform. "View" and "Projection" actually shouldn't exist in a component anywhere, other than one attached to a camera entity.

 

This is a consequence of having no idea what I'm d... errr, my "design decisions". :D

 

Orientation holds 3 floats: yaw, pitch and roll. Those get computed into a rotation matrix and applied to the WorldTransform each frame for Tags.Dynamic objects, and only once for Tags.Static ones (or at least it did, until I looked at it a minute ago and noticed I forgot to set the 'dirty' flag on the transforms. New bug fix, thanks for that!).
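For reference, the yaw/pitch/roll-to-rotation-matrix step can be sketched like this. The composition order (Rz·Ry·Rx, matching the zrot * yrot * xrot mentioned later in the thread) is just one convention; engines differ, and none of these names come from the poster's code:

```java
public class EulerDemo {
    // Build a row-major 3x3 rotation matrix from yaw (about Y), pitch (about X)
    // and roll (about Z), composed as Rz * Ry * Rx. One convention among several.
    public static float[] rotation(float yaw, float pitch, float roll) {
        float cy = (float) Math.cos(yaw),   sy = (float) Math.sin(yaw);
        float cx = (float) Math.cos(pitch), sx = (float) Math.sin(pitch);
        float cz = (float) Math.cos(roll),  sz = (float) Math.sin(roll);
        return new float[] {
            cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx,
            sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx,
            -sy,     cy * sx,                cy * cx
        };
    }

    public static void main(String[] args) {
        // Sanity check: zero angles must give the identity matrix.
        float[] m = rotation(0f, 0f, 0f);
        System.out.println(m[0] == 1f && m[4] == 1f && m[8] == 1f);
    }
}
```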

 

MVPTransform just holds the result of the model * view * projection matrix multiplication for each entity that will be drawn. It's what gets passed to the renderer and uploaded as a uniform (a cbuffer in D3D parlance, I believe).

 

I still don't have any sort of shading (literally the only two materials are Material.TEXTURED_NO_SHADING and Material.COLORED_NO_SHADING). It's all still in its infancy. The "Material" thingy will get revamped eventually, with material parameters stored in .yaml files assembling a map of materials at startup.

 

Now, here is the TL;DR part in which I'll explain why I didn't go with the "Model/Spatial/etc" component approach. Hell, I'll even put titles on it for teh lulz (I'm starting to think I miss writing in my journal).

 

Rendering Steps

 

The renderer has a two-step process to draw things. I'll grab the most annoying one, a textured mesh with indexed drawing:

 

First step: Upload the necessary data.

 

  • id of the Entity (this will identify the instance)
  • Geometry (vertices, tex coords, normals)
  • GeometryIndices (self explanatory)
  • Texture (texture data, width, height, mipmaps)
  • Material (maps to a shader)
  • MVP matrix (uploaded to the shader)

 

The renderer uses a "GLResourceManager" to compose a VAO out of the data if it hasn't been uploaded before (otherwise it just grabs a VAO from the Geometry-VAO map) and assembles a "GLRenderable" mapped to that entity id. If there isn't a RenderBatch for that material, one gets created (not queued for rendering just yet). All we have now is a GLRenderable and the data uploaded to the GPU.

 

Second step: Add the Entity to the render queue, a rough culling system does the job:

 

  • Out of all entities in the world, the cull system assembles a visible set.
  • For each entity in the visible set, the cull system calls "renderer.addToQueue(entityId)".

With that id, the renderer grabs a GLRenderable from the map and adds it to the RenderBatch for that material. The renderer adds the RenderBatch to the queue for rendering if it isn't already there.

 

For each spatial rendering type (Textured, Colored, IndexedTextured, IndexedColored) there is a SpatialSubmitterSystem that grabs the proper components and sends them to the renderer.

 

Drawbacks

 

With a single "Model" component, I'd have to put a bunch of logic in there to say "if model.usesTexture() upload as texture", "if model.hasIndices() upload as indexed", and so on. Given that it's all pretty much barebones, it will probably get much more complex after I start adding either more ways of rendering (say, instancing) or more complex state settings in the renderer.

 

Maybe you can help me with this? If I find a clean way to add renderables to the renderer, it wouldn't be complex to put all the components in a single "Spatial" component. I've seen this process in some open source projects and they just put a 'switch' in the renderer with a 'case' for each variant - around 80 to 100 lines of those and other 'ifs' - which is simply fugly IMO.

 

With this method the renderer is quite simple. Here is the upload method for a textured indexed model:

public void upload ( int id, Geometry geo, GeometryIndices ind, Texture tex, Material mat, CmpMat4f mvp )
{
    // Compose renderable with resources.
    GLRenderable renderable = newRenderable( glResources.getFor( geo, ind ), mat, mvp, glResources.getFor( tex ) );
    // Set render call with texture binding and indexed draw call.
    renderable.renderCall = (r) ->
    {
        tracker.active( texUnits[0] );
        tracker.bind( r.texture );
        RenderCalls.indices.accept( r );
    };
    // Associate entity id with the renderable.
    renderMap.put( id, renderable );
    // Create batch if this material hasn't been seen before.
    createBatchIfAbsent( mat );
}

// And then, to add them to the queue:
public void addToQueue ( int id )
{
    // Get renderable associated with the entity.
    GLRenderable glr = renderMap.get( id );
    // Get batch for the renderable's material.
    RenderBatch batch = materialBatches[glr.material.ordinal()];
    // If no renderable has been added to the batch yet, add the batch to the render queue.
    if ( batch.renderQueue.isEmpty() ) renderBatches.add( batch );
    // Add renderable to the batch's queue.
    batch.renderQueue.add( glr );
}
 
The bad side is what you saw: I need plenty of components to differentiate each case...

Edited by TheChubu, 06 April 2014 - 08:14 PM.



#5 phil_t   Crossbones+   -  Reputation: 5811


Posted 06 April 2014 - 09:45 PM


Orientation holds 3 floats: yaw, pitch and roll. Those get computed into a rotation matrix and applied to the WorldTransform each frame for Tags.Dynamic objects, and only once for Tags.Static ones (or at least it did, until I looked at it a minute ago and noticed I forgot to set the 'dirty' flag on the transforms. New bug fix, thanks for that!).

 

Hmm... if it's applied to WorldTransform each frame, where's the original world transform of the object? Or does WorldTransform contain both (like "main" and "final")? What code needs WorldTransform? If it's just the rendering system, then there isn't really any reason to store it in a component, is there? Just calculate it as needed and submit it to the render queue.

 


MVPTransform just holds the result of doing the model * view * projection matrix multiplications for each entity that will be drawn. It's what it gets passed to the renderer and uploaded as an uniform (cbuffer in D3D parlance I believe).

 

But why does this need to be stored at all? This will be broken if you render the entity from multiple points of view during a frame (e.g. for a shadow map, or split-screen, or whatever).

 

I would just have orientation, position and scale stored in a Component (either 3 components, or all lumped into one, whatever works best for your game), then calculate the world matrix when you submit it to the render queue. Then the code that processes the render queue will use the appropriate View and Projection matrices for the viewport/camera you're currently rendering to.
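The suggestion above amounts to composing World = Translation · Rotation · Scale on demand at submit time, rather than caching it in a component. A minimal sketch with row-major 4x4 matrices and a uniform scale (all names hypothetical):

```java
public class WorldMatrixDemo {
    // Compose a row-major 4x4 world matrix as Translation * Rotation * Scale,
    // computed at submit time instead of stored in a WorldTransform component.
    public static float[] world(float[] rot3x3, float tx, float ty, float tz, float s) {
        float[] m = new float[16];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                m[r * 4 + c] = rot3x3[r * 3 + c] * s; // upper-left 3x3: R * S
        m[3] = tx; m[7] = ty; m[11] = tz;             // translation in the last column
        m[15] = 1f;
        return m;
    }

    // Apply the matrix to a point (implicit w = 1).
    public static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[1] * y + m[2]  * z + m[3],
            m[4] * x + m[5] * y + m[6]  * z + m[7],
            m[8] * x + m[9] * y + m[10] * z + m[11]
        };
    }

    public static void main(String[] args) {
        float[] identity = { 1, 0, 0,  0, 1, 0,  0, 0, 1 };
        float[] m = world(identity, 5f, 0f, 0f, 2f);
        // Point (1,1,1) scaled by 2, then translated +5 in x.
        float[] p = transform(m, 1f, 1f, 1f);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

The MVP for a given camera is then just projectionView * world, computed per viewpoint, which sidesteps the multiple-views problem entirely.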

 

 

 

Be careful using Components to store state that is specific to your systems. Now you've got a two-way dependency - Systems of course depend on Components, but Components also know about Systems, in a sense (since they have data that is only valid when calculated by a specific system). I wouldn't say this is horrible - you have to store that state somewhere - and it's convenient to use a component for that. Maybe it's ok if you have some Components that *only* store system state, and only the systems themselves are aware of these Components (and the systems are responsible for adding/removing those components when the entity is added/removed from the system). Myself, I have a mechanism where I can store system-specific per-entity state in the system itself, so it's not visible outside the system that needs it.
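The "state lives in the system" idea can be sketched like this: per-entity state keyed by entity id inside the system that needs it, cleaned up when the entity leaves. All names are hypothetical, not from any particular framework:

```java
import java.util.HashMap;
import java.util.Map;

// Per-entity state lives inside the system that needs it, keyed by entity id,
// instead of leaking into the component set as a pseudo-component.
public class RenderSystemStateDemo {
    // Internal, system-private state; the rest of the ECS never sees this type.
    static final class RenderState {
        boolean uploaded;
        boolean visible;
    }

    private final Map<Integer, RenderState> states = new HashMap<>();

    // Called when an entity enters this system's interest set.
    public void entityAdded(int id) { states.put(id, new RenderState()); }

    // Called when it leaves; the state is cleaned up with it.
    public void entityRemoved(int id) { states.remove(id); }

    public boolean isUploaded(int id) { return states.get(id).uploaded; }

    public void markUploaded(int id) { states.get(id).uploaded = true; }

    public static void main(String[] args) {
        RenderSystemStateDemo sys = new RenderSystemStateDemo();
        sys.entityAdded(42);
        sys.markUploaded(42);
        System.out.println(sys.isUploaded(42));
    }
}
```

This is essentially what the old RenderState component (uploaded/visible booleans) held, moved where only the renderer can see it.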



#6 phil_t   Crossbones+   -  Reputation: 5811


Posted 06 April 2014 - 09:58 PM


With a single "Model" component, I'd have to put a bunch of logic in there to say "if model.usesTexture() upload as texture", "if model.hasIndices() upload as indexed", and so on. Given that it's all pretty much barebones, it will probably get much more complex after I start adding either more ways of rendering (say, instancing) or more complex state settings in the renderer.

 

Wait, doesn't your renderer already need logic like that in order to draw different things in different ways (e.g. use a different shader permutation to draw something with a texture vs. without)? As for instancing, maybe that's an optimization decision the renderer should make, rather than pushing it out to whoever assembled the components?



#7 TheChubu   Crossbones+   -  Reputation: 7026


Posted 07 April 2014 - 02:22 AM

Hmm... if it's applied to WorldTransform each frame, where's the original World transform of the object?

First I compute a rotation matrix per axis (yaw, pitch, roll), then compose a single matrix out of those (zrot * yrot * xrot), then write that 3x3 matrix into the upper-left 3x3 part of the WorldTransform. Positioning is updated directly on the WorldTransform.

 

So that's the "original" world transform. In another system, the MVPTransform is computed from the projectionView matrix and this WorldTransform, then sent to the renderer.

 

I agree I could put some of these together (I doubt I'll have an Entity with a transform but no orientation, for example).

 

What code needs WorldTransform? If it's just the Rendering system, then there isn't really any reason to store it in a component, is there?

Well, the movement system needs to update it to set the position. It can also be used to scale the mesh.

 

 

 


Wait, doesn't your renderer already need logic like that in order to draw different things in different ways (e.g. use a different shader permutation to draw something with a texture vs without)?

Not beyond the code I already posted. I have 4 overloaded "upload" methods; each one gets used by a particular Submitter system that uploads entities that have particular components (i.e., TexturedIndexedSubmitter looks for Texture and GeometryIndices). They also get differentiated by a Tag component (Tag.Textured, Tag.TexturedIndexed) so that, say, TexturedSubmitter doesn't upload the same entity as TexturedIndexedSubmitter, since the former could otherwise pick up entities from the latter. There are mappings from Material to shader program, and they get loaded if needed when the RenderBatch that uses it gets initialized.

 

Each of the 4 types of submitter systems calls a specific variant of the "upload()" method in the renderer.

 

As I said, I only have two shader programs so far (one for vertex colors, the other for textures); this will change in the future when I add lighting. The renderer is pretty "dumb" as it is - it depends on the systems to "manipulate" it correctly. It's done that way ("exploiting" the ECS) to avoid long chains of switches/ifs to handle each particular case inside the renderer, which aren't nice to code or maintain.

 

I agree that it will fall apart when I start to add more complex features (lighting requires more matrices, shadow mapping too, multitexturing needs various texture units and additional sampler logic, etc). I am all ears for a better way to handle resource uploading and each particular drawing mode. What I have works, but I don't want to end up with 10 "upload()" methods to handle every rendering behavior that might need different data beyond Geometry and an MVP matrix.


Edited by TheChubu, 07 April 2014 - 02:29 AM.



#8 JordanBonser   Members   -  Reputation: 632


Posted 07 April 2014 - 09:13 AM

I don't think you will need a long list of if statements doing it the other way. In my ECS I have a Mesh component that wraps up an internal Mesh object, which has all the information about its materials, sub-meshes, and render method.

 

The ECS should, in my opinion, only go as far as dealing with entities in the game scenario; the underlying modules for handling things like rendering, physics and sound should still use a classic OOP approach, as it suits them best.

 

So you can have a Mesh object in the rendering part of your code that knows how it is supposed to get rendered, using a derivation of a "RenderMethod" class. The renderer can then just loop through all instances of a Mesh and tell each one to render.

 

The Mesh component can then change any values of the Mesh object that it is supposed to, by just exposing certain methods of the internal Mesh class.

 

The way I see it, you should be able to rip out the Entity Component System and replace it with a normal hierarchical entity system, and still be able to do all the key features (i.e. physics, rendering, audio) without having to create anything but new entity classes.



#9 TheChubu   Crossbones+   -  Reputation: 7026


Posted 11 April 2014 - 08:57 AM

Sorry for the late reply.

 

Wouldn't that defeat the purpose of an ECS? I mean, having logic leaking into the components?

 

In my system currently, the actual "renderables" - i.e., where the renderer stores the info it needs to render something - don't escape from the renderer. The renderer internally ties the incoming id to a renderable, which keeps it pretty tidy.




#10 Juliean   GDNet+   -  Reputation: 4237


Posted 11 April 2014 - 09:29 AM


Wouldn't that defeat the purpose of an ECS? I mean, having logic leaking into the components?

 

Having components with no logic, with the logic living in systems instead, is only one possible implementation of ECS. While I recall it being the newer one, it is also possible to have an implementation where the components have some/all of the logic - see this Gamasutra article. If you want to stay with the logic-less style of components, you can still have the components expose multiple attributes/variables/parameters, either publicly or via setters/getters. Either you can expose the renderable class directly, or you just have attributes that map to its properties. E.g., instead of having a Material.TEXTURED_NO_SHADING component, your renderable component would have an attribute called "material" that can be set to either of those possible material values.
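Concretely, the "material as an attribute" idea collapses the tag components into plain fields on one renderable component; a tiny sketch with hypothetical names:

```java
// Material variants become enum values rather than separate components.
enum Material { TEXTURED_NO_SHADING, COLORED_NO_SHADING }

// One logic-less component; what used to be Tags.Textured / Tags.TexturedIndexed
// is now just data that the rendering system inspects.
class RenderableComponent {
    Material material = Material.COLORED_NO_SHADING;
    boolean indexed;
}

public class MaterialAttributeDemo {
    public static void main(String[] args) {
        RenderableComponent r = new RenderableComponent();
        r.material = Material.TEXTURED_NO_SHADING; // set, not encoded as a component
        r.indexed = true;                          // replaces the TexturedIndexed tag
        System.out.println(r.material);
    }
}
```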



#11 phil_t   Crossbones+   -  Reputation: 5811


Posted 11 April 2014 - 11:15 AM


Wouldn't that defeat the purpose of an ECS? I mean, having logic leaking into the components?

 

There are a number of benefits to removing logic from components: it makes them more re-usable, and it prevents them from having dependencies on each other.

 

It also allows you to have external code that reasons about them as a group. For instance, if all your rendering code does is call a Render method on every entity's Renderable component, then it has no way of intelligently sorting things by material, efficiently culling things, or changing the underlying rendering mechanisms (forward rendering vs. deferred rendering, etc.).

 

JordanBonser's case isn't quite like that. There is an additional abstraction, since the component just contains a Mesh object, and that is the one with logic. But having a Render method on Mesh still has the same issue as above. In my opinion it would be better just to have the Mesh object expose a material, vertex buffers, etc., and then have external rendering code that can do with those as it pleases.



#12 TheChubu   Crossbones+   -  Reputation: 7026


Posted 12 April 2014 - 03:25 PM

So what you're basically saying is to create a single Mesh component, and put the data from the Geometry, GeometryIndices, WorldTransform, Material and probably Texture components in there.

 

But then, wouldn't I have something like:

 

renderable = new Renderable();

if ( mesh.hasTexture )
    renderable.texture = createTexture();

if ( mesh.hasIndices )
    renderable.indices = createIndices();

And so on for each attribute that may or may not be there?

 

The thing about fine-graining components is that the Submitters only upload the kind of entity they're interested in. So it's a single upload() call with the data and that's it. The Submitters call different upload() methods according to the entities they're interested in; TexturedSubmitter, for example, uses the upload() that has a texture parameter.






