Loading and storing game assets in an Entity Component System

6 comments, last by CC Ricers 11 years ago

I am trying my hand at my first entity-component system after developing a newfound love for the concept while reading the recent GameDev article (the other articles I've read on the subject didn't do a good enough job of making things clear as an introduction). It's pretty clear to me now how an aggregate of components can define an object's behavior. But since components are mostly data containers, I assume it should be a system's responsibility to load their data from the file system, correct? And should it be the same system that acts upon the component on every update?

For instance, if I have a Sprite component that holds a texture for the sprite, and I want to load this when I need the sprite, I picture having a GraphicsSystem with a Load function to prepare the appropriate Entities it needs to work on.


Some pseudocode would go like this:


// Main game code

GraphicsSystem graphics;

LoadGameContents()
{
    GameObject objectWithSprite = new GameObject();

    objectWithSprite.AddComponent("SpriteComponent");
    objectWithSprite.Component("SpriteComponent").textureName = "texture.png";

    graphics.LoadAndRegister(objectWithSprite);
}

// In SpriteComponent

String textureName;
Texture spriteTexture;

// In GraphicsSystem

List<GameObject> graphicsObjects;

LoadAndRegister(GameObject obj)
{
    if (obj.HasComponent("SpriteComponent"))
    {
        Texture spriteTexture = LoadTextureFromDisk(obj.Component("SpriteComponent").textureName);
        obj.Component("SpriteComponent").SetTexture(spriteTexture);
        graphicsObjects.Add(obj);
    }
}

So in this example the GraphicsSystem takes care of the loading, which would be called only when the game or a game state is initialized. The same GraphicsSystem would also draw the sprites from its list of GameObjects to the screen in a different function. The texture data is stored in the SpriteComponent.

Is this a common way to store and load assets in a game using Entity Components? I've only seen Entity Systems discussed in terms of the update loop, but I was wondering about their use during initialization.

New game in progress: Project SeedWorld

My development blog: Electronic Meteor


I certainly wouldn't have the system load its components. For one, you would now tie together two aspects of the entity/component system: your system would not only act on the data of the entities, but also load them, which not only violates the Single Responsibility Principle, but also couples entity and system together in an awkward way - you are basically handing the system one specific entity and modifying it. That's not how systems (should) work. Systems should act upon entities based on their components, but shouldn't care how those components actually get put together. The moment you hand a system one specific entity, you're somewhat defeating its purpose, in my opinion.

Also, I'm not sure how others handle this, but I wouldn't suggest anything as generic as a single graphics system. In my approach, for example, I currently have a deferred render system, a deferred lighting system, a bounding volume renderer, and an octree visualisation. I consider this a good design because I can easily switch or disable parts of the pipeline without directly affecting the other systems. This brings me to my next point: multiple systems might work on the same set of data from a game object - which one should then be assigned to load it? Have either one of them load all of the data? Or have each system load the data it needs most? Maybe have both load it, just in case? And what if you wanted to disable or replace one of those systems? It would work, but would probably cause a lot of duplicate code.

My main point is that it shouldn't be the responsibility of the system to actually load things. Loading should be a separate task: you'd have some sort of texture storage class with loading functionality. Then again, does your sprite component really need a pointer to the texture? Isn't the string enough to identify it? Shouldn't you have access to the previously loaded/stored textures anyway when finally rendering the sprite? I'd suggest you load the textures up front, practically independent of the systems. You might emit an event to request a texture load if a sprite happens to change its texture due to a system. Then, when you actually render the sprites, you can pull the texture out of the storage and use it.

That's at least how I do it (or would do it); hope this gives you a few ideas.
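The texture storage class suggested above could be sketched roughly like this. This is a minimal C# sketch under my own assumptions: `TextureStore`, `GetOrLoad`, and the stand-in `Texture` class are illustrative names, not from any particular engine.

```csharp
using System.Collections.Generic;

// Stand-in for the engine's texture type.
class Texture
{
    public string Name;
    public Texture(string name) { Name = name; }
}

// Minimal texture storage: sprite components keep only the file name,
// and this class owns the actual texture objects so nothing loads twice.
class TextureStore
{
    private readonly Dictionary<string, Texture> cache = new Dictionary<string, Texture>();
    public int LoadCount; // how many "disk" loads actually happened

    // Returns the cached texture, loading it on the first request only.
    public Texture GetOrLoad(string textureName)
    {
        Texture texture;
        if (!cache.TryGetValue(textureName, out texture))
        {
            texture = LoadFromDisk(textureName);
            cache[textureName] = texture;
        }
        return texture;
    }

    // Placeholder for real file I/O.
    private Texture LoadFromDisk(string textureName)
    {
        LoadCount++;
        return new Texture(textureName);
    }
}
```

With this, systems (or the renderer) only ever hold the texture name, and repeated requests for the same name return the same instance.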

In my case, the component just has an identifier for the texture. The system, or systems, in charge of rendering the entity just look up the actual texture using the identifier in a global texture-manager kind of thing. (The textures could be lazy-loaded or pre-loaded.)

So are you suggesting that Entities be in charge of loading their own content for their Components? Or would you move it to a System that is exclusively in charge of loading a specific kind of content, in order to conform to the SRP? I thought maybe you could have special Systems that only do loading tasks.

Then again, does your sprite component really need a pointer to the texture? Isn't the string enough to identify it? Shouldn't you have access to the previously loaded/stored textures anyway when finally rendering this sprite?

The Sprite component has no pointer (in actuality I am using C#, so these are reference types managed by the garbage collector), and this Texture object is how you would access the texture data; it's only present in the Sprite component. I guess what you are saying is that the textures should be pooled someplace else, like in the texture storage class.

I already use lazy loading to check whether a vertex buffer exists or is dirty before rendering a mesh, so it looks like the same would apply to textures.


So are you suggesting that Entities be in charge of loading their own content for their Components? Or would you move it to a System that is exclusively in charge of loading a specific kind of content, in order to conform to the SRP? I thought maybe you could have special Systems that only do loading tasks.

I would much rather suggest making loading completely independent of all the entities, systems, and components. I interpret the notion that components are "data containers" to mean not that they contain actual data in the form of textures, sprite objects, vertex objects, and so on, but mere attributes. Be it a position, a texture's file name, or - if you want a "strong" connection to some specific object, like a program-generated texture - a custom identifier such as an integer ID or a string name, but never the object itself. Think of components as attribute containers - there is no need to load anything, at least not on creation. I see three alternatives:

- Have the render system load the resource when drawing the sprite, like I said before. Since you don't want the same texture loaded multiple times, use a storage class as suggested.

- Add a middle layer between the systems and the actual rendering, to which you would simply pass the position and texture name, along with any other attributes you might want. Think of it as a sort of render queue - its main task would be loading textures, but other features could be implemented there too.

- Forget lazy loading - loading things only when they are first accessed - and instead load the textures for your existing entities on level load. Then, whenever one system changes a texture, it would send an event/message to whatever performs the loading - which, again, might not be a system at all, but an external class, as suggested, notified either via an event or directly through a pointer/reference held by the system.

In each of these implementations the sprite component would contain only a string, and you would pull the texture out by its file name when you render it. If you were desperate for performance, you could introduce some sort of cache component.
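The "string-only component" idea could look roughly like this in C#. The component and system names here are illustrative, and the texture store is stubbed as a plain dictionary to keep the sketch self-contained:

```csharp
using System.Collections.Generic;

// Attribute-only component: just the texture's file name, no texture object.
class SpriteComponent
{
    public string TextureName;
    public int X, Y;
}

// Hypothetical render system: looks textures up by name at draw time and
// leaves loading/ownership to a separate store (stubbed here as a dictionary
// mapping file names to already-loaded texture data).
class SpriteRenderSystem
{
    private readonly Dictionary<string, string> textureStore;
    public readonly List<string> DrawCalls = new List<string>();

    public SpriteRenderSystem(Dictionary<string, string> textureStore)
    {
        this.textureStore = textureStore;
    }

    public void Draw(SpriteComponent sprite)
    {
        // Pull the previously loaded texture out of storage by its name.
        string texture = textureStore[sprite.TextureName];
        DrawCalls.Add("draw " + texture + " at " + sprite.X + "," + sprite.Y);
    }
}
```

The system never loads anything itself; if a texture name is missing from the store, that is where the event/message to the loader would be emitted instead.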

The Sprite component has no pointer (in actuality I am using C#, so these are reference types managed by the garbage collector), and this Texture object is how you would access the texture data; it's only present in the Sprite component. I guess what you are saying is that the textures should be pooled someplace else, like in the texture storage class.

Sorry, I got that wrong; I haven't programmed in C# myself. But it seems you still got what I meant...

I would much rather suggest making loading completely independent of all the entities, systems, and components. I interpret the notion that components are "data containers" to mean not that they contain actual data in the form of textures, sprite objects, vertex objects, and so on, but mere attributes. Be it a position, a texture's file name, or - if you want a "strong" connection to some specific object, like a program-generated texture - a custom identifier such as an integer ID or a string name, but never the object itself. Think of components as attribute containers - there is no need to load anything, at least not on creation.



Okay, that's a new way to look at it. I considered components able to hold literally any type of data - though it seems they are used more as "lightweight" containers. Fortunately, in XNA part of the storage class problem is already solved: you can call Load() on the same resource multiple times from the same instance of the ContentManager without loading from disk again, because subsequent calls just return the reference from the cache. I tend not to do that anyway.

My render system - which I built before I started thinking about components and entities - is already broken up into similar parts. I have one renderer responsible for making the G-Buffer, another for the lighting pass, another for post effects, and so on. Each stores its own effects relevant to its rendering step. So in adapting that kind of setup, I could just call these rendering systems in the proper order, acting upon the Entities they need to draw. (Ah, now I see why some code samples called their quad class a QuadRenderComponent - when many rendering functions are in play, those that need to draw screen-space quads can look for this component.)



Okay, that's a new way to look at it. I considered components able to hold literally any type of data - though it seems they are used more as "lightweight" containers.

Of course there are many different ways to organize your components. Older entity/component systems didn't even have systems; logic was handled by the components and data was stored inside the entity. It just appears most flexible to me when the components are used to represent attributes - since we already have systems to handle them, there is no need to store anything else in the components.


My render system - which I built before I started thinking about components and entities - is already broken up into similar parts. I have one renderer responsible for making the G-Buffer, another for the lighting pass, another for post effects, and so on. Each stores its own effects relevant to its rendering step. So in adapting that kind of setup, I could just call these rendering systems in the proper order, acting upon the Entities they need to draw. (Ah, now I see why some code samples called their quad class a QuadRenderComponent - when many rendering functions are in play, those that need to draw screen-space quads can look for this component.)

That seems to be what I've been talking about - and it offers many possibilities. For example, I represent lights for my light rendering system as entities, where the light type is determined by its components: position and direction (those two are shared with "normal" entities), plus light-specific attributes as their own components, such as diffuse and specular. For my level editor, as well as for debug mode, I'd attach a bounding volume component to the light. My debug bounding renderer would then choose the fitting mesh from a set of available volume meshes, using this component's data such as type and size. The possibilities are (almost) endless. Obviously this is somewhat a violation of separation of concerns (rendering/lighting + collision), but since all we store is data in the form of attributes, each system will only know about what it actually needs to.

Systems might even interpret data differently - consider the light with the position and bounding data again. If we made the bounding component store an actual bounding object, we'd either have duplicate data (since the light would now have both a position component and the position again inside the bounding volume), or, if we left out the position component, we'd need to extract the position from the bounding volume, and the light rendering system would then need to know what a bounding volume is. Not that it hurts much, but I wouldn't consider it good design. Hopefully this makes the benefits of the modular component approach even clearer.

So, extending this into the component approach, here are my current systems and how they act upon different objects. All of them except the Quad and Post-Process renderers also use a Transform component.

Basic Forward Renderer -> uses Light, Camera, and Geometry components

Custom Forward Renderer (provide your own shader) -> Light, Camera, and Geometry

Debug Renderer -> Camera, Bounding Volume

---

G-Buffer Renderer -> Camera and Geometry

Depth Map Renderer -> Camera and Geometry

Light Pass Renderer -> Light, Camera, and Quad

Final Pass Renderer -> Quad

---

Post-Process Renderer -> Quad

A proper mesh object would have Transform and Geometry.

Camera objects have a Camera component, of course, containing the view and projection matrices. A Transform component for the World matrix is something I might consider, but then cameras would not use the scaling part.

A screen-space Quad is just a Quad; it would not need a Transform, as the renderer does the job of positioning it on the screen.

Light objects are only a Light component for now. The reasoning for not having a Transform is that most lights don't need a lot of data to represent them: point lights have a size and position but no rotation, and directional lights have neither position nor scale.

Geometry contains all the information about the textures within the model (embedding them is an offline process). Same with Quad. I don't know if I would separate out the textures, as I think that would make it too generic for now. I haven't started thinking about other kinds of rendering, like blending transparent objects after the G-Buffer, restoring depth, etc.

I guess I'm thinking out loud here. My architecture is actually set up a lot like this already, but the model and camera classes have gotten too big for my liking. It looks like my code would benefit from CES in organizing the data and making it easier to maintain.
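The renderer-to-component table above boils down to each system filtering entities by the set of components it requires. A minimal sketch of that filtering, with string-keyed components as in the earlier pseudocode (the `EntityFilter` helper is an illustrative name, not from any framework):

```csharp
using System.Collections.Generic;
using System.Linq;

// An entity is just a bag of components, keyed here by type name.
class GameObject
{
    private readonly HashSet<string> components = new HashSet<string>();
    public void AddComponent(string name) { components.Add(name); }
    public bool HasComponent(string name) { return components.Contains(name); }
}

static class EntityFilter
{
    // Returns the entities that carry every component a system requires,
    // e.g. the Debug Renderer above would ask for Camera + Bounding Volume,
    // and the G-Buffer Renderer for Camera + Geometry.
    public static List<GameObject> With(IEnumerable<GameObject> entities, params string[] required)
    {
        return entities.Where(e => required.All(e.HasComponent)).ToList();
    }
}
```

Each renderer in the list would then run `EntityFilter.With(...)` for its own component set before drawing, which is what lets the same light or mesh entity be picked up by several systems at once.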


