
Member Since 14 Oct 2009
Offline Last Active Apr 09 2016 08:44 PM

#5264982 Need help with issues I encountered while writing my ECS

Posted by on 05 December 2015 - 01:00 AM

Also, I was wondering how you could make use of contiguous memory when a system makes use of sets of different types of components as this seems to be the case for most systems, especially if you slice the components into smaller pieces?


In complex situations where a system requires multiple components to perform its job, I tend to side with the caching approach.  Basically, the system uses the notification callback when entities are added, changed, or destroyed to manage an internal list, a list I prefer to call a node list.  How and when you elect to transfer component data to the node list is entirely dependent on how the system must update in accordance with your game loop.  You may need to use a command pattern to delay the update until the system's update loop, or it may be safe to immediately update the node list directly.


A render system, for example, may require a Transform and a Renderable.  As entities are added, changed, or destroyed, the render system maintains a list of RenderNode instances that cache the transform and renderable data from the components.  The render system's update loop iterates over the RenderNode list rather than the components themselves to maximize cache friendliness.


Nothing says a system must maintain a single node list either.  In fact, it really should be a system detail that dictates how many node lists it must maintain in order to efficiently perform its update phase.  Multiple lists are often used to avoid if/else branches inside the loop, which can be costly for cache friendliness.
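To make the node-list idea concrete, here's a minimal sketch.  All names (RenderNode, Transform, Renderable, the callback signatures) are illustrative assumptions, not from any particular engine:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical component data; field names are illustrative.
struct Transform  { float x = 0, y = 0, z = 0; };
struct Renderable { std::uint32_t meshId = 0; };

// Cached per-entity node the render system iterates over.
struct RenderNode {
    std::uint32_t entityId;
    Transform     transform;   // copied from the Transform component
    Renderable    renderable;  // copied from the Renderable component
};

class RenderSystem {
    std::vector<RenderNode> mNodes;  // contiguous, cache-friendly list
public:
    // Notification callback: entity gained both Transform and Renderable.
    void OnEntityAdded(std::uint32_t id, const Transform& t, const Renderable& r) {
        mNodes.push_back({id, t, r});
    }
    // Notification callback: entity no longer matches; drop its node.
    void OnEntityRemoved(std::uint32_t id) {
        for (std::size_t i = 0; i < mNodes.size(); ++i) {
            if (mNodes[i].entityId == id) {
                mNodes[i] = mNodes.back();  // swap-and-pop keeps the list packed
                mNodes.pop_back();
                return;
            }
        }
    }
    std::size_t NodeCount() const { return mNodes.size(); }
    // Update iterates the packed node list, not the component stores.
    void Update() { for (const RenderNode& n : mNodes) { (void)n; /* draw */ } }
};
```

The swap-and-pop removal keeps the vector contiguous at the cost of iteration order, which a render system typically doesn't care about.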

#5237623 Entity Component System architecture for a 4X game

Posted by on 29 June 2015 - 10:50 PM

For now, it works. The problem is that all entities belong to a global entityManager and I can't find a way to split them between several player sides (think of "empires", "races" or whatever). I have a gameInput system that gives orders ("build a factory on this planet", "move this vessel"). But I can't see a clean way to restrict those orders to a set of entities that would represent a player side. Thus, I have no way of telling if an entity belongs to a given empire.


I thought I could have several entityManagers, each one handling an empire's entities. Or have an "empire" component that would hold a list of entities belonging to that empire. But none of these solutions appeals to me.


An entity manager generally doesn't exist more than once, at least not within a given simulation.  So unless we're splitting hairs over terminology here, I certainly would not advocate such a complex setup with multiple EntityManagers, because if those entities need to interact with entities of another EntityManager, you'll have a bit of work to make that happen.


I would split your problem into two layers.


First, create an empire system that acts as a broker for all entities that exist in a particular empire.  When an empire gets created, this system knows about it.  When an entity is spawned with an empire component that references a specific empire, this system associates that entity with the given empire.  Any communication among entities within an empire could be funneled through this system, since its goal is to act as a broker.


Next, I would have another system that sits on top of the empire system and acts as a mapper between a Player and an empire or league of empires.  This way, if you decide to allow a player to control multiple empires, this system can coordinate that and handle the relationship to a specific player instance.
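A rough sketch of those two layers might look like this.  The class and method names (EmpireSystem, PlayerEmpireSystem, CanIssueOrder) are all hypothetical, just to show how an order could be restricted to entities a player's empires own:

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <vector>

using EntityId = std::uint32_t;
using EmpireId = std::uint32_t;
using PlayerId = std::uint32_t;

// Broker for all entities that belong to a particular empire.
class EmpireSystem {
    std::map<EmpireId, std::set<EntityId>> mMembers;
public:
    void CreateEmpire(EmpireId empire) { mMembers[empire]; }
    // Called when an entity with an empire component is spawned.
    void OnEntitySpawned(EntityId entity, EmpireId empire) {
        mMembers[empire].insert(entity);
    }
    bool Belongs(EntityId entity, EmpireId empire) const {
        auto it = mMembers.find(empire);
        return it != mMembers.end() && it->second.count(entity) != 0;
    }
};

// Maps a player to the empire (or league of empires) they control.
class PlayerEmpireSystem {
    std::map<PlayerId, std::vector<EmpireId>> mControlled;
public:
    void AssignEmpire(PlayerId player, EmpireId empire) {
        mControlled[player].push_back(empire);
    }
    // An order is valid only if the target belongs to an empire the player controls.
    bool CanIssueOrder(PlayerId player, EntityId target, const EmpireSystem& empires) const {
        auto it = mControlled.find(player);
        if (it == mControlled.end()) return false;
        for (EmpireId e : it->second)
            if (empires.Belongs(target, e)) return true;
        return false;
    }
};
```

The gameInput system would then consult CanIssueOrder before accepting a "build" or "move" command, without either layer knowing how the other stores its data.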


If you keep your system contracts clean, you should be able to change how an empire works without impacting its relationship to the player, and vice versa.


A little bit of abstraction can go a long way toward keeping code clean and easy to maintain, while also allowing for future growth in game concepts.  It's just important not to take it to the extreme.

#5201858 Movement system low or high level in ecs design

Posted by on 04 January 2015 - 10:36 PM

The movement system must change the position component, and the render system requests the position and renders the entity.

There are two approaches you can consider for replicating information to your render system, and a lot of it depends on how coupled you want your code to be.


The first approach would be to decouple your entity system and rendering system entirely.  There are lots of benefits to this approach, as it will allow you to easily replace your rendering engine with another, but it does come with its own set of concerns.  To decouple the two entirely, you would use a command queue where your ECS systems emit commands as they perform various operations.  These commands are placed into the queue, and during the render system's update phase it parses those commands and performs the necessary rendering operations.  It's important that all pertinent information be included in the command to avoid the rendering system needing to query the ECS at all.
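A minimal command-queue sketch might look like the following.  The DrawCommand fields and the Drain method are assumptions for illustration; the key point is that each command is self-contained, so draining the queue never touches the ECS:

```cpp
#include <cstdint>
#include <vector>

// A self-contained draw command; carries everything the renderer needs
// so it never has to query the ECS.
struct DrawCommand {
    std::uint32_t meshId;
    float x, y, z;  // world position baked in at emit time
};

class RenderCommandQueue {
    std::vector<DrawCommand> mCommands;
public:
    // ECS systems call this as they perform their operations.
    void Emit(const DrawCommand& cmd) { mCommands.push_back(cmd); }

    // Called during the render system's update phase: consume and clear.
    // Returns how many commands were executed.
    template<typename Fn>
    std::size_t Drain(Fn&& execute) {
        std::size_t n = mCommands.size();
        for (const DrawCommand& cmd : mCommands) execute(cmd);
        mCommands.clear();
        return n;
    }
};
```

Because the queue is the only shared surface between the two sides, swapping the renderer means reimplementing only the code that executes DrawCommands.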


The second approach would be to live with the fact that there will be some coupling.  In this case, you would have a specialized ECS system or game loop step that runs after you have completed your logical updates; it would query various entities and replicate the necessary information from the ECS into the render system's scene manager.  This system acts as a wrapper around Irrlicht's "render/update" call, as it performs the various scene updates and then renders a single frame.


Both approaches have advantages and disadvantages and you can easily change between one or the other.  Pick one that makes the most sense for now and move on.  You can always come back later and change it and improve upon it as the need arises.

#5181446 Did I begin learning late?

Posted by on 18 September 2014 - 10:44 PM

Late?  Not even close.  In fact, I'd encourage you not to forget to be a kid either.  


Lots of times, avid programmers who start at a young age find themselves buried in learning all this information at our fingertips, and we forget to be kids, have fun, and do the stuff kids do.  One day, that "me" time won't be as prevalent as responsibilities take hold, you start a family, and you begin in the workforce.


So cherish your time wisely.  If you want to learn programming, networking, or whatever, do it for you and not anyone else.  Do it because you want to, not because you feel compelled to prove your self-worth to someone else.  But as with anything, moderation; and don't forget to have fun along the journey!

#5178284 Correct Term For An Operating Systems Core API

Posted by on 05 September 2014 - 06:38 AM

A very general term might be Platform API.  If you've ever looked at libraries or commercial products developed for multiple systems such as Unix, Windows, consoles and so forth, the wrapper classes that abstract the various systems are typically part of a Platform library that exposes those systems as a unified API to the remainder of the codebase.

#5135214 Implementing an Entity Component System in C++

Posted by on 27 February 2014 - 05:03 PM


Ok, so I wonder how I should implement the systems. Since there is no reason to instantiate them (it would make no sense), the methods should be static, don't you think? Or better, I could use a singleton to make sure there is only one instance of each system.


Honestly, introducing singletons or anything static into the mix is a recipe for trouble.  You're only making the solution more rigid and brittle at the same time, which will leave it inflexible and hard to maintain long-term.


Additionally, systems should be instantiated because they'll likely need to maintain internal state.  Avoiding singletons and static state also means you can easily run two copies of a system on different entity manager data sets or different game states, and perform multiple simulations in parallel or sequentially without any trouble.


Still, I hesitate to implement it in my game for the reasons below:
I like my architecture to be clear, not to be a mix of several design patterns which are supposed to be "contestants" (e.g. Map would be a traditional class whereas Character would be an entity). It's quite confusing I think.
With the OOP approach, the "skeleton" of the game is well defined by the classes: you read the Character class and you know everything about it. Whereas with the ECS approach, a Character is divided in several files (1 entity, x components and x systems) and you don't know where it all comes together. However, I agree the code is more modular with an ECS.
So I think for now I'll stick with the "old" OOP approach. I'm sure it can work flawlessly as a lot of games don't use an ECS and they work well.


A clear architecture has nothing to do with the design patterns it uses.  In fact, an architecture tends to be cleaner when the right design pattern is chosen for the problem at hand rather than shoehorning in something that doesn't fit due to some bias or other factor.  Opting to use design pattern A for part of a problem and design pattern B for another portion, with some design pattern that marries the two, is actually very commonplace in programming, and in my experience it generally carries considerably more benefits than consequences.


I prefer to consider that both a Map and the Player are standalone classes pertinent to the game engine, core classes if you will.  I then give the Player class an unsigned integer property that is basically the identifier of some game object in the game object system that represents the actual player.  The benefit here is that if the engine operates on the exposed API of the Player class, the engine doesn't care whether it's an entity or not, nor does it care about the components which make up the player.  With all that abstracted, you can change implementation details of the Player with minimal impact to the engine/game itself.


And as you can see, such an approach does follow your idea of a Character class that knows everything about itself.  The only difference is that rather than the state of a Character being stored in that class specifically, the Character class queries various systems to get its state based on the method call invoked.
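A tiny sketch of that facade idea, with a hypothetical HealthSystem standing in for a real component store:

```cpp
#include <cstdint>
#include <map>

using EntityId = std::uint32_t;

// Stand-in for a component store; a real ECS would own this data.
struct HealthSystem {
    std::map<EntityId, int> health;
    int Get(EntityId id) const {
        auto it = health.find(id);
        return it == health.end() ? 0 : it->second;
    }
};

// Core engine class: holds only the entity identifier and queries
// systems on demand, so the engine never touches components directly.
class Player {
    EntityId mEntityId;
    const HealthSystem& mHealth;
public:
    Player(EntityId id, const HealthSystem& health)
        : mEntityId(id), mHealth(health) {}
    EntityId GetEntityId() const { return mEntityId; }
    int GetHealth() const { return mHealth.Get(mEntityId); }  // state lives in the ECS
};
```

The engine calls `player.GetHealth()` and stays oblivious to whether that value comes from a component, a plain member, or somewhere else entirely.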


One of the biggest reasons why these systems are great is the focus on data-oriented design.  You store data that you plan to operate on at the same time together in logical chunks, utilizing cache-friendly operations and thus boosting performance.  Because you're grouping data-oriented things together and decomposing state, you also gain the benefit that it becomes easier to piece features together and morph object state from A to B.  Of course, all this implies a different mindset: you're updating game state in many tiny stages throughout a single update tick for any given object.


But nothing above says you cannot use ECS in conjunction with OOP.  Both are wonderful tools that can and do work well together if you follow the simple rules of programming: separation of concerns, the single responsibility principle, and data-oriented design goals.


#5127729 Need help with effects system

Posted by on 31 January 2014 - 08:12 AM

Just some thoughts....




Sounds reasonable.  It might be worthwhile to consider that you may need to signal when effects begin/end, and design with that in mind for the future.




In order to apply the visual aspect of your multiple effects only once, you could decouple the visual aspect from the logical effect.  This means that each logical effect references a visualFxId or handle that is a lookup into a set of visual effects that can be applied.  As part of the update pass, you simply build a set<visualFxId>; that way, if multiple logical effects reference the same visual effect and several of them have ended but others continue to express the same visual, the visual continues to be applied.
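The deduplication step can be sketched in a few lines.  The LogicalEffect shape is an assumption; the point is that inserting into a std::set collapses duplicate visualFxIds automatically:

```cpp
#include <cstdint>
#include <set>
#include <vector>

using VisualFxId = std::uint32_t;

// A logical effect references its visual by id (illustrative shape).
struct LogicalEffect {
    VisualFxId visualFxId;
    bool       active;
};

// Collect the set of visuals referenced by any still-active logical effect.
// Duplicates collapse automatically, so a visual shared by several effects
// is applied exactly once per update pass.
std::set<VisualFxId> GatherActiveVisuals(const std::vector<LogicalEffect>& effects) {
    std::set<VisualFxId> visuals;
    for (const LogicalEffect& e : effects)
        if (e.active)
            visuals.insert(e.visualFxId);
    return visuals;
}
```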


The visuals will likely need to be updated each frame.  Many times visuals include animation blends, particles, and other aspects to give the desired effect on the screen, and thus they do necessitate per-frame updates to stay synchronized.




This ultimately can be done in whatever way works best for you.  I prefer to have effect information either included in a game object's update packet or at least have all the effect information packed together in its own packet.  The client should only be applying network game object updates at a predefined point in the main loop, so having that data sent in a single packet ensures that all effect values are consistent at that point in time.  If the client has a set of lookup tables that give it pertinent information on how to render things, you can easily pack your effect data into a few bytes per effect and leave it up to the client to render accordingly.  Since we're talking about just associating icons and effects in some buff/debuff bar, animations, etc., these are low-priority things that make no sense for the server to manage beyond providing the least amount of state to the client.




If you treat your input stream like a queue, you can easily have some logic so that when a stun/root effect is applied, it registers an input stream listener with a higher priority than the normal player's input stream listener.  This effect listener essentially consumes the input as necessary to eliminate any movement until the effect has ended.  Once the effect ends, it unregisters, and the input stream queue goes about its business, being consumed by the player's input stream listener.
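The stun/root listener idea above can be sketched like this.  The InputStream class and its priority scheme are assumptions for illustration; higher-priority listeners get first chance to consume a command:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct InputCommand { char key; };

// Listeners are consulted from highest to lowest priority; a listener
// that returns true consumes the command and stops dispatch.
class InputStream {
    struct Entry { int priority; std::function<bool(const InputCommand&)> fn; };
    std::vector<Entry> mListeners;
public:
    void Register(int priority, std::function<bool(const InputCommand&)> fn) {
        mListeners.push_back({priority, std::move(fn)});
        std::stable_sort(mListeners.begin(), mListeners.end(),
            [](const Entry& a, const Entry& b) { return a.priority > b.priority; });
    }
    void Unregister(int priority) {
        mListeners.erase(
            std::remove_if(mListeners.begin(), mListeners.end(),
                [=](const Entry& e) { return e.priority == priority; }),
            mListeners.end());
    }
    bool Consume(const InputCommand& cmd) {
        for (auto& e : mListeners)
            if (e.fn(cmd)) return true;
        return false;
    }
};
```

While the stun effect is active it registers a high-priority listener that swallows movement keys; unregistering it restores normal dispatch to the player's listener.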

#5125406 Component based game and component communication

Posted by on 21 January 2014 - 11:39 AM

I haven't worked much with Qt but perhaps you might want to check out realXtend tundra core over on github.  It uses Qt as a basis for their windowing framework and perhaps it could give you some insight on how they handled integrating the rendering API with the Qt framework. 


As for input, that sounds reasonable.  You essentially want to cache the OS input events, update your game's input state at a predefined point based on the cached inputs you've detected, and then dispatch/poll the game's input state as needed.  As I believe I may have stated before, the important part here is the cache: if the key is down at the start of the frame, all systems during that frame should see the key's state as down, to avoid having some systems see key state differently than others, leading to obscure behaviors.

#5125090 3D Game Engine Questions?

Posted by on 20 January 2014 - 11:25 AM

1.How would a rendering engine work like or be designed,would it have a set of classes that manage meshes and decides how they are   rendered through a customized abstraction layer and what would be a bunch of good practices for creating a abstraction layer?


First off, OpenGL and DirectX are merely sets of APIs that interface with the graphics pipeline.  So a 3D engine basically begins by wrapping basic constructs of these APIs and applying layer upon layer of abstraction.  For example, OGRE3D offers ways to create vertex and index buffers regardless of the rendering API being used, since it abstracts the DirectX and OpenGL APIs from the user.  It then offers higher-level classes to perform various common things, such as a ManualObject that wraps creating these vertex/index buffers and pushing them to the GPU.  All a user of a ManualObject needs to do is feed the class the vertices and indices.



3.How would you handle input precisely? What i mean is how do you specifically program the input to work properly, would you use booleans to determine which key is pressed?


First off, input is generally platform and language specific.  Depending on the target platform and language, you'll basically have a wrapper that listens for input signals.  You then need to turn those signals into some "state", which for keyboards is typically an array where each entry is either 1 (pressed) or 0 (released).  For analog input such as a mouse, you'll need a bit more structure to how you store the state, but inevitably it's similar.


The most important aspect with input is that you need to make sure that however you handle state, that you merely capture it and dispatch it at predefined points in the game loop rather than dispatching it immediately when it happens.  This makes sure that state remains valid throughout a single frame rather than having some parts of your game behave as if a key wasn't pressed and other parts of the same frame seeing the key as pressed.


Keyboard and mouse events are captured whenever dispatched by the platform OS and the input system caches them.  At the top of the frame, those events are dispatched to two important systems in the following order.

  1. GUI 
  2. Action 

This allows input that could be affecting the GUI (such as typing into a textbox) to get first dibs on saying the input was handled.  If it was handled, the input doesn't get dispatched to the other layers.  If the input isn't handled, it gets dispatched to the action system, where it can turn something like the 'W' key into a MoveForwardAction.  Our input system also maintains an internal state table of these events at the top of the frame so that systems that would rather poll for input state (e.g. is key 'W' pressed or not) can do so and don't have to concern themselves with actions or events.
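The GUI-then-Action ordering can be sketched as a chain of handler layers.  The InputDispatcher name and the bool-returning layer signature are assumptions; the mechanism is simply "first layer to return true wins":

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct InputEvent { int key; };

// Layers are tried in registration order (GUI first, then Action);
// the first one that returns true "handles" the event and stops dispatch.
class InputDispatcher {
    std::vector<std::function<bool(const InputEvent&)>> mLayers;
public:
    void AddLayer(std::function<bool(const InputEvent&)> layer) {
        mLayers.push_back(std::move(layer));
    }
    // Returns the index of the handling layer, or -1 if unhandled.
    int Dispatch(const InputEvent& e) {
        for (std::size_t i = 0; i < mLayers.size(); ++i)
            if (mLayers[i](e)) return static_cast<int>(i);
        return -1;
    }
};
```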



4.How are physics applied to a mesh? Is there something called a rigid Dynamic Body which basically is the same shape as the mesh and it covers collision detection and determines which part of the mesh collides with other objects?


I personally prefer to simply leverage an existing physics library and hook into its simulation step.  Generally speaking, most physics implementations require that you first determine whether your object is a rigid body or a soft body.  Then you assign a shape to the physics object (box/capsule/sphere/mesh/custom).  Then, when the simulation is ticked, you can query the simulation to determine which objects collided versus which ones moved, and update your own scene objects accordingly.



5.How is it all combined into game logic? How would you combine 3D Graphics,Input,Sound and Physics together to create a playable actor?


6.How does a game loop work? Say you have a game loop and you call some events in the game loop, would you have to update the game loop everytime?


There are usually two approaches to gluing these things together and it depends usually on the game's complexity.  


For a simple game, a GameObject hierarchy that relies on inheritance will work just fine.  You have some GameObject that you begin to split into things like a Player, NPC, Enemy, etc and go from there.  But as your game's complexity grows, you will start to see pitfalls of this approach.


For more complex games, it's better to favor composition over inheritance hierarchies.  You begin to decompose your game objects into bits and pieces of functionality.  Then you construct your game objects as though they're a container of these pieces of functionality.  If you read up on Entity/Component or Component-based systems, you'll start to get an idea of how powerful composition can be in a GameObject system over the traditional approach above.


Lastly, a program by its very nature is a 'loop' of sorts, regardless of whether it carries out its set of operations a single time or repeats itself until some trigger dictates that execution must end.


In a game, this loop basically dictates an order of operations: the operations that initialize the game; the operations that are repeated over and over, such as 1) gather input, 2) update logic, 3) render to the back buffer, 4) swap buffers, 5) perform post-frame operations; and lastly the set of operations that perform cleanup.  Hitting the escape key is captured during step 1, and some system sets the game loop's stop variable.  Then steps 2-5 happen, and when the top of the loop checks the stop variable, it exits the loop.
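That stop-flag behavior can be sketched in a few lines.  The GameLoop class is a hypothetical skeleton; note the flag is set mid-frame but only checked at the top of the loop, so steps 2-5 of the current frame still run:

```cpp
// Minimal fixed-order loop skeleton; stage bodies are placeholders.
class GameLoop {
    bool mStop   = false;
    int  mFrames = 0;
public:
    void RequestStop() { mStop = true; }   // e.g. set when Escape is captured
    int  FramesRun() const { return mFrames; }

    // Runs until the stop flag is seen at the top of the loop.
    // stopAfterFrames stands in for the player pressing Escape.
    void Run(int stopAfterFrames) {
        while (!mStop) {
            // 1) gather input (stop requests are captured here)
            if (++mFrames >= stopAfterFrames) RequestStop();
            // 2) update logic
            // 3) render to the back buffer
            // 4) swap buffers
            // 5) perform post-frame operations
        }
        // cleanup operations run after the loop exits
    }
};
```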


Hope all that helps.

#5125047 Component based game and component communication

Posted by on 20 January 2014 - 07:51 AM

An enum to identify specific class types is one acceptable way, and with C++11's enum class they are completely type-safe.  The problem, though, is they aren't very extensible, and any change in the enum value-set imposes a recompile of all sources that include the enum.  I tend to prefer the following instead:

class Component {
  //! friend management classes
  template<typename T> friend class ComponentPool;
  friend class EntityManager;

  //! member variables
  ComponentId mId;       // unique id + version
  ComponentType mType;   // type of component
  EntityId mEntityId;    // unique entity id + version

protected:
  explicit Component(ComponentType type)
    : mId(INVALID_COMPONENT_ID), mType(type), mEntityId(INVALID_ENTITY_ID) {}

public:
  virtual ~Component() {}
  inline ComponentType GetType() const { return mType; }
  inline ComponentId GetId() const { return mId; }
  inline EntityId GetEntityId() const { return mEntityId; }
};

class DerivedComponent : public Component {
public:
  static const ComponentType TYPE = 9;
  DerivedComponent() : Component(TYPE) {}
  virtual ~DerivedComponent() {}
};

All I do is keep track of which components have which identifiers and just make sure not to reuse them.  In the event I do reuse a component type identifier, the component management system will assert when two components attempt to register themselves with the same identifier.  Once you have something workable, you can easily consider creating macros such as DECLARE_COMPONENT(derived, parent, identifier) and use them not only to create your default constructor, destructor, and static type variables, but any other RTTI information needed.  Various parts of the RTTI could be enabled for debug/editor builds and disabled in retail release builds.
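A minimal version of such a macro might look like the following.  This is a self-contained sketch (the simplified Component base here is illustrative, not the full class above); the macro stamps out the constructor, destructor, and static type identifier in one line:

```cpp
using ComponentType = unsigned int;

// Hypothetical macro that generates the per-component boilerplate:
// default constructor, destructor, and the static type identifier.
#define DECLARE_COMPONENT(derived, parent, identifier)            \
    public:                                                       \
        static const ComponentType TYPE = (identifier);           \
        derived() : parent(TYPE) {}                               \
        virtual ~derived() {}

// Simplified base for the sake of the sketch.
class Component {
public:
    explicit Component(ComponentType type) : mType(type) {}
    virtual ~Component() {}
    ComponentType GetType() const { return mType; }
private:
    ComponentType mType;
};

class HealthComponent : public Component {
    DECLARE_COMPONENT(HealthComponent, Component, 3)
public:
    int hitPoints = 100;  // component-specific data follows as usual
};
```

A debug build could extend the macro with extra RTTI (type names, parent links) while the retail build keeps only the identifier.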

#5124932 Component based game and component communication

Posted by on 19 January 2014 - 05:09 PM

crancran, can you elaborate on your listener stuff? What kind of code would register as a listener? What are you talking about when you mention queues?


There should be a series of systems (in an explicit order) that get updated one after the other. I'm trying to figure out how your listener/queue stuff fits into that.

There is often an interface that perhaps looks something like the following:

class Updatable {
public:
  virtual ~Updatable(void) {}
  //! Called at the top of the frame
  virtual void OnPreUpdate(float dt) {}
  //! Called at the bottom of the frame.
  virtual void OnPostUpdate(float dt) {}
  //! Called for each physics/fixed step
  virtual void OnFixedUpdate() {}
  //! Called on interpolated update, pre-render.
  virtual void OnUpdate(float dt) {}
};

Since many objects are only interested in specific update stages and not necessarily all of them, it's easy to create multiple buckets, one per update type (pre/post/fixed/default).  Then, when the main loop is at that point during its single tick, the specific list of Updatables interested in that update pass can be triggered without calling every Updatable in the system only to invoke an empty method handler.  Therefore, queues equate to the pre/post/fixed/default update stages, and only Updatables that have concrete implementations for a given stage are in that stage's list.


The second part is making sure Updatables are called in a deterministic order, which is where the priority value comes into play.  Since priorities are applied at the time an updatable listener is added to the various queues/buckets/stages, the same class may have its pre-update called before another class but have its fixed-update called after, if dependencies require that type of order.


I hope that is a clearer explanation, so perhaps now you can better understand what might register itself as a listener.  It could be anything from core engine components to application-specific user code.

#5124587 Component based game and component communication

Posted by on 17 January 2014 - 09:04 PM

If I have a render system and a render interface, as you described, does the render system have only one component where the render interface pointer is stored? What stores the component in a render system, maybe I could store the mesh string, and the name of the entity?


It ultimately depends on your design needs really.  I've seen some component hierarchies where Component gets derived into a Renderable and then that class is further derived into various types of renderables for things such as Lights, Terrain, Meshes, and numerous other render system specific objects.  But that's only one way to approach it.  


Now, how components are associated with various subsystems is somewhat a matter of taste.  Some might prefer to combine all the 3D renderables into a single system that updates them in a predefined order, but in my opinion that is skirting the lines of not following the SoC and SRP rules of programming.  I tend to prefer splitting them into their own various subsystems and having them interact with the low-level 3D render system.


As far as where components are stored, we follow a similar concept to Burnt_Fyr's, where components are stored in various arrays external to the systems.  This helps in a number of ways because we actually treat each concrete component like a database table of sorts, where each property of the component is basically considered a column in that table.


In some systems, it's optimal enough to use the component database directly each frame and perform whatever updates are needed, but for others a more optimal iteration process is necessary.  For those, we generally use another layer of abstraction where the system has some internal structs and we replicate the necessary information, in conjunction with events triggering state changes, to keep the update loop's impact minimized.


I think that I have to ensure that the render system runs after every other system; how can I do that? I want the game class to loop through every system and make an update, but the render system should be last, because if it ran before the position system, it would not move the entity...


As others have said, update order really should be deterministic, because it will otherwise be a source of significant pain and subtle bugs.  If you're looking to create a somewhat flexible solution and trade a tad bit of performance for it, you can very easily use the observer pattern in conjunction with either a priority system or a combination of priorities and listener buckets.

bool Framework::AddFrameworkListener(IFrameworkListener* listener,
                                     int queue /* = eListenerQueueDefault */,
                                     int priority /* = 0 */)
{
  /* look up the queue */
  /* add the listener to that queue, ordered by priority */
  return true;
}

void Framework::Update(float dt)
{
  /* at a specific stage */
  auto& listeners = mListenerQueues[eListenerQueueStage1];
  std::for_each(listeners.begin(), listeners.end(), [=](IFrameworkListener* l) {
    l->OnStage1(/* pass whatever */);
  });
}

The benefit here is that a framework listener can register for one, several, or even all queues based on how it gets registered.  Various convenience API methods can be exposed to make registration easier.


In our engine, we use an augmented flavor of the above approach to keep the framework plug-and-play and to be able to inject code into the main loop at any point, even before core framework systems if needed.  

#5123912 Handling one-off events

Posted by on 15 January 2014 - 10:35 AM

the whole game could be (and most of the time should be) run solely by events.


While it's possible to create a fully event-driven game, there are often times where that hammer just isn't the right solution for what you're trying to do.  Polling shared state can just as easily be used to make decisions about what to do next.  For example:

void PlayerSoundSystem::Update(float dt)
{
  //! Get the current position and check if the player is actively moving.
  const Vector3& position = GetPlayerCurrentPosition();
  const bool stopped = (position == mPreviousPosition);

  //! Accumulate the distance moved and store the current position.
  mDistanceMoved += (position - mPreviousPosition).Length();
  mPreviousPosition = position;

  if (stopped) {
    mDistanceMoved = 0; /* reset when player stops moving */
    return;
  }

  //! If distance moved equals or exceeds 1 world unit, fire a footstep.
  if (mDistanceMoved >= ONE_WORLD_UNIT) {
    const GroundType& groundType = GetCurrentGroundMaterialType();

    //! If a footstep sound is actively playing now, exit the routine.
    //! This allows mDistanceMoved to continue to accrue until either
    //! the player stops moving or the current sound ends, at which
    //! point it will be played again.  It also makes sure the ground
    //! type hasn't changed, as a new sound is then necessary.
    if (IsFootstepSoundPlaying() && mPreviousGroundType == groundType)
      return;

    mPreviousGroundType = groundType;

    switch (groundType) {
      case GroundType::WATER: /* play water footstep */ break;
      case GroundType::SNOW:  /* play snow footstep */  break;
      /* other ground types */
      default: break;
    }

    mDistanceMoved = 0;
  }
}

#5123886 Component based game and component communication

Posted by on 15 January 2014 - 08:57 AM

I think, I will use a observer pattern event system on the component, which notifies on value change (position changed) some systems. But instead, of switch to the notified system, and process the event, I only add the entity to a queue, and in the next update the entity will be processed.


Your framework should be capable of supporting both queued events (asynchronous, delayed delivery) and immediate events (synchronous delivery).  Depending on the context of the event that occurred, you can determine which makes the most logical sense.  There are often systems that run after other systems, and if the earlier systems emit an event that the later systems are interested in, delaying it by queuing it up only contributes to what is commonly called frame lag.
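A minimal bus supporting both delivery modes might look like this.  The EventBus name, string payloads, and Pump method are assumptions to keep the sketch small:

```cpp
#include <functional>
#include <string>
#include <vector>

// Minimal bus supporting synchronous (immediate) and asynchronous
// (queued) delivery; the event payload is just a string for the sketch.
class EventBus {
    std::vector<std::function<void(const std::string&)>> mHandlers;
    std::vector<std::string> mQueue;
public:
    void Subscribe(std::function<void(const std::string&)> handler) {
        mHandlers.push_back(std::move(handler));
    }
    // Synchronous delivery: handlers run before this call returns.
    void EmitImmediate(const std::string& event) {
        for (auto& h : mHandlers) h(event);
    }
    // Asynchronous delivery: held until the next pump, which is what
    // introduces a frame of lag for downstream subscribers.
    void EmitQueued(const std::string& event) { mQueue.push_back(event); }
    void Pump() {
        std::vector<std::string> pending;
        pending.swap(mQueue);  // swap first so handlers may safely re-queue
        for (const std::string& e : pending)
            for (auto& h : mHandlers) h(e);
    }
};
```

An event a later-running system needs this frame would go through EmitImmediate; everything else can be queued and pumped at a fixed point in the loop.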


You talk about low level systems and components. I must confess that I don't know exactly how the architecture above the "low level" systems look like. I thought that I have my components -> systems -> one entity manager -> application. But this is wrong or? My application class would startup and initialize the entity manager, render libary, ai libary and such things. The entity manager gives the pointer to the systems. Do I need another level of "high level" systems? Don't know how the common way is.


One of the worst things you can succumb to is over-engineering.  There is no real right or wrong way to do it.  If you took various game engines and compared their sources, while you'll find some commonalities in how they separate certain things, they are generally coupled in very different ways that made sense to the developers or architects of those projects.


If you think about a car engine or a computer for a moment, there are a multitude of pieces that come together to make the final product.  Those pieces are generally put together in a specific order, and some pieces rely on other pieces to exist and expose some "connection" or "joint" in order to attach the next piece, right?  A software program of any kind follows similar practices.


I generally separate the engine into several low-level building blocks such as Audio, AI, Graphics, Input, Memory, Networking, Physics, Platform/IO, UI, and others.  


If we take Audio for example, I might decide to use some third-party audio library such as FMOD or OpenAL.  I then generally create several classes that wrap the library and expose an API such that, should I opt to change the third-party library to another, it would have minimal impact.  This is where coding to interfaces rather than implementations can really make a significant difference.  At this point, we've created the base low-level pieces necessary for audio.


The next step is to build on this low-level audio framework and expose various aspects of it in a plug-and-play fashion.  This is where the entity framework and components begin to come into play.  So I develop a set of components such as an AudioEmitterComponent, which is responsible for emitting various sounds, along with an AudioListenerComponent that listens.  Depending on your needs, you might find a few other types of components you want to expose with various attributes for the audio system, but you get the idea.
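As plain data, these components might look something like the sketch below. The specific fields are assumptions chosen for illustration; the point is that components carry data and the audio systems consume it.

```cpp
#include <string>

// Illustrative component sketch; field names are assumptions,
// not a fixed API.
struct AudioEmitterComponent {
    std::string clip;     // sound resource to emit
    float volume = 1.0f;  // 0..1 linear gain
    bool loop = false;    // repeat playback when finished
};

struct AudioListenerComponent {
    float position[3] = {0.0f, 0.0f, 0.0f};  // world-space "ear" position
};
```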


The final step to polish off the audio framework is to expose a series of system classes, each responsible for managing a particular aspect of audio.  We generally first create a number of systems that each handle a very specific concern.  For example, a system that maintains continual but non-overlapping playback of audio clips based on events dispatched from the combat system.  We might have another system responsible for zone ambient music, so that as a player transitions from one zone to another, the ambient sound from the prior zone fades out and the new zone's fades in.  Once we have building blocks for all the various audio concepts we want, we begin to see where those systems can be refactored into a common system, or a smaller number of systems that each handle several related things.  This might also lead us down a path to expose a new component for specific purposes and operate on it through a custom audio system.
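The zone-ambience system mentioned above can be sketched minimally. The class name, the linear crossfade, and the two-second fade duration are all illustrative assumptions; a real system would drive the volumes into the audio device wrapper each frame.

```cpp
#include <algorithm>
#include <string>

// Hedged sketch: when the player enters a new zone, the previous
// ambient track fades out while the new one fades in.
class ZoneAmbienceSystem {
public:
    void enterZone(const std::string& track) {
        if (track == current_) return;  // already playing this zone's track
        previous_ = current_;
        current_ = track;
        fade_ = 0.0f;  // restart the crossfade
    }

    // Called once per frame; dt is in seconds, fade takes 2s (assumed).
    void update(float dt) {
        fade_ = std::min(1.0f, fade_ + dt / 2.0f);
    }

    float currentVolume() const { return fade_; }
    float previousVolume() const { return 1.0f - fade_; }
    const std::string& currentTrack() const { return current_; }

private:
    std::string current_;
    std::string previous_;
    float fade_ = 1.0f;  // fully faded in until the first transition
};
```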


Anyhow, the point here is to start at the lowest level and build atop the previous layer.  If you treat creating your game as a series of layers, you'll start to understand how easy it becomes to change things, because you've applied sufficient abstraction and decoupled things in a way that supports continual change and maintenance of your codebase without necessarily throwing it away and starting over.  Don't expect to get all of this right overnight, of course; it comes with many years of trial and error.  So don't hesitate to throw things away and start fresh if you believe the design is flawed and could be made better.  Iterative programming leads to better code with time.


TL;DR: You wrap your third-party library with a set of classes, and then you wrap those classes with the entity systems that operate on components.  Manipulating components from the outside in turn influences how the entity systems drive the underlying third-party libraries such as OpenAL/FMOD/OGRE/etc.  I use the term "wrap" loosely in this context, because generally some framework allocates your OpenAL/FMOD/OGRE wrapper and then passes the wrapper class reference to your entity systems, which interact with the third-party library through it.  Make sense?

#5123800 Component design and game class design

Posted by on 15 January 2014 - 12:14 AM

You need to get OOP out of your mind.


That isn't necessarily true.  There is nothing wrong with applying OOP concepts to a component hierarchy.  For example, it isn't uncommon to see a ColliderComponent derived from Component, with various implementations such as BoxCollider, SphereCollider, MeshCollider, and CapsuleCollider derived from ColliderComponent.  Shallow hierarchies are fine if they are well contained within the scope of a specific feature.
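A shallow hierarchy of that kind might look like the sketch below. The class names mirror common engine conventions, but the math is deliberately simplified (2D, origin-centered shapes) purely for illustration.

```cpp
#include <cmath>

// One level of inheritance: a polymorphic query interface with
// shape-specific implementations, well contained to collision.
class ColliderComponent {
public:
    virtual ~ColliderComponent() = default;
    virtual bool contains(float x, float y) const = 0;
};

class BoxCollider : public ColliderComponent {
public:
    BoxCollider(float halfW, float halfH) : hw_(halfW), hh_(halfH) {}
    bool contains(float x, float y) const override {
        return std::fabs(x) <= hw_ && std::fabs(y) <= hh_;
    }
private:
    float hw_, hh_;
};

class SphereCollider : public ColliderComponent {
public:
    explicit SphereCollider(float radius) : r_(radius) {}
    bool contains(float x, float y) const override {
        return x * x + y * y <= r_ * r_;
    }
private:
    float r_;
};
```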


I do agree that Input should perhaps be a single component.  It can be influenced by an AIController, CharacterController, or NetworkController, all of which might be subclasses of Controller.  As for the renderables, I believe it all depends on the engine.  If your lights, planes, meshes, and other render-system-specific objects share any common ground with one another, having a base renderable class doesn't present an issue.
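The single-Input-component idea can be sketched as one plain data component written into by interchangeable Controller subclasses. All names here are illustrative assumptions, and the AI behavior is trivial on purpose.

```cpp
// One InputComponent holding the desired movement; the rest of the
// engine reads this component and never cares who wrote it.
struct InputComponent {
    float moveX = 0.0f;
    float moveY = 0.0f;
};

class Controller {
public:
    virtual ~Controller() = default;
    virtual void update(InputComponent& input) = 0;
};

// Trivial stand-in AI: always steers the entity toward +X. A
// CharacterController or NetworkController would fill the same
// component from keyboard state or network packets instead.
class AIController : public Controller {
public:
    void update(InputComponent& input) override {
        input.moveX = 1.0f;
        input.moveY = 0.0f;
    }
};
```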


As long as the hierarchy is kept shallow, OOP works quite well.