

crancran

Member Since 14 Oct 2009
Offline Last Active Jun 04 2014 11:00 PM
-----

#5135214 Implementing an Entity Component System in C++

Posted by crancran on 27 February 2014 - 05:03 PM

 

Ok, so I wonder how I should implement the systems. Since there is no reason to instantiate them (it would make no sense), the methods should be static, don't you think? Or better, I could use a singleton to make sure there is only one instance of each system.

 

Honestly, introducing singletons or static anything into the mix is a recipe for trouble.  You only make the solution more rigid and brittle, which leaves it inflexible and hard to maintain long-term.

 

Additionally, systems should be instantiated because they'll likely need to maintain internal state.  Avoiding singletons and static state also means you can easily run two copies of a system against different entity manager data sets or different game states, performing multiple simulations in parallel or sequentially without any trouble.

 

Still, I hesitate to implement it in my game for the reasons below:

  1. I like my architecture to be clear, not a mix of several design patterns which are supposed to be "contestants" (e.g. Map would be a traditional class whereas Character would be an entity). I find it quite confusing.
  2. With the OOP approach, the "skeleton" of the game is well defined by the classes: you read the Character class and you know everything about it. With the ECS approach, a Character is divided across several files (1 entity, x components and x systems) and you don't know where it all comes together. However, I agree the code is more modular with an ECS.

So I think for now I'll stick with the "old" OOP approach. I'm sure it can work flawlessly, as a lot of games don't use an ECS and they work well.

 

A clear architecture has nothing to do with the design patterns it uses.  In fact, an architecture tends to be cleaner when the right design pattern is chosen for the problem at hand rather than shoehorning in something that doesn't fit due to some bias or other factor.  Using design pattern A for one part of a problem and design pattern B for another, with some third pattern that marries the two, is actually very commonplace in programming, and in my experience it generally carries considerably more benefits than consequences.

 

I prefer to consider both a Map and the Player as standalone classes pertinent to the game engine, core classes if you will.  I then give the Player class an unsigned integer property that is basically the identifier of some game object in the game object system that represents the actual player.  The benefit here is that if the engine operates on the exposed API of the Player class, the engine doesn't care whether it's an entity or not, nor does it care about the components which make up the player.  With all that abstracted, you can change implementation details of the Player with minimal impact to the engine/game itself.
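
A rough sketch of that idea (Vector3, the system objects, and their accessors here are hypothetical stand-ins, purely for illustration):

#include <cstdint>

typedef std::uint32_t EntityId;

//! Stand-ins for engine types and systems (hypothetical):
struct Vector3 { float x, y, z; };
struct TransformSystem { Vector3 GetPosition(EntityId) const { return Vector3(); } };
struct HealthSystem    { float   GetHealth(EntityId) const { return 100.0f; } };

extern TransformSystem gTransformSystem;
extern HealthSystem    gHealthSystem;

//! A core Player class that wraps an entity identifier and queries the
//! game object system for state instead of storing it in members.
class Player {
  EntityId mEntityId;  // identifier of the player's game object
public:
  explicit Player(EntityId id) : mEntityId(id) {}

  //! The engine codes against this API; whether the data comes from
  //! components or plain member variables is an implementation detail.
  Vector3 GetPosition() const { return gTransformSystem.GetPosition(mEntityId); }
  float   GetHealth() const   { return gHealthSystem.GetHealth(mEntityId); }
};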

 

And as you can see, such an approach does follow your idea of a Character class that knows everything about itself.  The only difference is that rather than the state of a Character being stored in that class directly, the Character class queries various systems to get its state based on the method call invoked.

 

One of the biggest reasons why these systems are great is the focus on data-oriented design.  You store data that you plan to operate on at the same time together in logical chunks, making cache-friendly operations possible and thus boosting performance.  Because you're grouping related data together and decomposing state, you also gain the benefit that it becomes easier to piece features together and morph object state from A to B.  Of course, all this implies a different mindset: you're updating game state in many tiny stages throughout a single update tick for any given object.

 

But nothing above says you cannot use ECS in conjunction with OOP.  Both are wonderful tools that can and do work well together if you follow the simple rules of programming: separation of concerns, the single responsibility principle, and data-oriented design goals.

 




#5127729 Need help with effects system

Posted by crancran on 31 January 2014 - 08:12 AM

Just some thoughts....

 

Processing

 

Sounds reasonable.  It might be worthwhile to consider that you may need to signal when effects begin/end, and to design with that in mind for the future.

 

Visuals

 

In order to only apply the visual aspect of your multiple effects once, you could decouple the visual aspect from the logical effect.  This means that each logical effect references a visualFxId or handle that is a lookup into a set of visual effects that can be applied.  As part of the update pass, you simply build a set<visualFxId>; that way, if multiple logical effects reference the same visual effect and some of them have ended while others remain active, the visual continues to be applied.
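
A minimal sketch of that de-duplication (LogicalEffect and VisualFxId are made-up names for illustration):

#include <set>
#include <vector>

typedef unsigned int VisualFxId;

struct LogicalEffect {
  VisualFxId visualFx;  // which visual this effect expresses
  bool active;          // false once the logical effect has ended
};

//! Collect each referenced visual exactly once per update pass.
std::set<VisualFxId> GatherActiveVisuals(const std::vector<LogicalEffect>& effects)
{
  std::set<VisualFxId> visuals;
  for (const LogicalEffect& e : effects)
    if (e.active)
      visuals.insert(e.visualFx);  // duplicates collapse automatically
  return visuals;
}

//! A visual keeps playing while at least one active effect references it;
//! anything present last frame but missing this frame can be stopped.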

 

The visuals will likely need to be updated each frame.  Many times visuals include animation blends, particles, and other aspects to give the desired effect on the screen, and thus they do necessitate per-frame updates to keep them synchronized.

 

Networking

 

This ultimately can be done in whatever way you find works best.  I prefer to have effect information either included in a game object's update packet or at least have all the effect information packed together in its own packet.  The client should only be applying network game object updates at a predefined point in the main loop, so having that data sent in a single packet ensures that all effect values are consistent at that point in time.  If the client has a set of lookup tables that give it pertinent information on how to render things, you can easily pack your effect data into a few bytes per effect and leave it up to the client to render accordingly.  Since we're only talking about associating icons with effects in some buff/debuff bar, animations, etc., these are low-priority things that make no sense for the server to manage beyond providing the minimum amount of state to the client.

 

Misc

 

If you treat your input stream like a queue, you can easily have some logic so that when a stun/root effect is applied, it registers an input stream listener with a higher priority than the normal player's input stream listener.  This effect listener essentially consumes the input as necessary to eliminate any movement until the effect has ended.  Once the effect ends, it unregisters and the input stream queue goes about its business, being consumed by the player's input stream listener.
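
A sketch of that stun listener (the listener interface, helper, and registration calls are assumptions, not a real API):

struct InputEvent { int code; bool pressed; };   // stand-in event type
bool IsMovementInput(const InputEvent& e);       // hypothetical helper

struct IInputListener {
  virtual ~IInputListener() {}
  virtual bool OnInput(const InputEvent& e) = 0; // return true = consumed
};

//! Registered with a higher priority than the player's listener, so it
//! sees input first while the stun/root effect is active.
class StunInputListener : public IInputListener {
public:
  bool OnInput(const InputEvent& e) override {
    if (IsMovementInput(e))
      return true;   // consume: movement is suppressed while stunned
    return false;    // everything else falls through to the player listener
  }
};

// When the effect begins:  inputStream.AddListener(&stunListener, HIGH_PRIORITY);
// When the effect ends:    inputStream.RemoveListener(&stunListener);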




#5125406 Component based game and component communication

Posted by crancran on 21 January 2014 - 11:39 AM

I haven't worked much with Qt, but perhaps you might want to check out realXtend Tundra core over on GitHub.  It uses Qt as the basis for its windowing framework, and perhaps it could give you some insight into how they handled integrating the rendering API with the Qt framework.

 

As for input, that sounds reasonable.  You essentially want to cache the OS input events, update your game's input state at a predefined point based on the cached inputs you've detected, and then dispatch/poll the game's input state as needed.  As I believe I may have stated before, the important part here is the cache: if the key is down at the start of the frame, all systems during that frame should see the key's state as down, to avoid having some systems see key state differently than others, which leads to obscure behaviors.
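
A bare-bones version of that caching (names are illustrative):

#include <vector>

struct OsKeyEvent { unsigned char keyCode; bool pressed; };

std::vector<OsKeyEvent> gPendingEvents;  // filled by the OS message pump
bool gKeyDown[256] = {};                 // the state every system polls

//! Called once at a predefined point, e.g. the top of the frame.
void UpdateInputState()
{
  for (const OsKeyEvent& e : gPendingEvents)
    gKeyDown[e.keyCode] = e.pressed;
  gPendingEvents.clear();
  // gKeyDown is now frozen for the rest of the frame, so every system
  // sees identical key state.
}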




#5125090 3D Game Engine Questions?

Posted by crancran on 20 January 2014 - 11:25 AM

1. How would a rendering engine work or be designed? Would it have a set of classes that manage meshes and decide how they are rendered through a customized abstraction layer, and what would be some good practices for creating an abstraction layer?

 

First off, OpenGL and DirectX are merely sets of APIs that interface with the graphics pipeline.  So a 3D engine basically begins by wrapping basic constructs of these APIs and applying layer upon layer of abstraction.  For example, OGRE3D offers ways to create vertex and index buffers regardless of the rendering API being used, since it abstracts the DirectX and OpenGL APIs from the user.  It then offers higher-level classes to perform various common tasks, such as a ManualObject that wraps creating these vertex/index buffers and pushing them to the GPU.  All a user of a ManualObject needs to do is feed the class the vertices & indices.
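
As a sketch of that kind of layering (these interfaces are illustrative, not OGRE's actual classes):

#include <cstddef>

//! The engine codes against these; the rendering API never leaks out.
class VertexBuffer {
public:
  virtual ~VertexBuffer() {}
  virtual void Write(const void* data, std::size_t bytes) = 0;
};

class RenderSystem {
public:
  virtual ~RenderSystem() {}
  //! Factory method: callers never know whether D3D or GL sits underneath.
  virtual VertexBuffer* CreateVertexBuffer(std::size_t bytes) = 0;
};

// Implementation-specific subclasses wrap the real API calls:
//   class GLRenderSystem  : public RenderSystem { /* glGenBuffers, ... */ };
//   class D3DRenderSystem : public RenderSystem { /* ID3D11Device, ... */ };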

 

 

3. How would you handle input precisely? What I mean is, how do you specifically program the input to work properly? Would you use booleans to determine which key is pressed?

 

First off, input is generally platform and language specific.  Depending on the target platform and language, you'll basically have a wrapper that listens for input signals.  You then need to turn those signals into some "state", which for keyboards is typically an array where each entry is either 1 (pressed) or 0 (released).  For analog input such as a mouse, you'll need a bit more structure to how you store the state, but inevitably it's similar.

 

The most important aspect of input is making sure that however you handle state, you merely capture it and dispatch it at predefined points in the game loop rather than dispatching it immediately when it happens.  This makes sure that state remains valid throughout a single frame, rather than having some parts of your game behave as if a key wasn't pressed while other parts of the same frame see the key as pressed.

 

Keyboard and mouse events are captured whenever the platform OS dispatches them, and the input system caches them.  At the top of the frame, those events are dispatched to two important systems in the following order.

  1. GUI 
  2. Action 

This allows input that could be affecting the GUI (such as typing into a textbox) to get first dibs on saying the input was handled.  If it was handled, the input doesn't get dispatched to the other layers.  If the input isn't handled, it gets dispatched to the action system, where it can turn something like the 'W' key into a MoveForwardAction.  Our input system also maintains an internal state table of these events at the top of the frame, so that systems that would rather poll for input state (e.g. is key 'W' pressed or not) can do so and don't have to concern themselves with actions or events.
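
Putting those pieces together, one frame's dispatch might look like this (the types are stand-ins for illustration):

#include <vector>

struct InputEvent { int code; bool pressed; };

//! Stand-in interfaces for the consumers (assumptions):
struct Gui          { bool HandleInput(const InputEvent&); };  // true = handled
struct ActionSystem { void HandleInput(const InputEvent&); };  // keys -> actions
struct StateTable   { void Apply(const InputEvent&); };        // pollable state

void DispatchFrameEvents(std::vector<InputEvent>& cached, StateTable& state,
                         Gui& gui, ActionSystem& actions)
{
  for (const InputEvent& e : cached) {
    state.Apply(e);            // update the pollable state table first
    if (gui.HandleInput(e))    // 1. GUI gets first dibs
      continue;                //    handled: don't dispatch any further
    actions.HandleInput(e);    // 2. e.g. 'W' becomes a MoveForwardAction
  }
  cached.clear();
}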

 

 

4. How are physics applied to a mesh? Is there something called a rigid dynamic body which is basically the same shape as the mesh, covers collision detection, and determines which part of the mesh collides with other objects?

 

I personally prefer to simply leverage an existing physics library and hook into its simulation step.  Generally speaking, most physics implementations require that you first determine whether your object is a rigid body or a soft body.  Then you assign a shape to the physics object (box/capsule/sphere/mesh/custom).  Then, when the simulation is ticked, you can query the simulation to determine which objects collided versus which ones moved, and update your own scene objects accordingly.
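
In sketch form, with a made-up wrapper API standing in for whatever library you pick (PhysicsSystem, mWorld, and the method names are hypothetical):

//! Tick the third-party simulation, then pull the results back out.
void PhysicsSystem::Update(float dt)
{
  mWorld->StepSimulation(dt);  // library call; the name varies per library

  //! Push new transforms to our own scene objects.
  for (Binding& b : mBindings)
    if (b.body->IsActive())
      b.sceneObject->SetTransform(b.body->GetWorldTransform());

  //! Let interested systems react to contacts found during the step.
  for (const Contact& c : mWorld->GetContacts())
    NotifyCollision(c.objectA, c.objectB);
}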

 

 

5. How is it all combined into game logic? How would you combine 3D graphics, input, sound and physics together to create a playable actor?

 

6. How does a game loop work? Say you have a game loop and you call some events in the game loop, would you have to update the game loop every time?

 

There are usually two approaches to gluing these things together, and which one you choose usually depends on the game's complexity.

 

For a simple game, a GameObject hierarchy that relies on inheritance will work just fine.  You have some GameObject that you begin to split into things like a Player, NPC, Enemy, etc., and go from there.  But as your game's complexity grows, you will start to see the pitfalls of this approach.

 

For more complex games, it's better to favor composition over inheritance hierarchies.  You begin to decompose your game objects into bits and pieces of functionality.  Then you construct your game objects as though they're containers of these pieces of functionality.  If you read up on entity/component or component-based systems, you'll start to get an idea of how powerful composition in a GameObject system can be compared to the traditional approach above.

 

Lastly, a program by its very nature is a 'loop' of sorts, regardless of whether it carries out its set of operations a single time or repeats itself until some trigger dictates that execution must end.

 

In a game, this loop basically dictates an order of operations: the operations that initialize the game; the operations that are repeated over and over, such as 1) gather input, 2) update logic, 3) render to the back buffer, 4) swap buffers, 5) perform post-frame operations; and lastly the set of operations that perform cleanup.  Hitting the escape key is captured during step 1, and some system sets your game loop's stop variable.  Then steps 2-5 happen, and when the top of the loop checks the stop variable, it exits the loop.
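
In code, that shape is roughly (all function names are placeholders):

void RunGame()
{
  Initialize();
  bool stopRequested = false;
  while (!stopRequested)          // checked at the top of each iteration
  {
    GatherInput(stopRequested);   // 1. escape sets the stop variable...
    UpdateLogic();                // 2. ...but steps 2-5 still run this frame
    RenderToBackBuffer();         // 3.
    SwapBuffers();                // 4.
    PostFrameOperations();        // 5.
  }
  Cleanup();
}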

 

Hope all that helps.




#5125047 Component based game and component communication

Posted by crancran on 20 January 2014 - 07:51 AM

An enum to identify specific class types is one acceptable way, and with C++11 they are completely type-safe with enum class.  The problem is that they aren't very extensible, and any change in the enum value set imposes a recompile of all sources that include the enum.  I tend to prefer the following instead:

class Component {
  //! friend management classes
  template<typename T> friend class ComponentPool;
  friend class EntityManager;
  //! member variables
  ComponentId mId;     // unique id + version
  ComponentType mType; // type of component
  EntityId mEntityId;  // unique entity id + version
public:
  explicit Component(ComponentType type) : mId(INVALID_COMPONENT_ID), mType(type), mEntityId(INVALID_ENTITY_ID) {}
  virtual ~Component() {}
  inline ComponentType GetType() const { return mType; }
  inline ComponentId GetId() const { return mId; }
  inline EntityId GetEntityId() const { return mEntityId; }
};

class DerivedComponent : public Component {
public:
  static const ComponentType TYPE = 9;
  DerivedComponent() : Component(TYPE) {}
  virtual ~DerivedComponent() {}
};
 

All I do is keep track of which components have which identifiers and just make sure not to reuse them.  In the event I do reuse a component type identifier, the component management system will assert when two components attempt to register themselves with the same identifier.  Once you have something workable, you can easily consider creating some macros such as DECLARE_COMPONENT(derived, parent, identifier) and use those macros not only to create your default constructor, destructor, and static type variables, but any other RTTI information needed.  Various parts of the RTTI could be enabled for debug/editor builds and disabled in retail builds.
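
One possible shape for such a macro, matching the pattern above (hypothetical; the RTTI pieces could be compiled in only for debug/editor builds):

#define DECLARE_COMPONENT(derived, parent, identifier)       \
public:                                                      \
  static const ComponentType TYPE = identifier;              \
  derived() : parent(TYPE) {}                                \
  virtual ~derived() {}

//! Usage replaces the hand-written boilerplate in DerivedComponent above:
class PhysicsComponent : public Component {
  DECLARE_COMPONENT(PhysicsComponent, Component, 10)
};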




#5124932 Component based game and component communication

Posted by crancran on 19 January 2014 - 05:09 PM

crancran, can you elaborate on your listener stuff? What kind of code would register as a listener? What are you talking about when you mention queues?

 

There should be a series of systems (in an explicit order) that get updated one after the other. I'm trying to figure out how your listener/queue stuff fits into that.

There is often an interface that perhaps looks something like the following:

class Updatable {
public:
  virtual ~Updatable(void) {}
  //! Called at the top of the frame
  virtual void OnPreUpdate(float dt) {}
  //! Called at the bottom of the frame.
  virtual void OnPostUpdate(float dt) {}
  //! Called for each physics/fixed step
  virtual void OnFixedUpdate() {}
  //! Called on interpolated update, pre-render.
  virtual void OnUpdate(float dt) {}
};

Since many objects are only interested in specific update stages and not necessarily all of them, it's easy to create a bucket for each update type (pre/post/fixed/default).  Then, when the main loop reaches that point during its tick, the specific list of Updatables interested in that update pass can be triggered without calling every Updatable in the system only to invoke an empty method handler.  The queues therefore equate to the pre/post/fixed/default update stages, and only Updatables with concrete implementations for a stage appear in that stage's list.

 

The second part is making sure Updatables are called in a deterministic order, which is where the priority value comes into play.  Since priorities are applied at the time an updatable listener is added to the various queues/buckets/stages, the same class may have its pre-update called before another class but its fixed-update called after, if dependencies require that kind of order.
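
A sketch of that registration, keeping each bucket sorted so every tick walks a deterministic list (the container layout is illustrative):

#include <algorithm>
#include <vector>

struct Entry { int priority; Updatable* listener; };

std::vector<Entry> gBuckets[4];  // pre / post / fixed / default stages

void AddUpdatable(int stage, Updatable* u, int priority)
{
  std::vector<Entry>& bucket = gBuckets[stage];
  auto it = std::lower_bound(bucket.begin(), bucket.end(), priority,
      [](const Entry& e, int p) { return e.priority < p; });
  bucket.insert(it, Entry{priority, u});  // insert in priority order
}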

 

I hope that is a clearer explanation, so perhaps now you can better understand what would register itself as a listener.  It could be anything from core engine components to application-specific user code.




#5124587 Component based game and component communication

Posted by crancran on 17 January 2014 - 09:04 PM

If I have a render system and a render interface, as you described, does the render system have only one component where the render interface pointer is stored? What stores the component in a render system? Maybe I could store the mesh string and the name of the entity?

 

It ultimately depends on your design needs.  I've seen some component hierarchies where Component gets derived into a Renderable, and then that class is further derived into various types of renderables for things such as lights, terrain, meshes, and numerous other render-system-specific objects.  But that's only one way to approach it.

 

Now, how components are associated with various subsystems is somewhat a matter of taste.  Some might prefer to combine all the 3D renderables into a single system that updates them in a predefined order, but in my opinion that skirts the lines of the SoC (separation of concerns) and SRP (single responsibility) rules of programming.  I tend to prefer splitting them into their own various subsystems and having them interact with the low-level 3D render system.

 

As far as where components are stored, we follow a concept similar to Burnt_Fyr's, where components are stored in various arrays that are external to the systems.  This helps in a number of ways because we actually treat each concrete component like a database table of sorts, where each property of the component is basically a column in that table.

 

In some systems, it's optimal enough to use the component database directly each frame and perform whatever updates are needed, but for others a more optimal iteration process is necessary.  For those, we generally use another layer of abstraction: the system keeps some internal structs, and we replicate the necessary information into them, using events triggered by state changes, to keep the update loop's impact minimized.

 

I think that I have to ensure that the render system runs after every other system; how can I do that? I want the game class to loop through every system and make an update, but the render system should be last, because if it ran before the position system, it would not move the entity...

 

As others have said, update order really should be deterministic, because otherwise it will be a source of significant pain points and subtle bugs.  If you're looking to create a somewhat flexible solution and trade a tad bit of performance for it, you can very easily use the observer pattern in conjunction with either a priority system or a combination of priorities and listener buckets.

bool Framework::AddFrameworkListener(IFrameworkListener* listener, int queue /*= eListenerQueueDefault*/, int priority /*= 0*/)
{
  /* look up queue */
  /* add listener to the specific queue, ordered by priority */
}

void Framework::Update(float dt)
{
  /* at a specific stage */
  auto& listeners = mListenerQueues[eListenerQueueStage1];
  std::for_each(listeners.begin(), listeners.end(), [=](IFrameworkListener* l) {
    l->OnStage1(/* pass whatever */);
  });
  /* ... other stages ... */
}

The benefit here is that a framework listener can register for one, several, or even all queues based on how it gets registered.  Various convenience methods can be exposed on the listener API to make registration easier.

 

In our engine, we use an augmented flavor of the above approach to keep the framework plug-and-play and to be able to inject code into the main loop at any point, even before core framework systems if needed.  




#5123912 Handling one-off events

Posted by crancran on 15 January 2014 - 10:35 AM

the whole game could be (and most of the time should be) run solely by events.

 

While it's possible to create a fully event-driven game, there are often times where that hammer just isn't the right solution for what you're trying to do.  Polling shared state can just as easily be used to make decisions about what to do next.  For example:

void PlayerSoundSystem::Update(float dt)
{
  bool stopped = false;

  //! Get the current position and check if the player is actively moving
  const Vector3& position = GetPlayerCurrentPosition();
  if(!GetIsPlayerCurrentlyMoving())
    stopped = true;

  //! Accumulate the distance moved and store the current position.
  //! (Assumes Vector3 provides a Length() method.)
  mDistanceMoved += (position - mPreviousPosition).Length();
  mPreviousPosition = position;
  
  //! If distance equals or exceeds 1 world unit, fire.
  if(mDistanceMoved >= ONE_WORLD_UNIT)
  {
    const GroundType& groundType = GetCurrentGroundMaterialType();

    //! If sound actively playing now, exit routine.
    //! This allows mDistanceMoved to continue to accrue until
    //! either the player stops moving or the current sound ends,
    //! at which point it will be played again.  It also makes sure the
    //! ground type hasn't changed, as a new sound is then necessary.
    if(IsPlayingSoundNow())
    {
      if(mPreviousGroundType == groundType)
        return;

      StopPlayingSound();
      mPreviousGroundType = groundType;
    }

    mPreviousGroundType = groundType; //! remember for the next comparison
    switch(groundType)
    {
      case GroundType::WATER:
        PlaySound(PLAYER_SWIMMING_SOUND_FILENAME);
        break;
      case GroundType::SNOW:
        PlaySound(PLAYER_WALKING_SNOW_FILENAME);
        break;
      /* other options */
      default:
        PlaySound(PLAYER_WALKING_DEFAULT_FILENAME);
        break;
    }
  }

  if(stopped)
  {
    mDistanceMoved = 0; /* reset when player stops moving */
    StopPlayingSound();
  }
}




#5123886 Component based game and component communication

Posted by crancran on 15 January 2014 - 08:57 AM

I think I will use an observer-pattern event system on the component, which notifies some systems on value change (position changed). But instead of switching to the notified system and processing the event, I only add the entity to a queue, and in the next update the entity will be processed.

 

Your framework should be capable of supporting both queued events (asynchronous, delayed delivery) and immediate events (synchronous delivery).  Depending on the context of the event that occurred, you can determine which makes the most logical sense.  There are often systems that run after other systems, and if the prior systems emit an event that the subsequent systems are interested in, delaying it by queuing it up only contributes to what is commonly called frame lag.
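
A minimal sketch of a dispatcher that supports both modes (EventBus and Event are made-up names):

#include <vector>

struct Event { int type; };  // stand-in event payload

class EventBus {
  std::vector<Event> mQueue;
public:
  //! Synchronous: listeners run before this returns.  Use when a later
  //! system in the same frame depends on the result (avoids frame lag).
  void SendImmediate(const Event& e) { Dispatch(e); }

  //! Asynchronous: delivered at a predefined point, often next frame.
  void Post(const Event& e) { mQueue.push_back(e); }

  //! Called once per tick at the predefined delivery point.
  void Flush() {
    for (const Event& e : mQueue)
      Dispatch(e);
    mQueue.clear();
  }
private:
  void Dispatch(const Event& e);  // notify registered listeners
};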

 

You talk about low-level systems and components. I must confess that I don't know exactly what the architecture above the "low level" systems looks like. I thought that I have my components -> systems -> one entity manager -> application. But is this wrong? My application class would start up and initialize the entity manager, render library, AI library and such things. The entity manager gives the pointer to the systems. Do I need another level of "high level" systems? I don't know what the common way is.

 

One of the worst things you can succumb to is over-engineering.  There is no real right or wrong way to do it.  If you took various game engines and compared their sources, while you'd find some commonalities in how they separate certain things, they are generally coupled in very different ways that made sense to the developers or architects of those projects.

 

If you think about a car engine or a computer for a moment, there are a multitude of pieces that come together to make the final product.  Those pieces are generally put together in a specific order, and some pieces rely on other pieces existing and exposing some "connection" or "joint" in order to attach the next piece, right?  A software program of any kind follows similar practices.

 

I generally separate the engine into several low-level building blocks such as Audio, AI, Graphics, Input, Memory, Networking, Physics, Platform/IO, UI, and others.  

 

If we take audio for example, I might decide to use some third-party audio library such as FMOD or OpenAL.  I then generally create several classes that wrap the library and expose an API such that, should I opt to change the third-party library for another, the impact would be minimal.  This is where coding to interfaces rather than implementations can really make a significant difference.  At this point, we've created the base low-level pieces necessary for audio.
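
For example, the wrapper might boil down to an interface like this (the names and method set are assumptions, not FMOD's or OpenAL's API):

typedef unsigned int SoundHandle;

class IAudioSystem {
public:
  virtual ~IAudioSystem() {}
  virtual SoundHandle Play(const char* soundName) = 0;
  virtual void Stop(SoundHandle handle) = 0;
  virtual void SetListenerPosition(float x, float y, float z) = 0;
};

// Swapping libraries now only means writing another implementation:
//   class FmodAudioSystem   : public IAudioSystem { /* wraps FMOD calls */ };
//   class OpenAlAudioSystem : public IAudioSystem { /* wraps OpenAL calls */ };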

 

The next step is to build on this low-level audio framework and expose various aspects in a plug-and-play fashion.  This is where the entity framework and components begin to come into play.  So I develop a set of components such as an AudioEmitterComponent, which is responsible for emitting various sounds, along with an AudioListenerComponent that listens.  Depending on your needs, you might find a few other types of components you want to expose with various attributes for the audio system, but you get the idea.

 

The final step to polish off the audio framework is to expose a series of system classes that are responsible for managing various aspects of audio.  We generally first create a plethora of systems that each handle a very specific aspect of audio.  For example, a system that maintains continual but non-overlapping playback of audio clips based on events dispatched from the combat system.  We have an audio system responsible for zone ambient music, so that as a player transitions from one zone to another, the ambient sound from the prior zone fades out and the new zone's fades in.  Once we have all those building blocks for the various audio concepts we want, we begin to see where those systems can be refactored into a common system, or into a smaller number of systems that each do a multitude of things.  It might also lead us down a path to expose a new component for specific purposes and expose it through a custom audio system.

 

Anywho, the point here is to start at the lowest level and build atop the previous layer.  If you treat creating your game as a series of layers, you'll start to understand how easy it becomes to change things when you want, because you've applied sufficient abstraction and decoupled things in a way that supports continual change and maintenance of your codebase without necessarily throwing it away and starting over.  Don't expect all this overnight, of course; it comes with many years of trial and error.  So don't hesitate to throw things away and start fresh if you believe the design is flawed and could be made better.  Iterative programming will lead to better code with time.

 

TL;DR: You wrap your third-party library with a set of classes, and then you wrap those classes with the entity systems that operate on components.  Manipulating components from the outside in turn influences how the entity system manipulates the internal third-party libraries such as OpenAL/FMOD/OGRE/etc.  I use the term "wrap" lightly in this context, because generally some framework allocates your OpenAL/FMOD/OGRE wrapper and then passes the wrapper class reference to your entity systems, which then interact with the third-party library through your wrapper.  Make sense?




#5123800 Component design and game class design

Posted by crancran on 15 January 2014 - 12:14 AM

You need to get OOP out of your mind.

 

That isn't necessarily true.  There is nothing wrong with applying OOP concepts to a component hierarchy.  For example, it isn't uncommon to see a ColliderComponent derived from Component, with various implementations such as BoxCollider, SphereCollider, MeshCollider, and CapsuleCollider as derived classes of the ColliderComponent.  Shallow hierarchies are fine if they are well contained within the scope of a specific feature.

 

I do agree that perhaps Input should be a single component.  It can be influenced by some AIController, CharacterController, or NetworkController, all of which might be subclasses of Controller.  As for the renderables, I believe it all depends on the engine.  If your lights, planes, meshes, and other render-system-specific exposed objects share any common ground with one another, having a base renderable class doesn't present an issue.

 

As long as the hierarchy is kept shallow, OOP works quite well.




#5123793 Component based game and component communication

Posted by crancran on 14 January 2014 - 11:51 PM

First off, don't be afraid to break parts of a process within the entity system into multiple passes.

 

Our scene (aka map) loader process loads map metadata.  It's during this process that entity IDs get allocated, and whatever metadata references components is used to create the necessary components associated with each entity ID.  At the end of this first phase, the entity is in an inactive state, and the components which have been allocated are stored in their various owning subsystems but are flagged as pending commits.  (Think relational database here.)

 

Once the entity has been constructed, either from a combination of prefab archetype data and map metadata or from map metadata alone, the next step begins.  This is an inspection pass where systems or components (depending on your design) can look into an entity and determine what else exists.  If required, references to other systems or components can be obtained and stored.

 

Do be mindful of using pointers, particularly if they're not weak_ptr types.  At any moment, some outside source could cause a component to be removed from an entity, and if you depend on that component, have stored a raw pointer, and have no checks in place to eliminate the reference, this can lead to unexpected behavior.  I generally prefer either the use of a component handle, where the handle is converted to a pointer at the time of access, or maintaining copies of the data; the latter is usually more cache friendly and gives slightly higher performance depending on your data structures.
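
A sketch of the handle-to-pointer conversion (the ComponentPool accessors here are hypothetical):

#include <cstdint>

struct ComponentHandle {
  std::uint32_t index;    // slot in the owning pool
  std::uint32_t version;  // bumped every time the slot is reused
};

template<typename T>
T* Resolve(const ComponentHandle& h, ComponentPool<T>& pool)
{
  if (h.index >= pool.Size())
    return nullptr;                     // out of range / never existed
  if (pool.VersionAt(h.index) != h.version)
    return nullptr;                     // stale: the component was removed
  return pool.At(h.index);              // safe to use for this access
}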

 

Assuming that your game doesn't support a concept such as teleport, generally the physics component's position & orientation will be seeded from the transform during the inspection phase.  This is what makes sure the physics simulation starts out synchronized with the entity's transform data.  If the entity has no transform, then you cannot initialize the physics component, and thus it remains "pending" or "inactive".

 

The next step is to activate your entity, at which point all systems are informed, and the components which were initialized and synchronized successfully are moved into an active state; the system's update process then begins to manage the state for that component and entity.  Either here or in the prior inspection step is where you would allocate whatever subsystem-specific data objects are needed; for example, a RigidBody for physics, a Collider for the collision system, your SceneNode and various other renderable aspects in the render system, etc.

 

The point is once activation has completed, the entity's component state should be ready to be manipulated by any outside source.

 

When you begin to apply movement to your entity, the force/direction gets applied to some movement/velocity component.  When the physics system begins to step its simulation, it takes all entities that have a physics component and a force/velocity, applies those values to the rigid bodies in the simulation, and then steps the physics world.  If collisions are detected, an event can be fired and any system interested in collisions can react.  If no collisions took place and position/orientation were modified, the physics system can emit a position/orientation update message.  The transform system receives it and updates the entity's position/orientation.

 

Here you have two options:

A) have the systems which manage your scene render objects also listen for physics events and update the scene's transform/orientation as well

B) have the transform system emit a second message after having changed the transform.  

 

Now if your game had a teleport system, your transform system, physics system and all those render object systems might register to be notified upon a TeleportationEvent; they then simply update their position/orientation based on the teleport system's knowledge of where you left and where to spawn your character, perhaps on a new map or in another zone or area of the current map.

 

One thing I will also caution you on: try to avoid leaking system-specific data into your components.  I often find that people will create, say, a Light component, place all the data regarding how to construct a light in it, and then have the component hold a pointer or some reference to the render system's Light object implementation.  Instead, I prefer creating an internal data structure inside the system, where the system holds the Light object's implementation pointer and can relate it back to a component of a specific entity.  In some cases it's been more efficient to marry parts of the component's data with the system's implementation pointers, store all that in a specific structure, simply iterate those each frame, and use a series of events to replicate changes from the components or other systems into that internal structure at set intervals.
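
In sketch form (EntityId, LightDesc, and RendererLight are stand-ins), the system-side storage might look like:

#include <vector>

class LightRenderSystem {
  struct InternalLight {
    EntityId entity;        // which entity this record belongs to
    LightDesc desc;         // copy of the component's light data
    RendererLight* impl;    // render-library object, never exposed outside
  };
  std::vector<InternalLight> mLights;  // iterated contiguously each frame
public:
  //! Events from the component side replicate changes into mLights.
  void OnLightChanged(EntityId id, const LightDesc& desc);
};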

 

As to your last question regarding the rendering library, there is really no set way.  I generally code against a set of interfaces, so my engine's framework is much like that of CryEngine, where an IRenderer interface is exposed that gives you access to the 3D rendering library.  This class initializes the library, allows me to attach it to a specific window handle to begin rendering, exposes a render method, has support to restart the 3D library, exposes the scene graph, etc.  I simply have the engine framework construct the Renderer class during startup.  I leave the application class responsible for being a thin wrapper around the framework, exposing a series of listener methods which can be overridden based on application needs, such as OnActionEvent(), OnKeyEvent(), OnMouseEvent(), OnPreUpdate(), OnUpdateFixed(), OnUpdate(), OnPreRender(), etc.

 

Feel free to ask any questions, hopefully it gives you a clearer idea.  Just remember that the entity system is an abstraction layer that sits way above the underlying systems in the engine.  It's basically the system or systems which designers in AAA titles interact with in the editors to create these elaborate games.  What happens at the physics, rendering, and other various low-level systems are well below the entity framework but are influenced by "inputs" given to the components.

EDIT:
One thing I forgot to mention above: once an entity has been activated, if you add/remove components from the entity, those changes are once again considered "pending".  This means another inspection/activation pass must be performed on the entity so that the various systems/components can determine whether the entity's state still satisfies their requirements; if not, they clean up the existing data and treat the component as though it was just created but couldn't be initialized due to missing dependencies.




#5116486 Making an object accessible by other classes

Posted by crancran on 12 December 2013 - 09:23 AM

If you take rip-off's and Nanook's comments together, they're quite spot on here.  It sounds to me as though you've approached separating code into classes with an extreme separation of concerns.  What I might suggest is that you have a rendering system that does the main bulk of what it implies: rendering.  You then have a set of other classes that hold data specific to certain rendering needs, such as vertex buffers, index buffers, vertex declarations, shader programs, etc.

 

If you think about the bigger picture here, your render system and all these classes that hold data can be abstracted one level further, to where you have a common interface the engine exposes for them, with implementation-specific classes for DX9, DX11, and OpenGL.  Your code asks the render system to allocate you an object, the implementation-specific class does so, and it gives you the object back via the interface.  You work with a vertex buffer the same way regardless of whether it's DX9, DX11, OpenGL or whatever.

 

Now you no longer need to be concerned with passing the device handle everywhere, because it's centralized, used primarily by the specific render system implementation.  In those corner cases where it makes sense for a separate class to hold a reference, just pass it as a constructor argument.

 

This doesn't blur the lines of separation of concerns or the single responsibility principle, and it keeps your code quite clean.




#5114437 [Component Entity System] Components design

Posted by crancran on 04 December 2013 - 06:44 PM

Make them three components and another entity, the "nature of the beast" entity, and let it own the three with an ownage component! Terraria the entity has a head and hands, also entities. So each could take damage, and Terraria, after losing hands and head, could decide to respawn as a dog..

 

This approach is quite common in games where a boss encounter includes several NPCs that all share a common health pool, or where certain fight mechanics shouldn't occur while another mechanic is currently active.




#5093716 Component based architecture and messaging

Posted by crancran on 12 September 2013 - 10:08 PM

I like your architecture and edited my architecture diagram to this:

http://i40.tinypic.com/2wggbuu.png

 

So I think, it's very similar to your description.

 

 

The only thing I'd caution you (and possibly others) about in your diagram is what you call the Render System.  I often consider the rendering system to be the renderer: the system that interacts with OpenGL or DirectX.  The rendering system is typically something exposed by the engine itself, much in the same light that Physics, Input, Platform I/O, Networking, and Audio are exposed.

 

Then you create specialized component subsystems that you register with the entity framework to handle specialized behaviors.  This way you can have many component subsystems, each perhaps handling specific rendering features, and each interacting with the engine's rendering system.




#5093481 Component based architecture and messaging

Posted by crancran on 11 September 2013 - 11:16 PM

I have thought about a basic architecture for what I need in my game. I want a car to go to different points, controlled by AI.

 

So I think I have the following low-level systems: render system, physics system (optional), position system, AI system, waypoint system.

The high-level system would be exactly one entity system, which has references to all the low-level systems, because it needs to know everything from above.

 

Is this correct? Is a component system structured this way?

 

I'd say a bit of this is a matter of taste. 

 

There is no right or wrong answer on how to design a component system; while games are increasingly shipping with component-system-like concepts, the design still varies, sometimes significantly, depending on the development team and the architect at the helm of the design process.

 

I generally prefer the idea that the entity framework is a core component of the engine.  The entity framework exposes an entity manager along with a component system interface.  The developers create derived classes of the component system interface for whatever logic they want to offer the designers.  These component systems are then registered with the entity manager at start-up, allowing the entity manager to build a repository of metadata about the available components, the properties associated with the components (useful for editors), etc.

 

The beauty is that with the entity manager having all this metadata at its fingertips, the engine simply exposes the entity manager to the remainder of the application.  Internally, the entity manager can delegate creation and destruction of components to the appropriate component systems without the developer having to find the right system and call it directly.  Our component system interface is also designed so that the component system can interact with our core engine framework: submit jobs to the task manager, register as a callback listener to various systems, etc.
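
A sketch of that delegation (the interface names are assumptions):

#include <unordered_map>

class EntityManager {
  std::unordered_map<ComponentType, IComponentSystem*> mSystems;
public:
  //! Called at start-up; also the hook for harvesting editor metadata.
  void RegisterSystem(ComponentType type, IComponentSystem* system) {
    mSystems[type] = system;
  }

  //! Callers never need to know which concrete system owns this type.
  ComponentId CreateComponent(EntityId entity, ComponentType type) {
    return mSystems.at(type)->Create(entity);
  }
};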





