
Component programming. I think?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

32 replies to this topic

#1 BinaryPhysics   Members   -  Reputation: 294


Posted 05 March 2013 - 10:07 AM

Every time I talk to someone about game programming they always say "just do it", so "just doing it" I am. However, I was wondering if I could get a little feedback on my current idea. I'm going to keep pursuing it until I hit a brick wall, for educational purposes, but I was curious how common this kind of structure is in games.

 

I was recently reading a book that brought up the concept of a 'system'. Systems have inputs, outputs, feedback mechanisms, etc. It occurred to me that all game engines are just a series of systems (and the award for over-simplification of the year goes to...!).

 

My current structure simply involves all game parts (input manager, rendering engine, networking) inheriting from an interface ISystem. This makes all systems completely isolated from each other.

 

Systems can also contain subsystems. If the game itself is a system then it can contain an input manager system, a rendering system, and so on.

 

The way I've defined the interface means that systems communicate via message passing. Messages derive from an IMessage interface and carry their type with them, so that specific systems can receive specific information. I've previously taken a brief look at DOOM 3's source and, after reading 'All Signs Point to "No-No"' here (http://www.gamasutra.com/view/feature/132500/dirty_coding_tricks.php?page=2), I figured this was a better idea than enforcing simple packing.

 

Systems keep an output-restricted deque to allow important messages to skip to the front of the message handling process.
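The design described above might be sketched roughly as follows (a minimal Python illustration, not a definitive implementation; everything besides the ISystem/IMessage names from the post is invented):

```python
from collections import deque

class IMessage:
    """Base message; carries its type so receivers can filter on it."""
    def __init__(self, msg_type, payload=None):
        self.msg_type = msg_type
        self.payload = payload

class ISystem:
    """A system with subsystems and an output-restricted message deque:
    messages are inserted at either end but only removed from the front."""
    def __init__(self):
        self._inbox = deque()
        self._subsystems = []

    def add_subsystem(self, system):
        self._subsystems.append(system)

    def post(self, message, urgent=False):
        # Urgent messages skip to the front of the handling process.
        if urgent:
            self._inbox.appendleft(message)
        else:
            self._inbox.append(message)

    def update(self):
        while self._inbox:
            self.handle(self._inbox.popleft())
        for sub in self._subsystems:
            sub.update()

    def handle(self, message):
        pass  # concrete systems override this

class InputSystem(ISystem):
    def __init__(self):
        super().__init__()
        self.seen = []
    def handle(self, message):
        self.seen.append(message.msg_type)

game = ISystem()
inp = InputSystem()
game.add_subsystem(inp)
inp.post(IMessage("key_down"))
inp.post(IMessage("quit"), urgent=True)
game.update()
print(inp.seen)  # → ['quit', 'key_down']
```

The urgent "quit" message is handled first even though it was posted last, which is the point of the output-restricted deque.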

 

This is my first real project and I was wondering what someone with actual experience thinks of this idea. Have I been paying attention to everything I've been reading in books and online, or have I missed the point entirely (and should probably be shot)?

 

Thank you for any comments.




#2 phil_t   Crossbones+   -  Reputation: 3222


Posted 05 March 2013 - 01:00 PM

One issue with having messages derive from an IMessage interface is that you now have a black-box object behind each message. This is nice from an OOP point of view, but it can make things more difficult if the messages need to be serialized (for instance, as part of a save game). They're now backed by live objects (which also have ownership requirements that can complicate the architecture). If a message is just a packet of data, you eliminate some tricky engineering problems. Just a minor note.

Overall it sounds like you're on the right track!
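The serialization point above can be illustrated with a quick sketch (Python for brevity; the DamageMessage type and fields are invented): a message that is a plain packet of data round-trips through a save file trivially, with no live object behind it.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DamageMessage:
    # Plain data: nothing to serialize but the fields themselves.
    msg_type: str
    target_id: int
    amount: int

msg = DamageMessage("damage", target_id=7, amount=5)
wire = json.dumps(asdict(msg))            # straight to a save file or socket
back = DamageMessage(**json.loads(wire))  # and back again, no ownership issues
print(back == msg)  # → True
```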

#3 frob   Moderators   -  Reputation: 19624


Posted 05 March 2013 - 01:58 PM

The interface route is called the Dependency Inversion Principle.  It is part of the SOLID development principles.

 

It does take a little bit more work up front to set up, but if your system grows to any appreciable size the effort will be worthwhile.

 

When you write a system you program to an interface.  You send everything to the black-box interface.  The systems do not know and do not care what is behind the interface.

 

Then when you create objects, you can have them derive from a common base class and let them vary their behavior as needed.  

 

It prevents many code smells, such as a system that is hard-coded to child classes and therefore is brittle and difficult to extend.
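As a minimal sketch of programming to an interface in the sense described above (the IRenderer name and the concrete classes here are invented for illustration):

```python
from abc import ABC, abstractmethod

class IRenderer(ABC):
    """The black-box interface callers program against."""
    @abstractmethod
    def draw(self, entity_id: int) -> str: ...

class OpenGLRenderer(IRenderer):
    def draw(self, entity_id: int) -> str:
        return f"GL draw {entity_id}"

class NullRenderer(IRenderer):
    """Handy for tests; the caller never knows the difference."""
    def draw(self, entity_id: int) -> str:
        return "noop"

class Game:
    # The game depends only on the abstraction, never on a concrete renderer,
    # so it is not hard-coded to child classes.
    def __init__(self, renderer: IRenderer):
        self.renderer = renderer
    def frame(self) -> str:
        return self.renderer.draw(42)

print(Game(OpenGLRenderer()).frame())  # → GL draw 42
print(Game(NullRenderer()).frame())    # → noop
```

Swapping implementations requires no change to Game at all, which is the payoff frob describes.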


Check out my personal indie blog at bryanwagstaff.com.

#4 Bacterius   Crossbones+   -  Reputation: 8277


Posted 05 March 2013 - 04:05 PM

The interface route is called the Dependency Inversion Principle.  It is part of the SOLID development principles.

 

It does take a little bit more work up front to set up, but if your system grows to any appreciable size the effort will be worthwhile.

 

When you write a system you program to an interface.  You send everything to the black-box interface.  The systems do not know and do not care what is behind the interface.

 

Then when you create objects, you can have them derive from a common base class and let them vary their behavior as needed.  

 

It prevents many code smells, such as a system that is hard-coded to child classes and therefore is brittle and difficult to extend.

 

This is exactly what I'm using for my current project, and it works great. It's very important to keep the base interface as lightweight as possible (just include what is needed for the architecture to work), so that you can comfortably derive any behaviour from it and transparently expose its important features through the interface.

 

That said, don't try to shoehorn everything under a common interface. If there are different objects that are very tightly coupled for some reason, either redesign your game or combine them into a single object. Don't try to find patterns where there are none; overengineering is almost as bad as no engineering.


Edited by Bacterius, 05 March 2013 - 04:07 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#5 BinaryPhysics   Members   -  Reputation: 294


Posted 05 March 2013 - 05:58 PM

I'm a little confused about what is being suggested. I've done a small amount of Googling (and I'm going to do a lot more research on SOLID, because it looks like a powerful method of segregation).

 

My original idea was simply creating system types and having them as data members. From what I've read, is it better to have a series of pure virtual functions as an interface (say, IInputManager or IRenderer) and contain only pointers to such types? I'm assuming the actual systems derive from these interfaces (RenderingEngine inherits from IRenderer) but have no accessible functions (other than those defined in the interface) because they're only ever accessed through pointers to these interfaces?

 

Does this make all interfaces simply adaptors (http://en.wikipedia.org/wiki/Adapter_pattern)? Isn't this the same kind of thing COM attempts to do?



#6 Eidetic Ex   Members   -  Reputation: 133


Posted 05 March 2013 - 06:54 PM

If you plan to roll with a component-based system, be sure to read up on the chapter of Game Programming Gems on the subject. It presents a really simple way to handle components and some great guidelines to use when developing your own component system.

 

However, in terms of message-style systems, I found the Observer-Subject implementation in Intel's Smoke demo really interesting. In short, subjects publish changes and observers receive notification of those changes. A change manager sits between them, queueing up changes until a function call is made to begin distributing them. The change manager also manages the associations between observers and subjects based on the bitmask of interests they have in common. It's an effective system that forces you to keep your code bundled up nicely and easily separable.
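A rough sketch of that change-manager idea (this is my own toy illustration, not Intel's actual Smoke code): observers register with an interest bitmask, changes are buffered, and nothing is delivered until distribution is explicitly requested.

```python
# Invented change-type bits for the example.
POSITION = 0b01
HEALTH   = 0b10

class ChangeManager:
    def __init__(self):
        self._observers = []   # (interest_mask, callback) pairs
        self._queue = []       # (change_mask, data) pairs, buffered

    def register(self, interest_mask, callback):
        self._observers.append((interest_mask, callback))

    def post(self, change_mask, data):
        # Queued only; subjects never talk to observers directly.
        self._queue.append((change_mask, data))

    def distribute(self):
        # Deliver each change to observers whose interest bits overlap it.
        for change_mask, data in self._queue:
            for interest_mask, callback in self._observers:
                if change_mask & interest_mask:
                    callback(data)
        self._queue.clear()

cm = ChangeManager()
seen = []
cm.register(HEALTH, seen.append)
cm.post(POSITION, "moved")   # nobody registered interest in this one
cm.post(HEALTH, "hurt")
cm.distribute()
print(seen)  # → ['hurt']
```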



#7 larspensjo   Members   -  Reputation: 1526


Posted 06 March 2013 - 01:12 AM

As mentioned above, I suppose "component programming" refers to the Entity-Component-System design pattern, though I don't think that is really what the OP is after. Still, ECS may be the answer to the question. Using inheritance and virtual functions is a powerful method, but it can get too complex when there are a lot of objects and behaviors; ECS uses composition instead of inheritance.

 

Notice that ECS combines perfectly well with the Observer pattern, which will help you decouple dependencies between systems. High-volume data paths, like rendering, will probably still use polling (where the render system polls the physical models), but irregular, low-frequency state changes can use an event system. The definition of "low frequency" of course varies.

 

I also would like to mention the Model-View-Controller, which can also effectively be combined with these patterns. This can help you organize the various systems into three main groups with a more-or-less clear dividing line.


Edited by larspensjo, 06 March 2013 - 01:22 AM.

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#8 EWClay   Members   -  Reputation: 659


Posted 06 March 2013 - 05:28 AM

But why should systems communicate at all? An entity system can tie them together, and if the entities are built from components, very little code needs to talk to more than one system.

In terms of patterns, the component is a facade, providing a simplified interface to the system behind it. Wrapping the system in an adapter as well seems unnecessary.

#9 wintertime   Members   -  Reputation: 1640


Posted 06 March 2013 - 08:48 AM

Yeah, I would be wary of shoehorning three patterns into one thing. The observer pattern in particular seems likely to create an entangled mess of messages looping back and forth, with at least the squared complexity of traditional procedural spaghetti code, if everything in your program both observes something and is itself observed.

I would just try to follow the SOLID principles, never create any dependency cycles, and maybe use a single pattern if you are 100% sure it is truly helpful for the one problem at hand.



#10 larspensjo   Members   -  Reputation: 1526


Posted 06 March 2013 - 03:29 PM

But why should systems communicate at all? An entity system can tie them together, and if the entities are built from components, very little code needs to talk to more than one system.

 

The use of an Entity Component System will minimize the need for systems to communicate, which is very good. But there are cases where it is not so clear how things should be managed.

 

Take an example: a system that detects collisions between arrows and monsters. This system would iterate through entities with certain components and compare position and size to find out if there is a collision. It may use an octree to make this efficient, but that is independent of the discussion.

 

For every collision detected, there may be a couple of things that should happen:

  • A sound effect is generated.
  • The monster that is hit plays an animation.
  • There is some other visual result, e.g. blood on the ground.
  • A monster has hit points adjusted from damage. If it dies, another list of effects will follow.
  • The arrow entity is removed.
  • The monster becomes aggressive, or starts to flee, or something else.
  • A message is generated in the chat window that tells the player in detail what happened ("You damage the orc with 5 hp").

Some of these may be managed by systems, independent of each other. The system that manages collisions shouldn't need to know about sound effects, graphical effects, etc. The simple solution would be for the collision system to create a collision event that contains the two involved entities. There are a couple of observers of this event, but the collision system need not know which ones. The observers can be other (ECS) systems, but they don't have to be.
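The decoupling described above can be sketched in a few lines (a toy Python illustration; the EventBus and the specific handlers are invented): the collision system publishes an event and never learns who is listening.

```python
class EventBus:
    """Minimal observer registry: subscribers by event type."""
    def __init__(self):
        self._subscribers = {}
    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler)
    def publish(self, event_type, *args):
        for handler in self._subscribers.get(event_type, []):
            handler(*args)

bus = EventBus()
log = []

# Independent observers: sound, cleanup, damage. The collision system
# knows nothing about any of them.
bus.subscribe("collision", lambda arrow, monster: log.append(f"play thud for {monster}"))
bus.subscribe("collision", lambda arrow, monster: log.append(f"remove {arrow}"))
bus.subscribe("collision", lambda arrow, monster: log.append(f"{monster} takes 5 hp"))

# The collision system's only responsibility: detect and publish.
bus.publish("collision", "arrow#1", "orc#3")
print(log)
```

Adding another consequence of a hit (animation, aggression, chat message) is just one more `subscribe` call, with no change to the collision system.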

 

To some extent the same effect could be managed using components, but it can quickly turn complicated. If you add a "damage" component to the monster entity, that solves some of the problems. But what happens if two arrows hit the same monster? Some ECS implementations only allow one component of each type.


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#11 larspensjo   Members   -  Reputation: 1526


Posted 06 March 2013 - 03:51 PM

Yeah, I would be wary of shoehorning three patterns into one thing.

I admit that there is a disadvantage to many patterns: they add some initial complexity and effort. This pays off once the application grows beyond a certain level of complexity, but if the application is small and simple enough, you need no patterns at all. Some patterns also take effort to learn and understand. It is sometimes a good experience to implement something the wrong way, to better appreciate the purpose of a pattern.

 

I am not sure what you mean by shoehorning. The ECS pattern and the observer pattern are orthogonal: they are independent, and each can be implemented without disturbing the design of the other. So can Model-View-Controller.

The observer pattern in particular seems likely to create an entangled mess of messages looping back and forth, with at least the squared complexity of traditional procedural spaghetti code, if everything in your program both observes something and is itself observed.

Events usually do not generate loops. Why would they? In the example I provided above, an event triggers a sound; the sound does not trigger another collision event. The message in the chat window does not generate another collision. Maybe I misunderstand; please explain with an example.

I would just try to follow the SOLID principles, never create any dependency cycles...

I absolutely agree. Using the SOLID principle, the collision system does not take responsibility for anything else but detecting collisions.

 and just maybe use a single pattern if you are 100% sure it is truly helpful for the one problem at hand.

Again, I agree. But we don't know exactly what the requirements are of the OP. That is why I suggested some patterns that may help.


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#12 dmatter   Crossbones+   -  Reputation: 3000


Posted 06 March 2013 - 04:15 PM

Take an example: a system that detects collisions between arrows and monsters. This system would iterate through entities with certain components and compare position and size to find out if there is a collision. It may use an octree to make this efficient, but that is independent of the discussion.
 
For every collision detected, there may be a couple of things that should happen:

  • A sound effect is generated.
  • The monster that is hit plays an animation.
  • There is some other visual result, e.g. blood on the ground.
  • A monster has hit points adjusted from damage. If it dies, another list of effects will follow.
  • The arrow entity is removed.
  • The monster becomes aggressive, or starts to flee, or something else.
  • A message is generated in the chat window that tells the player in detail what happened ("You damage the orc with 5 hp").


Not having ever implemented an Entity Component System I am curious to know how (or if!) an ECS would typically solve this kind of problem? I'm not sure whether this is on-topic or not, mind you.

Assuming some mechanism for sending/receiving messages (observer pattern, event queue or DIP framework to allow components to acquire direct references to simply invoke methods on) then I might speculate the following arrangement:

1) Both the arrow and the monster have a Collidable component so they participate in the collision system.
2) The collision system identifies a collision between an arrow and a monster and sends a HasCollided message to each.
3) The arrow has a Projectile component which receives this message and sends the other object (a monster) a Hit message. This component also removes the arrow entity.
4) The monster has a Damageable component which was parameterised at construction with a sound effect, a particle system, and a logger.
5) The Damageable component receives the Hit message sent by the Projectile component (or it could listen for the HasCollided message and determine a 'hit' for itself, but that seems like a duplication of effort).
6) The Damageable component sends messages to spawn the particle system and the sound effect. It uses the logger to log what happened (it needn't care that the ILogger was a ChatWindowLogger specifically).
7) The monster has a MonsterBehaviour component which also listens for the Hit message (or perhaps an IsDamaged message sent by the Damageable component); when hit, this component shifts the behavioural finite state machine into an aggressive/flee/something-else state.

How does that sound?
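That flow might look roughly like this (a speculative Python sketch; the Projectile/Damageable/Hit names follow the post, but all the plumbing is invented):

```python
class Entity:
    """An entity is just a bag of components that can receive messages."""
    def __init__(self, name):
        self.name = name
        self.components = []
        self.alive = True
    def send(self, message, sender=None):
        for c in self.components:
            c.receive(self, message, sender)

class Projectile:
    def receive(self, entity, message, sender):
        if message == "HasCollided":
            sender.send("Hit", sender=entity)  # tell the other object it was hit
            entity.alive = False               # remove the arrow entity

class Damageable:
    def __init__(self, logger):
        self.logger = logger  # needn't care that it's a chat window
    def receive(self, entity, message, sender):
        if message == "Hit":
            self.logger.append(f"{entity.name} takes 5 hp")

arrow, monster = Entity("arrow"), Entity("orc")
chat_log = []
arrow.components.append(Projectile())
monster.components.append(Damageable(chat_log))

# The collision system notifies the arrow; the rest cascades by message.
arrow.send("HasCollided", sender=monster)
print(chat_log, arrow.alive)  # → ['orc takes 5 hp'] False
```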

Edited by dmatter, 06 March 2013 - 04:17 PM.


#13 EWClay   Members   -  Reputation: 659


Posted 06 March 2013 - 06:29 PM

Yes, pretty much like that. The only thing I have to add is that two arrows hitting the same monster means two hit messages; no problem.

#14 Zipster   Crossbones+   -  Reputation: 579


Posted 06 March 2013 - 07:25 PM

Not having ever implemented an Entity Component System I am curious to know how (or if!) an ECS would typically solve this kind of problem? I'm not sure whether this is on-topic or not, mind you.

Assuming some mechanism for sending/receiving messages (observer pattern, event queue or DIP framework to allow components to acquire direct references to simply invoke methods on) then I might speculate the following arrangement:

1) Both the arrow and the monster have a Collidable component so they participate in the collision system.
2) The collision system identifies a collision between an arrow and a monster and sends a HasCollided message to each.
3) The arrow has a Projectile component which receives this message and sends the other object (a monster) a Hit message. This component also removes the arrow entity.
4) The monster has a Damageable component which was parameterised at construction with a sound effect, a particle system, and a logger.
5) The Damageable component receives the Hit message sent by the Projectile component (or it could listen for the HasCollided message and determine a 'hit' for itself, but that seems like a duplication of effort).
6) The Damageable component sends messages to spawn the particle system and the sound effect. It uses the logger to log what happened (it needn't care that the ILogger was a ChatWindowLogger specifically).
7) The monster has a MonsterBehaviour component which also listens for the Hit message (or perhaps an IsDamaged message sent by the Damageable component); when hit, this component shifts the behavioural finite state machine into an aggressive/flee/something-else state.

How does that sound?

I was always under the impression that in an ECS, components don't have any logic, so they can't actively send messages to one another. Rather, once the Collidable component has its 'hitFlag' set by the collision system, the damage resolver (another system) comes by, sees that the flag is set, grabs the objects involved, determines who is dealing damage to whom, and updates the Damageable component. Then your damage-effect system comes along, sees that damage has been dealt to an object, and spawns some particle effects and sounds. The components here are extremely passive, and really only serve to store state that is evaluated by the systems.
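A tentative sketch of that passive-component style (components are pure data; systems scan and mutate them each frame; every name here is illustrative):

```python
from dataclasses import dataclass

@dataclass
class Collidable:
    hit_flag: bool = False
    hit_by: str = ""

@dataclass
class Damageable:
    hp: int = 20
    pending_damage: int = 0

def collision_system(world):
    # Pretend we detected the arrow hitting the orc this frame.
    world["orc"][Collidable].hit_flag = True
    world["orc"][Collidable].hit_by = "arrow"

def damage_resolver(world):
    # Scans for set flags, applies damage, consumes the flag.
    for components in world.values():
        col = components.get(Collidable)
        dmg = components.get(Damageable)
        if col and dmg and col.hit_flag:
            dmg.pending_damage += 5
            col.hit_flag = False

def damage_effect_system(world, effects):
    # Sees the dealt damage and spawns effects; components stay passive.
    for components in world.values():
        dmg = components.get(Damageable)
        if dmg and dmg.pending_damage:
            dmg.hp -= dmg.pending_damage
            effects.append("blood + thud")
            dmg.pending_damage = 0

world = {"orc": {Collidable: Collidable(), Damageable: Damageable()}}
effects = []
collision_system(world)
damage_resolver(world)
damage_effect_system(world, effects)
print(world["orc"][Damageable].hp, effects)  # → 15 ['blood + thud']
```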



#15 larspensjo   Members   -  Reputation: 1526


Posted 07 March 2013 - 02:21 AM

I was always under the impression that in an ECS, components don't have any logic, so they can't be actively sending messages to one another.

Yes, that is the way I understand it too. The logic is managed by the systems, while the components only contain data.

Rather, once the Collidable component has its 'hitFlag' set by the collision system, the damage resolver (another system) comes by, sees that the flag is set, grabs the objects involved, determines who is dealing damage to whom, and updates the Damageable component. Then your damage effect system comes along, sees that damage has been dealt to an object, and spawns some particle effects and sounds. The components here are extremely passive, and really only serve to store state that is evaluated by systems.

This is where it starts to get interesting. How do the systems communicate with each other? Can it all be done through components, or is the observer pattern needed? Of course, there is no single answer that fits all applications; I can imagine situations where components alone will suffice. You say the damage system spawns some particle effects and sound. This should be avoided, as you are then no longer SOLID (the damage system should only take responsibility for damage, not for graphical effects and sound).

 

Using components to carry messages means temporary components are needed. They would be added by one system and removed by another. It can also be done using flags in persistent components.

 

Now take the example to the next level and introduce an achievement system. Suppose you can earn an achievement point for "hitting 5 different monsters with arrows in less than 10 seconds". How would the application be extended to support such a change?

 

Using the observer pattern, it would be trivial: the achievement system simply subscribes to the event generated by the collision system, then uses a timer and a counter. The beauty is that it can be added with no effect at all on the other source code.
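A hedged sketch of that extension (my own toy code, not a real implementation): the achievement system subscribes to collision events and keeps a sliding 10-second window; timestamps are passed explicitly to keep the example deterministic.

```python
class EventBus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, kind, fn):
        self.handlers.setdefault(kind, []).append(fn)
    def publish(self, kind, *args):
        for fn in self.handlers.get(kind, []):
            fn(*args)

class AchievementSystem:
    """Added without touching any existing system: it just subscribes."""
    def __init__(self, event_bus):
        self.hits = []            # (time, monster_id)
        self.unlocked = False
        event_bus.subscribe("collision", self.on_collision)

    def on_collision(self, time, monster_id):
        self.hits.append((time, monster_id))
        # 5 distinct monsters hit within the last 10 seconds?
        recent = {m for t, m in self.hits if time - t < 10.0}
        if len(recent) >= 5:
            self.unlocked = True

bus = EventBus()
ach = AchievementSystem(bus)
for t, monster in enumerate(["orc", "rat", "bat", "imp", "wolf"]):
    bus.publish("collision", float(t), monster)
print(ach.unlocked)  # → True
```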

 

Using components for communication to the achievement system, there is a risk that the current component design does not provide enough information. If so, more transient components need to be defined, or current components need to be extended.


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#16 BinaryPhysics   Members   -  Reputation: 294


Posted 07 March 2013 - 05:11 AM

For the record, I'm attempting to rewrite Ocarina of Time on the PC. I don't plan to release it or do anything with it; I just wanted something I could look at while thinking about how things are done rather than what is being done.

 

I'm studying Software Engineering at uni, and what I really wanted was some kind of overview of how a game/engine might be engineered architecturally, rather than exactly how Direct3D works or how basic physics is calculated (I already know the basic maths; I can implement that later).

 

These posts have introduced me to several ideas I didn't know existed so educationally this is exactly what I wanted.

 

In terms of game objects I was just going to have a giant internal table of IEntity objects (cameras, players, enemies, static boxes, etc). Such an interface really only stores absolute position in the game world and the Euler angles. So far I hadn't thought much further than that.



#17 EWClay   Members   -  Reputation: 659


Posted 07 March 2013 - 06:01 AM

Components having logic vs systems having logic:

Isn't it the same thing? Take a component with logic, make all the functions static, add an ID parameter, and move the data into a struct.

Then the entity becomes either a container of structs, or if the systems hold the data, just an ID. You still use it the same way, passing messages with a target entity which modify the component data and spawn other messages or interact with lower-level systems that do the heavy lifting. You can't do everything by setting flags and waiting for the update, that sounds like a nightmare.

#18 wintertime   Members   -  Reputation: 1640


Posted 07 March 2013 - 06:38 AM

Events usually do not generate loops. Why would they? In the example I provided above, an event triggers a sound; the sound does not trigger another collision event. The message in the chat window does not generate another collision. Maybe I misunderstand; please explain with an example.

It was more a general point about overuse of the observer pattern. Some time ago I read that people had problems with a GUI library whose widgets produced an exponential avalanche of events, in a case that goes like this:

- User clicks on a widget and it decides it needs to change its size, so it fires a size changed event.

- Some neighboring objects are registered as observers to this, get called and adapt their own size to this.

- This results in those other objects generating new events to announce their change.

- These events bounce to even more objects and possibly also the original object again.

- And so it goes on and on till it hopefully stabilizes...

 

I could imagine this might also happen in a poorly implemented collision system: both objects announce their collision, then all neighbors in a certain range observe this, recalculate whether they collided with that object, and then possibly generate follow-up events.

 

 

 

I also think it's too unpredictable if any object can attach itself to any other as an observer, possibly even differently depending on input.

For example, if I wanted to structure a program in an MVC way, I would prefer the following:

- The higher-level controller object asks the view directly (not through observing) for input, preferably filtered down to a bit of meaningful high-level information; the view never directly changes the model.

- The controller then orders the lower-level controller objects to do something (and these may order even lower-level objects to do part of the work, and so on, down to the lowest level of the model). These directly return something indicating their action rather than doing general observing, so the higher-level object knows whether anything else needs to be done.

- Finally the controller tells the view to draw itself, either providing all the data or letting the view ask the model for the specific data it needs at that moment (rather than constantly observing the model to update an internal, redundant copy of data it may only partly need); the view draws itself and tells all child objects to draw themselves in the leftover space.

Ultimately this results in a nice, predictable tree where nothing can go sideways or upwards at random because of observing.



#19 larspensjo   Members   -  Reputation: 1526


Posted 07 March 2013 - 06:45 AM

These posts have introduced me to several ideas I didn't know existed so educationally this is exactly what I wanted.

 

Good, I was afraid the discussion went out of scope too far.

 

In terms of game objects I was just going to have a giant internal table of IEntity objects (cameras, players, enemies, static boxes, etc). Such an interface really only stores absolute position in the game world and the Euler angles. So far I hadn't thought much further than that.

 

The ECS pattern also uses a single table of entities. The idea is that each entity consists only of a unique id and a container of components, nothing more. A typical component you would need for almost all entities is a "position".

 

I don't say you have to go the ECS way, just proposing it as a possibility.

 

 

Components having logic vs systems having logic:

Isn't it the same thing? Take a component with logic, make all the functions static, add an ID parameter, and move the data into a struct.

Then the entity becomes either a container of structs, or if the systems hold the data, just an ID. You still use it the same way, passing messages with a target entity which modify the component data and spawn other messages or interact with lower-level systems that do the heavy lifting. You can't do everything by setting flags and waiting for the update, that sounds like a nightmare.

 

There are some important differences, which are the whole idea of the ECS pattern. Having a component as a struct with logic (member functions) is the standard OO way of programming. But in ECS these structs contain only data, while the systems have the logic. At first this may look like a step backward to the days when logic and data were kept separate.

 

Please have a look at the excellent Role of systems in entity systems architecture; it explains it much better than I do. In principle, the combination of components attached to an entity is used to unlock various systems: only those systems keyed to this exact combination of components will be activated. And there can be any number of them.
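The "components unlock systems" idea can be sketched as follows (a rough illustration under my own naming, not the linked article's code): each system declares the component set it requires, and only entities carrying that combination are processed.

```python
# Entities are just ids mapped to component containers.
entities = {
    1: {"position": (0, 0), "velocity": (1, 0)},
    2: {"position": (5, 5)},                        # static: no velocity
    3: {"position": (2, 2), "velocity": (0, 1), "sprite": "orc"},
}

def matching(required):
    # An entity "unlocks" a system when it has all required components.
    return {eid: c for eid, c in entities.items() if required <= c.keys()}

def movement_system():
    # Activated only for entities that have both position and velocity.
    for eid, c in matching({"position", "velocity"}).items():
        px, py = c["position"]
        vx, vy = c["velocity"]
        c["position"] = (px + vx, py + vy)

movement_system()
print(entities[1]["position"], entities[2]["position"])  # → (1, 0) (5, 5)
```

Entity 2 is simply never touched by the movement system; adding a new system requires no change to the entities at all.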


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#20 EWClay   Members   -  Reputation: 659


Posted 07 March 2013 - 09:49 AM

Please have a look at the excellent Role of systems in entity systems architecture; it explains it much better than I do. In principle, the combination of components attached to an entity is used to unlock various systems: only those systems keyed to this exact combination of components will be activated. And there can be any number of them.


I don't want to fall into the trap of finding all sorts of problems with something I've never tried, but that looks like a giant blackboard with no encapsulation. If I wanted a blackboard, I'd write one that's more generic and exposes no more data than necessary. Is there anything else I haven't considered?

I like the observer idea though, and I think that would fit equally well with components.



