Component based game and component communication

Started by
31 comments, last by Scourage 10 years, 2 months ago

I'm thinking about a communication implementation like this: every system has an update queue, which other systems can fill via the entity manager. That means the input manager has its cache of keyboard presses and adds a message to the queue of the destination system (if the W key is pressed, the camera system gets an entry in the queue like "MOVE_FORWARD").

How can I implement this system? Should the message be a struct, a hash, or something else?

This system is asynchronous. Is there also a need for a synchronous messaging system? In other words, should I implement a general messenger that can work both asynchronously and synchronously?
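One way to answer both questions at once is to make the message a plain struct with a type tag and to give the messenger two entry points: a queued (asynchronous) one and an immediate (synchronous) one. The following is a minimal sketch under those assumptions; all names (`Message`, `Messenger`, `MoveForward`, etc.) are hypothetical, not from any particular engine:

```cpp
#include <cstdint>
#include <deque>
#include <functional>
#include <unordered_map>
#include <vector>

// Hypothetical message: a type tag plus a small payload. A real engine might
// use a union or std::variant for richer per-type data.
enum class MessageType : uint32_t { MoveForward, MoveBackward, Fire };

struct Message {
    MessageType type;     // what happened
    uint32_t    sender;   // originating system/entity id
    float       payload;  // e.g. an axis value
};

class Messenger {
public:
    using Handler = std::function<void(const Message&)>;

    void subscribe(MessageType type, Handler h) {
        handlers_[type].push_back(std::move(h));
    }

    // Synchronous: handlers run immediately, on the caller's stack.
    void send(const Message& msg) {
        for (auto& h : handlers_[msg.type]) h(msg);
    }

    // Asynchronous: the message is queued and delivered later, in dispatch().
    void post(const Message& msg) { queue_.push_back(msg); }

    // Called once per frame, e.g. by the entity manager / main loop.
    void dispatch() {
        while (!queue_.empty()) {
            Message msg = queue_.front();
            queue_.pop_front();
            send(msg);
        }
    }

private:
    std::unordered_map<MessageType, std::vector<Handler>> handlers_;
    std::deque<Message> queue_;
};
```

With this shape, "sync vs. async" is just a choice between `send()` and `post()` per call site, so a single messenger covers both.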


I haven't read all posts in this thread and hence may be missing something. When you write "the input manager … adds a message to the queue in the destination system", it sounds like direct coupling. Maybe you just haven't given enough details. However, here are my thoughts:

1. The w key does not mean "move camera forward" during the entire runtime of the game application. Think of the menu state, the pause state, and input customization.

2. The input sub-system should not need to know about every other sub-system that may receive user input. In general, direct coupling should be kept as low as possible (with maintainable effort).

(In the following I use the wording suitable for my own engine; your wording will differ, but the principle should become clear.) An entity has a Placement component, a Vision component, an Audition component, and a CameraController component. The Placement component defines that, and how, the entity is placed in the world; when the entity is instantiated, the Placement component becomes part of the SpatialServices sub-system. The Vision component adds a field of view, near and far clipping, projection, and similar camera-specific stuff. Together with the Placement, a full camera set-up (e.g. a viewing frustum) is given. The Audition component does a similar thing, but for audio rendering. Both the Vision and the Audition components become part of the RenderingServices sub-system.

The really interesting part is the CameraController component. In general, a Controller is responsible for interpreting input and altering the entity it belongs to in a specific way. Let the CameraController be an implementation specific to controlling cameras. It does so by altering the Placement component directly (a CharacterController would instead alter parameters which in turn affect animations, and the animations would affect the Placement in the end). The customization of a Controller is not important here; let us assume that it is configured so that "state of key w is active => move placement forward". A CameraController becomes part of the InputServices sub-system and also a modifier of the SpatialServices. The InputServices knows of input configurations which are used to switch input handling depending on the game state. The CameraController described here is attached to the "gameplay" game state. All Controllers attached to the active state are run during the update of the InputServices.
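The two-sided role of such a Controller — an input section run by the InputServices and a modifier section run by the SpatialServices — could be sketched roughly as below. All types here (`Placement`, `InputState`, `Controller`) are toy stand-ins I'm assuming for illustration, not the engine's real API:

```cpp
#include <cstdint>

// Hypothetical minimal types; a real Placement/InputServices would be richer,
// and key bindings would go through a configurable mapping, not a raw bool.
struct Placement {
    float x = 0, y = 0, z = 0;
    void moveForward(float dist) { z += dist; }  // toy forward axis
};

struct InputState {
    bool keyW = false;
};

// A Controller interprets input and alters the entity it belongs to.
class Controller {
public:
    virtual ~Controller() = default;
    virtual void readInput(const InputState& in) = 0;  // run by InputServices
    virtual void applyTo(Placement& p, float dt) = 0;  // run by SpatialServices
};

class CameraController : public Controller {
public:
    // Input section: translate "key w active" into a "move forward" command.
    void readInput(const InputState& in) override { moveForward_ = in.keyW; }

    // Modifier section: execute the stored command during the spatial update.
    void applyTo(Placement& p, float dt) override {
        if (moveForward_) p.moveForward(speed_ * dt);
    }

private:
    bool  moveForward_ = false;  // internal state: the implicit command
    float speed_ = 5.0f;         // units per second
};
```

The point of the split is visible in the two overrides: neither sub-system talks to the other; each one only runs its half of the component.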

InputServices::update() first reads in what the OS and the configured input device drivers provide. The InputServices does not just cache key states; it holds a history of all necessary input state changes, including timestamps. This also allows input combos to be detected. After input is read and the game state's own input processing has run, the Controller instances are run on the current input history. Whenever a controller determines that input is relevant, it changes its internal state accordingly. In the case of the CameraController, which, as mentioned, is also part of the SpatialServices, the subsequent update of the SpatialServices will then move the Placement of the camera.
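A timestamped history like the one described could look like this sketch — a bounded log of key state changes that queries (here a hypothetical "tap" check, as a building block for combos) can scan. The names and the particular query are my assumptions:

```cpp
#include <cstdint>
#include <deque>

enum class Key : uint8_t { W, A, S, D };

// One recorded input state change, with the time it occurred.
struct InputEvent {
    Key      key;
    bool     pressed;      // true = key-down edge, false = key-up edge
    uint64_t timestampMs;
};

class InputHistory {
public:
    void record(Key key, bool pressed, uint64_t nowMs) {
        events_.push_back({key, pressed, nowMs});
    }

    // Was this key pressed and released within windowMs? (combo building block)
    bool wasTapped(Key key, uint64_t windowMs) const {
        uint64_t downAt = 0;
        bool haveDown = false;
        for (const auto& e : events_) {
            if (e.key != key) continue;
            if (e.pressed) { downAt = e.timestampMs; haveDown = true; }
            else if (haveDown && e.timestampMs - downAt <= windowMs) return true;
        }
        return false;
    }

    // Drop entries older than maxAgeMs so the history stays bounded.
    void trim(uint64_t nowMs, uint64_t maxAgeMs) {
        while (!events_.empty() && nowMs - events_.front().timestampMs > maxAgeMs)
            events_.pop_front();
    }

private:
    std::deque<InputEvent> events_;
};
```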

In the above description the raw input "key w is pressed" is detected and translated into a "move camera forward" command by the input section of the CameraController; the command is stored in the modifier section of the CameraController and executed by that section during the update of the SpatialServices. (BTW: the input section and the modifier section of the CameraController component need not be parts of a single object.)

The point is that although it seems that the InputServices feed the SpatialServices, in fact components plugged into the InputServices "communicate" with components plugged into the SpatialServices. The coupling results from the nature of the components and happens during instantiation of entities, not from the nature of the sub-systems.

In such a solution the modifier is explicit and the command is implicit. Of course, you can use a command queue for the SpatialServices instead. In that case the commands may have a common header while their bodies may be blobs. I use this kind of solution for graphical rendering, where GraphicJob instances are such blobs passed from the middle rendering layer to the low-level rendering layer.
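The header-plus-blob layout mentioned above could be sketched as a byte buffer of packed commands: a fixed header gives the type and the blob size, and a visitor interprets each blob when the queue is drained. This is an illustrative sketch, not the GraphicJob implementation:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Common header preceding every command blob in the buffer.
struct CommandHeader {
    uint32_t type;      // command id, e.g. a hypothetical MOVE_PLACEMENT
    uint32_t bodySize;  // size in bytes of the blob that follows
};

class CommandQueue {
public:
    void push(uint32_t type, const void* body, uint32_t size) {
        CommandHeader h{type, size};
        const auto* hb = reinterpret_cast<const uint8_t*>(&h);
        buffer_.insert(buffer_.end(), hb, hb + sizeof h);
        const auto* bb = static_cast<const uint8_t*>(body);
        buffer_.insert(buffer_.end(), bb, bb + size);
    }

    // Walk all commands; visit(type, body, size) interprets each blob.
    template <class Visitor>
    void drain(Visitor visit) {
        size_t pos = 0;
        while (pos + sizeof(CommandHeader) <= buffer_.size()) {
            CommandHeader h;
            std::memcpy(&h, buffer_.data() + pos, sizeof h);
            pos += sizeof h;
            visit(h.type, buffer_.data() + pos, h.bodySize);
            pos += h.bodySize;
        }
        buffer_.clear();
    }

private:
    std::vector<uint8_t> buffer_;  // headers and blobs packed back to back
};
```

Because the queue only understands the header, the producing and consuming layers can agree on blob formats privately, which is what keeps the sub-systems decoupled.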

In my game engine, I built my entities as generic objects composed of attributes (data) and behaviors (logic). My game engine is also event-based, so whenever an attribute changes, an event is generated and sent into the event queue. Behaviors are able to register for event notification and take some action whenever an interesting event arrives. The behaviors are also able to access the attributes of the entity for direct reading/writing, depending on what they do. The physics behavior runs the physics model and updates the position based on the outcome. The render behavior reads the position and updates the render scene object with the new position during its update pass.
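The attribute/behavior split described above could be sketched roughly like this — an entity that queues an event whenever an attribute is written, and delivers the queued events to registered listeners during the update. The names and the float-only attribute type are simplifying assumptions on my part:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Event emitted whenever an attribute changes. A real engine would likely
// carry a variant rather than a single float.
struct AttributeChanged {
    std::string name;
    float       value;
};

class Entity {
public:
    using Listener = std::function<void(const AttributeChanged&)>;

    // Behaviors register interest in a named attribute.
    void listen(const std::string& name, Listener l) {
        listeners_[name].push_back(std::move(l));
    }

    // Writing an attribute queues an event; delivery happens in pumpEvents().
    void setAttribute(const std::string& name, float value) {
        attributes_[name] = value;
        pending_.push_back({name, value});
    }

    // Direct read access, as behaviors may also read attributes themselves.
    float getAttribute(const std::string& name) const {
        auto it = attributes_.find(name);
        return it != attributes_.end() ? it->second : 0.0f;
    }

    // Deliver queued events; called once per update.
    void pumpEvents() {
        for (const auto& ev : pending_)
            for (auto& l : listeners_[ev.name]) l(ev);
        pending_.clear();
    }

private:
    std::unordered_map<std::string, float> attributes_;
    std::unordered_map<std::string, std::vector<Listener>> listeners_;
    std::vector<AttributeChanged> pending_;
};
```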

The game engine update phase is done in multiple passes (as described in Jason Gregory's book "Game Engine Architecture"). Since some components depend on other components, they need to be executed in a certain order. Behaviors register interest in a given pass and are run during that pass. Behaviors can register for multiple passes if they want (the animation behavior does this mostly for ragdoll physics updating).

Coupling between behaviors is done either through a shared attribute or through event passing. I had one case where I thought two behaviors needed to share some internal state, and I initially allowed one behavior to get access to the other and read its data directly, but I ended up just pushing the shared state into an attribute.

Cheers,

Bob


Halfway down the trail to Hell...

