Keeping Modules Separated


Hello forum!

I have the following modules: Audio, Window/Graphics (renderer), (Keyboard-)Input and the current Level.

Audio and Window are a direct part of the engine class. The engine class is the first class that gets created and simply owns the game states as well as the audio and graphics modules.

The level state becomes active once the player actually decides to play. As a result, the level module is part of this state.

Within the level module resides the actual collection of entities and their components.

Now, when the level state is updated, the level module iterates over all graphics components of each entity (e.g. player or enemy). The graphics components have to go through quite a few getters to finally reach the graphics module and make it draw them.

The same goes for the audio module.

While the chain of getters conveys "ah, this is the dependency chain", it creates the issue that modules need to know about the existence of other modules.
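To illustrate, here is roughly what that chain looks like (the class and member names are made up for this post, just stand-ins for my real code):

```cpp
#include <iostream>

// Hypothetical stand-ins mirroring the ownership chain described above:
// the engine owns the graphics module and the game states, a state owns
// the level module, the level module owns entities and their components.
struct Sprite { float x = 0.f, y = 0.f; };

struct GraphicsModule {
    void draw(const Sprite& s) { std::cout << "draw at " << s.x << ", " << s.y << '\n'; }
};

struct Engine      { GraphicsModule graphics; };
struct LevelState  { Engine*      engine = nullptr; };
struct LevelModule { LevelState*  state  = nullptr; };
struct Entity      { LevelModule* level  = nullptr; };

struct GraphicsComponent {
    Entity* entity = nullptr;
    Sprite  sprite;

    void draw() {
        // Every link in the chain has to be known here just to reach the renderer.
        entity->level->state->engine->graphics.draw(sprite);
    }
};
```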

Is it wise to switch this to a more centralised approach?

If yes, what design pattern would suit here to avoid that?

My idea: use a messenger module. All other modules only need to know the messenger. An audio component obtains the messenger module through getters and creates a to-do task.

E.g. "audio", "play sound" and "audio proxy".

Once all components are done, the upper modules start to iterate over their very own message board array/list/... and simply do what is needed.

While I can see the audio module removing its tasks once they are handled, the renderer module cannot.

Every graphics component would add its x-coordinate, y-coordinate and its texture proxy/vertex array.

These tasks are executed by the graphics module every frame. When a graphics component wants to change its data, it simply updates its task in the message board.

Drawback: it increases the amount of allocated memory, though not by much, as we are only talking about strings and pointers.
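A rough sketch of what I have in mind (all names are made up, purely to illustrate the idea):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical task matching the ("audio", "play sound", "audio proxy") example above.
struct Task {
    std::string command;
    std::string payload;            // e.g. a proxy name
    float x = 0.f, y = 0.f;         // used by graphics tasks
};

// The messenger keeps one board per module; components only ever know the messenger.
class Messenger {
public:
    // Post a task and hand back an index the component can keep to update it later.
    std::size_t post(const std::string& module, Task task) {
        auto& board = m_boards[module];
        board.push_back(std::move(task));
        return board.size() - 1;
    }

    Task& task(const std::string& module, std::size_t handle) { return m_boards[module][handle]; }

    std::vector<Task>& board(const std::string& module) { return m_boards[module]; }

private:
    std::unordered_map<std::string, std::vector<Task>> m_boards;
};
```

The audio module would walk its board each frame and clear it after handling the tasks, while the graphics module would keep its board around, redraw every task each frame, and let components update their entries in place.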

I know about the KISS principle, but I thought this might be a wonderful way of learning how to tackle this dependency "issue".

Thanks for taking the time to read my thread. I'm really curious what wonderful solutions exist for this case :)


In general terms it is good to have an "event bus", or a registered listener pattern, where various components can listen in on a well-known instance of a messenger, and all interested objects receive an event when the thing they care about happens.

Just don't make it a singleton, as other systems may create their own instance of a private event bus for their own private communications.
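A minimal hand-rolled sketch of what I mean, using string-keyed events and std::function listeners (the names and API here are made up for illustration, not taken from any particular library):

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// An event carries a name and an arbitrary payload pointer; a real bus would
// likely use typed events or std::any instead of a raw void*.
struct Event {
    std::string name;
    const void* payload = nullptr;
};

class EventBus {
public:
    using Listener = std::function<void(const Event&)>;

    // Register interest in one kind of event, e.g. "audio.play".
    void subscribe(const std::string& name, Listener listener) {
        m_listeners[name].push_back(std::move(listener));
    }

    // Notify every listener registered for this event's name.
    void publish(const Event& event) const {
        auto it = m_listeners.find(event.name);
        if (it == m_listeners.end())
            return;
        for (const auto& listener : it->second)
            listener(event);
    }

private:
    std::unordered_map<std::string, std::vector<Listener>> m_listeners;
};
```

The engine would own one instance and hand references to the audio, graphics and input modules; components only ever see the bus.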

As for your specific dependencies, the graphics subsystem usually maintains a graphics scene hierarchy. But that is probably (almost certainly) different from the level's object hierarchy. The level may have objects without a graphics object, objects with multiple renderable items, or objects that can switch between multiple models. The level does not draw the scene; instead, the graphics subsystem draws a scene graph. Often the scene graph is shuffled around to reduce issues like texture changes and shader changes, drawing items with the same materials and shaders all at once, preferably with a single render call.
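As a small illustration of that "shuffling", the renderer might sort its renderable items by shader and texture before issuing draw calls (the IDs and item layout below are hypothetical):

```cpp
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <vector>

// One renderable item in the graphics subsystem's scene, separate from the
// level's own object hierarchy.
struct RenderItem {
    std::uint32_t shaderId  = 0;
    std::uint32_t textureId = 0;
    // ... transform, vertex data, etc.
};

// Sort so that items sharing a shader and texture end up adjacent; the
// renderer can then batch them and keep state changes to a minimum.
void sortForBatching(std::vector<RenderItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const RenderItem& a, const RenderItem& b) {
                  return std::tie(a.shaderId, a.textureId) <
                         std::tie(b.shaderId, b.textureId);
              });
}
```

Nothing in the level needs to know about this ordering; it is purely the renderer's concern.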

Oh, so an event bus is similar to my approach?

How would such an event bus work?

The engine class creates an event_bus, the audio/input module hooks into it, and whenever an audio component of an entity wants to play a sound, it tells the bus that an event of the category "audio" happened?

Once an event has happened, the bus looks for listeners to that kind of event (maybe differentiated by a string?) and notifies each listener with the arguments from the event?

About the scene, I have arrays pointing to the drawable components. Is that a sort of a scene?

Basically yes, that is one way to do it. For positional audio you might pass a reference to the object that made the noise. The system can decide what to do with the event, which may include writing to logs, triggering closed captions, and even playing an audio clip. In general this is my preferred way to handle this type of communication between subsystems.

This avoids most of the problems of systems needing to talk to each other directly. There are times when more direct methods are needed, but message systems are usually extremely fast, since each event may only have one or two listeners.
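Continuing the bus sketch from my earlier reply, a positional audio request could carry a pointer to the emitting object, and the audio module (or a logger, or a captions system) can subscribe to that same event. Everything here is again just illustrative names:

```cpp
#include <iostream>
#include <string>

// Assumes the EventBus/Event sketch from the earlier reply.

// Minimal, hypothetical stand-ins so the example is complete.
struct Entity { float x = 0.f, y = 0.f; };

struct AudioModule {
    void playAt(const std::string& clip, float x, float y) {
        std::cout << "playing " << clip << " at " << x << ", " << y << '\n';
    }
};

// Payload for an "audio.play" event: which clip, and who emitted it, so the
// audio module can position the sound.
struct PlaySoundRequest {
    std::string   clip;
    const Entity* emitter = nullptr;
};

int main() {
    EventBus    bus;
    AudioModule audio;
    Entity      player{3.f, 4.f};

    // The audio module registers interest in "audio.play".
    bus.subscribe("audio.play", [&audio](const Event& e) {
        const auto* req = static_cast<const PlaySoundRequest*>(e.payload);
        audio.playAt(req->clip, req->emitter->x, req->emitter->y);
    });

    // An audio component publishes the request instead of reaching for the
    // audio module through a chain of getters.
    PlaySoundRequest req{"footstep", &player};
    bus.publish(Event{"audio.play", &req});
}
```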

A quick Google search for "Event Bus C#" or "Event Bus C++" finds plenty of examples.

Thanks for explaining!

About the graphics scene, should it behave similar?
Like, I create my objects from a Lua file. Once an entity that owns a graphics component gets created, a pointer to that component is created (the collection of those pointers is a vector).
Should the scene listen to a messenger as well, waiting for instantiations of such components?
I'm not sure if changing textures is a problem when I use proxies to textures.
I imagine a scene to be a collection of drawable objects with their coordinates and such. The scene would take those pointers and use them once the drawing starts.
Since they point to an object that points to a proxy, it should be no issue.
Unless a scene is more of a finished constellation of sprites, like a finished canvas: already drawn but not yet displayed.

Or is the visitor pattern sufficient?
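A rough sketch of how I picture the scene hooking into the bus (again, all names made up; it assumes the EventBus/Event sketch from above and a hypothetical "graphics.component_created" event whose payload points at the new drawable):

```cpp
#include <vector>

// Hypothetical drawable interface; a sprite-based graphics component would implement it.
struct Drawable {
    virtual ~Drawable() = default;
    virtual void draw() const = 0;
};

// The scene is just a collection of non-owning pointers to drawable components.
// It fills itself by listening for creation events instead of being handed
// pointers through getter chains.
class Scene {
public:
    explicit Scene(EventBus& bus) {
        bus.subscribe("graphics.component_created", [this](const Event& e) {
            m_drawables.push_back(static_cast<const Drawable*>(e.payload));
        });
    }

    void drawAll() const {
        for (const Drawable* d : m_drawables)
            d->draw();                 // texture proxies are resolved inside draw()
    }

private:
    std::vector<const Drawable*> m_drawables;
};
```

A matching "component_destroyed" event would be needed to drop stale pointers, and the scene could additionally sort or bucket these drawables by material, as described in the earlier reply.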

