architecture for messages

24 comments, last by ddn3 15 years, 4 months ago
Quote:Original post by adtheice
Now I need to figure out how to sort and store messages so that I can easily query them and collect all the messages of specific types (e.g. all attack messages, or all messages relating to a given id, and so on). Currently I'm trying to figure out a fast and flexible way of doing this in general.
This sounds like a database. If you need the entire feature set of a database, consider a lightweight in-memory database, such as SQLite. Otherwise, you'll have to remove features.
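For example, a rough sketch of what the SQLite route could look like (the table layout and column names here are just illustrative, not a recommendation):

#include <cstdio>
#include <sqlite3.h>

int main() {
    sqlite3* db = nullptr;
    sqlite3_open(":memory:", &db);   // in-memory database, nothing touches disk

    sqlite3_exec(db,
        "CREATE TABLE messages (type TEXT, target_id INTEGER, payload TEXT);",
        nullptr, nullptr, nullptr);

    // Posting a message is one INSERT.
    sqlite3_stmt* insert = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT INTO messages (type, target_id, payload) VALUES (?, ?, ?);",
        -1, &insert, nullptr);
    sqlite3_bind_text(insert, 1, "attack", -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(insert, 2, 42);
    sqlite3_bind_text(insert, 3, "{\"damage\": 5}", -1, SQLITE_TRANSIENT);
    sqlite3_step(insert);
    sqlite3_finalize(insert);

    // "All attack messages" is just a WHERE clause; same for target_id, etc.
    sqlite3_stmt* query = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT payload FROM messages WHERE type = 'attack';",
        -1, &query, nullptr);
    while (sqlite3_step(query) == SQLITE_ROW)
        std::printf("%s\n",
            reinterpret_cast<const char*>(sqlite3_column_text(query, 0)));
    sqlite3_finalize(query);

    sqlite3_close(db);
}

Whether that query flexibility is worth the per-message overhead is something you'd have to measure.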

Quote:Original post by Wyrframe
I think the message-bin idea (broadcast messages are sorted into like bins and delivered later) could work, and give good support for concurrency, under one simple assumption: many listeners can update independently, in separate threads, but a new update cycle doesn't start until everyone has finished the last one.
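In code, that assumption might look roughly like this (my own sketch, not Wyrframe's design: double-buffered bins keyed by message type, swapped only at the cycle boundary):

#include <mutex>
#include <unordered_map>
#include <vector>

struct Message { int type; int sender; /* payload... */ };

class MessageBins {
public:
    // Writers (any thread) append into the back bins.
    void post(const Message& m) {
        std::lock_guard<std::mutex> lock(writeMutex);
        back[m.type].push_back(m);
    }
    // Called once per frame, only after every listener has finished the
    // previous update cycle (the "one simple assumption" above).
    void swap() {
        std::lock_guard<std::mutex> lock(writeMutex);
        front.swap(back);
        back.clear();
    }
    // Readers drain the front bins; no locking needed while the barrier holds.
    const std::vector<Message>& messagesOfType(int type) const {
        static const std::vector<Message> empty;
        auto it = front.find(type);
        return it != front.end() ? it->second : empty;
    }
private:
    std::mutex writeMutex;
    std::unordered_map<int, std::vector<Message>> front, back;
};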


How will you manage proper ordering of messages?

Let's say you have a movement system which updates the locations of moving entities, and a physics system which manages collisions.

Each of these runs in parallel, yet they need to operate on the same data, with a circular dependency on each other.

Granted, in this particular case you can merge both systems into one, but that doesn't solve the general problem.
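For what it's worth, one common partial mitigation (again just a sketch, assuming every message can carry a stamp) is to tag each message with a global sequence number at send time and sort each bin before dispatch, so at least every listener sees the same order no matter which thread posted first. It doesn't resolve the circular dependency itself:

#include <algorithm>
#include <atomic>
#include <cstdint>
#include <vector>

struct Message {
    std::uint64_t seq;   // global stamp taken at send time
    int type;
    // payload...
};

std::atomic<std::uint64_t> g_nextSeq{0};

Message makeMessage(int type) {
    Message m{};
    m.seq = g_nextSeq.fetch_add(1);   // cheap, lock-free stamp
    m.type = type;
    return m;
}

// Sort a bin before dispatch so every listener sees the same order,
// regardless of which worker thread posted first.
void sortBin(std::vector<Message>& bin) {
    std::sort(bin.begin(), bin.end(),
              [](const Message& a, const Message& b) { return a.seq < b.seq; });
}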

This isn't even remotely a trivial problem.
I realize this is not a trivial problem, hence one of the reasons I was asking for help =P
Smile, it won't let you live longer, but it is funny when they die with a smile.
Quote:Original post by adtheice
I realize this is not a trivial problem, hence one of the reasons I was asking for help =P


Not messaging, multi-threading. The biggest challenge, one to which there is no final answer as of yet, is how to distribute the state and how to schedule work in such a way as to utilize multiple cores as effectively as possible.

Adding a generic multi-threading system just for the sake of it may not only degrade performance, but will definitely complicate things a lot.

The Smoke engine I linked is an example of one such attempt.

IMHO - there's no substitute for experience, and what worked for one might not work for others. Code both approaches, then see how they turn out. Prototyping shouldn't take much time, but stick to a single-threaded approach. Messages are just different function calls, but the basic idea is to change the way logic is handled (for decoupling, concurrency, load balancing, networking, ...).
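To make the "messages are just different function calls" point concrete, a single-threaded prototype can literally queue closures (a sketch, not any particular library):

#include <deque>
#include <functional>

// A single-threaded message pump: posting a message just defers a call.
class MessagePump {
public:
    void post(std::function<void()> call) { queue.push_back(std::move(call)); }
    void dispatchAll() {
        while (!queue.empty()) {
            auto call = std::move(queue.front());
            queue.pop_front();
            call();   // the "message" is handled here, later than it was sent
        }
    }
private:
    std::deque<std::function<void()>> queue;
};

// Usage: instead of calling entity.takeDamage(5) directly, post it and
// pump once per frame:
//   pump.post([&entity] { entity.takeDamage(5); });
//   ...
//   pump.dispatchAll();

Once the logic is expressed this way, swapping the pump for per-thread queues or a network layer barely changes the calling code.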

You can even look into a Stackless Python tutorial for an example of how such logic can work.
Quote:
Not messaging, multi-threading. The biggest challenge, one to which there is no final answer as of yet, is how to distribute the state and how to schedule work in such a way as to utilize multiple cores as effectively as possible.


Last time I checked, Erlang did multi-threading right.
Smile, it won't let you live longer, but it is funny when they die with a smile.
Erlang was written to fulfill a very specific set of design requirements for a particular class of applications. It performs well in that environment, but even the creators of Erlang acknowledge it isn't a good fit, or even a reasonable choice, for certain applications; games and other high-performance applications are among them.

You can write guaranteed thread-safe applications if you're willing to sacrifice a lot of performance: just copy state, never share any resources, and use asynchronous event handling for everything. Some languages, Erlang being one, use that philosophy, but as you can see they pay a heavy price for it. For some applications this is the way to go, since you can scale up the number of cores and overcome the performance loss; eventually you reach a break-even point and it becomes a gain, as you more than make up for it with increased stability and uptime.
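To make that concrete, here is a rough C++ sketch of the share-nothing style (my own illustration, not how Erlang is actually implemented): the only way data crosses a thread boundary is as a copy dropped into a mailbox.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct Event { std::string name; std::string payload; };  // copied, never shared

// A per-"process" mailbox: the only way in is by value, i.e. a copy.
class Mailbox {
public:
    void send(Event e) {                      // pass by value: one copy per send
        std::lock_guard<std::mutex> lock(m);
        q.push(std::move(e));
        cv.notify_one();
    }
    Event receive() {                         // blocks until something arrives
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        Event e = std::move(q.front());
        q.pop();
        return e;
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<Event> q;
};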

For games, performance is a critical requirement, so all those copies, duplicated resources, and async events just aren't feasible if you want to be competitive.

Back to the original question about managing events. I use an on-demand scheme of event processing: when an event is put onto an empty queue, the queue registers a callback with an event scheduler, which schedules that queue for dispatch at some future time. Using such a scheme you can avoid singleton or global event processors but still have a mechanism for controlling the dispatching and, in your case, the querying of events.

Just route the callbacks to your query manager, which keeps the set of active queues at any one time. Be sure to check for duplicate events, as an event can be on more than one queue.
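Roughly, in code the on-demand registration might look like this (a minimal sketch; the names are just for illustration):

#include <queue>
#include <set>

struct Event { int type; int id; };

class Scheduler;

// A queue that registers itself with the scheduler only when it goes
// from empty to non-empty, so idle queues cost nothing.
class EventQueue {
public:
    explicit EventQueue(Scheduler& s) : scheduler(s) {}
    void push(const Event& e);
    void dispatch() {
        while (!events.empty()) { deliver(events.front()); events.pop(); }
    }
private:
    void deliver(const Event&) { /* hand off to listeners / the query manager */ }
    Scheduler& scheduler;
    std::queue<Event> events;
};

// Keeps the set of queues that actually have work this frame.
class Scheduler {
public:
    void registerActive(EventQueue* q) { active.insert(q); }   // set de-duplicates
    void dispatchPending() {
        for (EventQueue* q : active) q->dispatch();
        active.clear();
    }
private:
    std::set<EventQueue*> active;
};

void EventQueue::push(const Event& e) {
    bool wasEmpty = events.empty();
    events.push(e);
    if (wasEmpty) scheduler.registerActive(this);   // only on empty -> non-empty
}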

Good Luck!

-ddn

