I disagree! In my view, shared-state concurrency is very different from a cohesive distributed view. Being able to be divergent but eventually consistent on disparate nodes is a hugely important distinction that you don't have in MESI.
What I meant was that concurrency is essentially distribution across nodes with a relatively slow interconnect between them, just like a network. I don't understand how the two can be considered different. As I said before, all computer architecture is built on the idea of local nodes with relatively slow interconnects. Any use of the comm bus, be it between cores, NUMA sockets or cluster nodes, should be minimized in order to increase scaling. The comm bus has two properties that hinder performance: low throughput and high latency. Eventual consistency tries to bypass the latency limitation. However, eventually consistent systems sometimes sacrifice consistency for no good reason. They are architecturally lazy when it comes to object placement, employing a very simple scheme: usually a DHT. With such a bad strategy, sacrificing consistency gives latency benefits. But smarter object placement schemes can achieve the same (amortized, not worst-case) latency while maintaining consistency.
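To make the placement point concrete, here's a minimal sketch (names and the two-object workload are my own illustration, not from any particular system) contrasting DHT-style hash placement with a hypothetical locality-aware scheme that co-locates objects which are always accessed together:

```python
import hashlib

NODES = 4

def dht_place(obj_id: str) -> int:
    """DHT-style placement: node chosen purely by a hash of the id,
    ignoring which objects are accessed together."""
    return int(hashlib.md5(obj_id.encode()).hexdigest(), 16) % NODES

def locality_place(obj_id: str, groups: dict) -> int:
    """Hypothetical smarter placement: objects in the same access group
    land on the same node (group assignment assumed known in advance)."""
    return groups[obj_id] % NODES

# Two objects touched together in every transaction.
txns = [("ship1", "ship2")] * 100
groups = {"ship1": 0, "ship2": 0}  # same group -> same node

def cross_node(place) -> int:
    """Count transactions that must cross the interconnect."""
    return sum(1 for a, b in txns if place(a) != place(b))

print(cross_node(dht_place))  # hash may scatter the pair across nodes
print(cross_node(lambda o: locality_place(o, groups)))  # 0: pair co-located
```

Every cross-node transaction pays the interconnect's latency; the locality-aware scheme eliminates them here without giving up consistency, which is the claimed trade-off.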
which is why middleware is so damn popular in the games industry (contrary to your assertion)
Is it (I'm asking seriously)? It is certainly true on the client side with game engines, but which server-side frameworks are in common use in games?
On that note, with your space-ship example, it's not obvious what kind of ordering constraints are (or can be) applied to the different processes. Are all beams fired before any damage is calculated, etc.?
The demo was purposefully written to simulate asynchronous requests coming over the network, each being processed as it arrives with no global "ticks".
In the Actor Model implementation that I've used for games, these kinds of ordering issues were explicit to the programmer, allowing them to maintain a deterministic simulation, regardless of the number of cores or the scheduling algorithm used.
The same could be achieved with SpaceBase (though that was not the intention in the demo). You could collect all requests and then process them in parallel, deterministically. I'm not sure this property is always so important (or important at all), though. BTW, what actor implementation have you used?
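The collect-then-process-in-parallel idea could be sketched like this (a toy illustration under my own assumptions, not SpaceBase's API): batch requests per "tick", compute each effect as a pure function in parallel, then apply effects in a fixed order, so the simulation state is the same regardless of core count or scheduling:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_damage(req):
    # Pure function of the request; safe to run in any order.
    return req["target"], req["power"] * 2

def tick(state, requests):
    # Sort by sequence number so the batch has one canonical order.
    requests = sorted(requests, key=lambda r: r["seq"])
    with ThreadPoolExecutor() as pool:
        effects = list(pool.map(compute_damage, requests))
    # Apply sequentially, in that canonical order.
    for target, dmg in effects:
        state[target] = state.get(target, 100) - dmg
    return state

state = tick({}, [{"seq": 2, "target": "A", "power": 5},
                  {"seq": 1, "target": "A", "power": 3}])
print(state)  # {'A': 84} no matter how the pool schedules the work
```

Determinism comes from separating the parallel (pure) phase from the sequential (stateful) phase, not from the scheduler.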
most use something closer to a stream-processing or flow-based programming model, which is easy to reason about (you can write the main flow in a single-threaded style and have it be automatically decomposed)
Which frameworks are used for that?
I wouldn't agree with that. Multiple threads sharing the same RAM without restriction gives us the "shared state" model of concurrency. When we build a message passing framework, we build it directly on top of this model.
Of course. Software level messaging abstraction is implemented on top of a shared RAM abstraction, which is, in turn, implemented on top of hardware level messaging. The messages are different, but the point is that shared state is always an abstraction on top of messaging, and computer architecture is such that this cross-"node" messaging should be minimized. We can call it the "shared-state/message-passing duality". Sometimes it's best to work with one abstraction, and sometimes with the other. However, a shared-state framework can certainly improve performance in many cases, as it has more knowledge about the domain semantics to apply optimizations.
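The duality can be shown in a few lines: a message-passing "actor" built directly on shared RAM, where the channel (`queue.Queue`) is nothing but a shared buffer plus a lock and condition variable. This is a minimal sketch of the layering, not any particular framework:

```python
import queue
import threading

inbox = queue.Queue()  # shared memory underneath, messaging on top
results = []

def actor():
    # The "actor" only ever sees messages, never the shared state directly.
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: shut down
            break
        results.append(msg * 2)

t = threading.Thread(target=actor)
t.start()
for m in (1, 2, 3):
    inbox.put(m)
inbox.put(None)
t.join()
print(results)  # [2, 4, 6]
```

The same stack continues downward: the shared-RAM abstraction the queue relies on is itself implemented by cache-coherence messages between cores, which is the point about the two abstractions layering on each other.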