I'm sure this model has a name, but I'm not certain what it is:
- Server and client(s) run the same deterministic simulation
- Clients send input to server
- Server aggregates input from all clients over period of time
- The simulation at the server (and, after the frame propagates, at the clients) integrates time and input.
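The loop above is the core of the idea: the server can only release a frame once it holds input from every client for that tick. A minimal sketch of that aggregation step (names like `LockstepServer` and the tick/command shapes are my own illustration, not an established API):

```python
from dataclasses import dataclass, field


@dataclass
class LockstepServer:
    """Collects every client's input for a tick, then releases the
    combined frame so all simulations advance identically."""
    clients: set
    pending: dict = field(default_factory=dict)  # tick -> {client: command}
    tick: int = 0

    def receive(self, client, tick, command):
        # Buffer each client's command under the tick it targets.
        self.pending.setdefault(tick, {})[client] = command

    def try_advance(self):
        """Return the aggregated frame for the current tick, or None
        if some client's input hasn't arrived yet (the lockstep
        condition: everyone waits for the slowest sender)."""
        frame = self.pending.get(self.tick, {})
        if set(frame) != self.clients:
            return None
        del self.pending[self.tick]
        self.tick += 1
        return frame  # broadcast this; each client integrates it locally
```

The key property is that clients never exchange state, only inputs, which is where the bandwidth saving for densely packed units comes from.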
The reasons I've chosen this model are:
- To reduce bandwidth requirements, and thus also latency (the game involves a lot of densely packed units)
- It's easier to extend and, frankly, more elegant than a dumb-client model (assuming determinism can be assured)
- Enables a more thorough anti-cheating system
There are however three main issues I foresee which I'd like some input on:
Firstly, since the client-side simulation only advances upon receiving a 'frame' of input, it will be highly sensitive to network jitter. I believe this could be solved with the usual latency-compensation/hiding techniques.
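One common way to hide that jitter is a client-side de-jitter buffer: hold a few frames before the simulation consumes them, trading a fixed added delay for smooth playback. A rough sketch under that assumption (the `depth` of 3 ticks is an arbitrary tunable, not a recommendation):

```python
class FrameBuffer:
    """Client-side de-jitter buffer: delay frame consumption by
    `depth` ticks so bursts of late packets don't stall the sim."""

    def __init__(self, depth=3):
        self.depth = depth
        self.frames = {}      # tick -> frame payload
        self.next_tick = 0
        self.primed = False   # don't start playback until backlog exists

    def push(self, tick, frame):
        self.frames[tick] = frame
        if not self.primed and len(self.frames) >= self.depth:
            self.primed = True

    def pop(self):
        """Return the next frame in order, or None if it hasn't
        arrived (caller interpolates/extrapolates to hide the gap)."""
        if not self.primed:
            return None
        frame = self.frames.pop(self.next_tick, None)
        if frame is not None:
            self.next_tick += 1
        return frame
```

The deeper the buffer, the more jitter it absorbs, at the cost of input-to-screen latency — so the depth would likely want to adapt to measured network conditions.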
Secondly, the entire model is predicated on what is essentially a perfect-information game, enabling unscrupulous players to gain information that would otherwise be hidden. Obviously this is not unique to this model, but here it's exaggerated.
Lastly, filtering client input by its relevance to another client (e.g., proximity) would be desirable for both bandwidth and security, but it clearly raises synchronization problems. Partitioning the world-space and having clients run the simulation only for those sectors within proximity might be a solution.
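To make the partitioning idea concrete, here is a minimal sketch of sector-based interest filtering (the grid size, the `radius` of one sector, and the command shape are all assumptions for illustration). The comment notes the synchronization caveat raised above:

```python
SECTOR = 64.0  # hypothetical sector edge length in world units


def sector_of(x, y):
    """Map a world position to integer grid coordinates."""
    return (int(x // SECTOR), int(y // SECTOR))


def relevant_sectors(x, y, radius=1):
    """Sectors within `radius` grid cells of a position."""
    sx, sy = sector_of(x, y)
    return {(sx + dx, sy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}


def filter_inputs(inputs, observer_pos):
    """Forward only commands originating near the observer.
    Caveat: once inputs are filtered, clients no longer see identical
    input streams, so whole-world determinism is lost — each proximity
    cluster would have to run its own consistent sub-simulation."""
    visible = relevant_sectors(*observer_pos)
    return [cmd for cmd in inputs if sector_of(*cmd["pos"]) in visible]
```

This is essentially interest management layered on top of lockstep; the hard part is exactly the one identified above — keeping the per-sector simulations consistent when units cross sector boundaries.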