Hi all, I'm working on a client-server networking model for my game built in Unity, but I haven't found a resolution to my issue.
Basically, the server is authoritative and has a complete game state of all game objects. It runs all the logic and updates on the game objects. Each client also has its "own" copy of the game state, but it doesn't run any of the updates that the server does. It only listens for server messages and then synchronizes its game state accordingly. So if the server happens to stop sending messages, the client's game state appears frozen, because there are no updates.
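To make that concrete, here's a minimal sketch of what my client side does (the names `StateMessage` and `ClientGameState` are just placeholders, not my actual classes):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical snapshot the server sends per object.
[Serializable]
public struct StateMessage
{
    public int objectId;
    public Vector3 position;
    public Quaternion rotation;
}

public class ClientGameState : MonoBehaviour
{
    private readonly Dictionary<int, Transform> objects =
        new Dictionary<int, Transform>();

    // Called whenever a message arrives from the server.
    public void OnServerMessage(StateMessage msg)
    {
        if (objects.TryGetValue(msg.objectId, out Transform t))
        {
            t.position = msg.position;   // overwrite local state
            t.rotation = msg.rotation;   // no client-side simulation
        }
    }

    // Note there is no simulation in Update(): if messages stop
    // arriving, the client's world simply freezes.
}
```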
I had this implemented and everything worked as expected, until I hit a performance problem on an old iPod Touch. The issue is that when one of the players is the host, that device pretty much has to allocate twice as many objects: one set for client processing and another for server processing. The server code still sends messages to the local client, and the client processes them exactly the same way remote clients do, even though it lives on the same device. This turned out to be expensive performance-wise on an old device, but it seems desirable because it's a clean separation of ownership between the client and server code in the same application.
So the question is: in a client-server model, is this how it's commonly done in professional games, like Halo? It seems like a waste of memory to have twice as many objects, and yet it also seems like the clean way to do things. Granted, you don't need to allocate any of the graphical assets for the server-side set of game objects, but they could still consume significant resources.
My current workaround is that when you are hosting a game, I allocate just one set: the client and server share the game objects and update them, with checks in place so they don't step on each other's toes. This workaround might actually be the ideal solution, but I'd need to design my code better around it.
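Roughly the shape of the workaround (just a sketch; the `isServer`/`isClient` flags are placeholders for however you actually track roles, and the two method bodies are stubs):

```csharp
using UnityEngine;

public class SharedNetworkObject : MonoBehaviour
{
    public bool isServer;  // true on the host or a dedicated server
    public bool isClient;  // true on any machine that renders

    void Update()
    {
        // Only the server role runs authoritative simulation.
        if (isServer)
        {
            RunSimulation();
        }

        // Remote clients apply snapshots received over the network.
        // On a host, both flags are true, so this branch is skipped:
        // the single shared object is already up to date, and no
        // loopback message needs to be sent or processed.
        if (isClient && !isServer)
        {
            ApplyLatestServerSnapshot();
        }
    }

    void RunSimulation() { /* authoritative movement, AI, physics */ }
    void ApplyLatestServerSnapshot() { /* lerp toward last received state */ }
}
```

The key check is `isClient && !isServer`, which is what keeps the host's client-side code from stepping on the server-side updates of the shared objects.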
Thoughts and feedback would be greatly appreciated. Thanks.