I'm deep into coding the networking infrastructure for my lockstep-based messaging engine. Unlike Drone's example code, however, I went for UDP.
So far we have, for example, 1 message every say 100ms. If the player generates no NEW input, do you think I should attempt to 'assume' that their new move is the same as their last one? I mean, if a player is running forward for 10 seconds, that's 100 packets that say "player is running forward".
I was just thinking that if I conceptually linked repetitive moves to the predictive system and made assumptions based on that, couldn't I just skip sending the packets and 'just let the prediction do its thing'?
I know initially it seems dangerous, as you can't tell the difference between packets not sent and packets dropped by a router...
What are your thoughts on this? The best I've come up with is to send a special packet that says "no change", but still, you're sending data...
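To make the idea concrete, here's a rough sketch of the sender-side suppression I'm describing. All the names are made up for illustration; this isn't from any engine, just the shape of the idea:

```python
# Sketch: only put an input on the wire when it differs from the previous
# tick's input. The receiver's prediction fills in the suppressed ticks.
# Names and data shapes here are illustrative, not from a real engine.

def frames_to_send(samples):
    """Given per-tick input samples, return only the (tick, input) pairs
    that actually need sending: ticks where the input changed."""
    out = []
    last = None
    for tick, inp in enumerate(samples):
        if inp != last:
            out.append((tick, inp))
            last = inp
    return out

# 10 seconds of "run forward" at one sample per 100 ms collapses
# from 100 packets down to a single one:
samples = ["forward"] * 100
print(len(frames_to_send(samples)))  # 1
```

The catch, as noted above, is that the receiver can't distinguish "suppressed on purpose" from "dropped by a router" without some extra signal.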
The only reason I'm giving this thought is that humans are SLOW; sampling quickly is good for responsiveness but in general isn't needed.
Is there any way to use UDP and not have a heartbeat?
I was on to something but missed the point. Any turn-based game HAS to receive validation from every client that the last move was received by the other clients, in this case the server.
If a shooting packet was dropped and the other clients missed it, all hell would break loose.
I still think there is something to be gained, but perhaps only occasionally, when the player really is doing the same move over and over. All that's needed then is a special message saying "oh, by the way, those missing X packets were all 'move forward's, as you predicted".
The number of missing packets directly preceding this one needs to be noted so the server can check off the ones it thinks it should have received.
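Roughly what I mean, as a sketch (message fields and names are invented for illustration): the catch-up message carries the count of suppressed packets, and the server only accepts it if that count exactly matches the sequence gap it observed, otherwise it treats the gap as real loss.

```python
# Sketch of handling the "those missing N packets were all repeats"
# catch-up message on the server. Field names are illustrative.

def apply_catchup(server_inputs, last_seq, new_seq, new_input, repeat_count):
    """server_inputs: dict of seq -> input already checked off.
    The message claims the repeat_count packets directly preceding
    new_seq were identical to the last input the server saw."""
    gap = new_seq - last_seq - 1
    if gap != repeat_count:
        # The claim doesn't cover the whole gap, so at least one packet
        # was genuinely dropped: don't guess, ask for a resend instead.
        return False
    last_input = server_inputs[last_seq]
    for seq in range(last_seq + 1, new_seq):
        server_inputs[seq] = last_input  # check off the predicted repeats
    server_inputs[new_seq] = new_input
    return True

inputs = {0: "forward"}
ok = apply_catchup(inputs, last_seq=0, new_seq=5, new_input="shoot", repeat_count=4)
print(ok, inputs[3], inputs[5])  # True forward shoot
```

If the counts don't line up, the server knows something really was lost and can fall back to whatever resend/validation path the turn-based lockstep already needs.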
Obviously this technique needs limits so a person who goes to take a pee-pee during play doesn't get dropped as a lagged connection. All we need to do is set a limit and make sure the client heartbeats well within the timeout.
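The timing relationship is simple enough to pin down in a sketch. The numbers here are placeholders, not tuned values; the point is just that the heartbeat interval sits well inside the timeout, so several heartbeats would have to be lost in a row before an idle player looks lagged:

```python
# Sketch: keep the heartbeat interval well inside the server's timeout,
# so an idle but connected client is never mistaken for a dead one.
# The constants are illustrative placeholders.

TIMEOUT = 10.0             # server drops a client after this much silence
HEARTBEAT = TIMEOUT / 4.0  # send keep-alives often enough that several
                           # would have to drop before the timeout fires

def needs_heartbeat(now, last_send_time):
    """True when it's time to emit a keep-alive even with no new input."""
    return now - last_send_time >= HEARTBEAT

print(needs_heartbeat(3.0, 0.0))  # True: 3.0 s of silence > 2.5 s interval
print(needs_heartbeat(1.0, 0.0))  # False: a packet went out recently
```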