Tonyyyyyyy

Posted 21 November 2012 - 02:06 AM

This may be similar to a problem I encountered. When testing remotely over the internet, local clients worked correctly, but remote players experienced large prediction errors, so it became evident that some factor was affecting the simulation. In my case, I couldn't easily "rewind" physics simulations. Before becoming aware of the bug, I was simulating inputs in real time as they were received. The problem here is that upstream latency shifts the inputs between server and client: the client stores them immediately at the current tick, whereas the server may receive and store them several ticks later, resulting in a discrepancy.

For me, since I don't have full access to the physics system, the fix involves the client forward-predicting when its inputs will be received by the server. If the upstream latency is 5 game ticks, the client predicts the results of those inputs and stores them at the current tick + latency (5 ticks). For you, since you're working in C++ and presumably have more control over the physics steps, you'd send the intended tick number with the input packet, and the server would "rewind" to that state and simulate forward for the time step. This accounts for any latency.
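Roughly, in C++ (a sketch only; InputPacket, ServerSim, the per-tick snapshot map, and the fixed 60 Hz step are my own illustrative assumptions, not details from this thread):

#include <cstdint>
#include <map>

struct InputPacket {
    uint32_t tick;        // intended simulation tick for this input
    float moveX, moveY;   // input state for that tick
};

struct PlayerState { float x = 0.0f; float y = 0.0f; };

// Client side: stamp the input ahead by the estimated upstream latency,
// so client and server store it on the same tick.
InputPacket stampInput(uint32_t currentTick, uint32_t latencyTicks,
                       float moveX, float moveY) {
    return { currentTick + latencyTicks, moveX, moveY };
}

// Server side: keep a state snapshot per tick; when an input arrives for
// a tick that has already been simulated, restore that snapshot and replay.
class ServerSim {
public:
    void onInput(const InputPacket& in) {
        inputs_[in.tick] = in;
        if (in.tick < currentTick_) {          // late input: rewind and replay
            const uint32_t target = currentTick_;
            state_ = snapshots_.at(in.tick);   // restore the saved state
            currentTick_ = in.tick;
            while (currentTick_ < target) stepTick();
        }
    }
    void stepTick() {
        snapshots_[currentTick_] = state_;     // snapshot before applying input
        auto it = inputs_.find(currentTick_);
        if (it != inputs_.end()) {
            const float dt = 1.0f / 60.0f;     // fixed 60 Hz time step
            state_.x += it->second.moveX * dt;
            state_.y += it->second.moveY * dt;
        }
        ++currentTick_;
    }
private:
    uint32_t currentTick_ = 0;
    PlayerState state_;
    std::map<uint32_t, PlayerState> snapshots_;
    std::map<uint32_t, InputPacket> inputs_;
};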


Sorry for the late reply!
It mostly works now, except I'm hitting some floating-point issues. To make my server frame-rate agnostic, the client sends the server the "simulation time", which is basically the time the client multiplied the movement velocity by (V*t). Each frame, the server subtracts 0.0166667 from this value (the server runs at 60 FPS), or the remaining value if it is less than 0.0166667. Unfortunately, this causes some small floating-point precision errors. Add in packet drops (if packet #2 was dropped, when I receive packet #3 I just process #2 with the information from packet #1 and then process #3), and the client slowly becomes inaccurate.
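Here is that scheme as a rough sketch (names are illustrative). The drift comes from client and server splitting the same total time into different float slices: floating-point multiplication and addition are not exactly distributive, so the server's sum of V*dt terms ends up a few ULPs away from the client's single V*t, and the error compounds over time.

#include <algorithm>

struct Player {
    float x = 0.0f;
    float vx = 0.0f;
    float simTimeLeft = 0.0f;   // "simulation time" received from the client
};

void serverFrame(Player& p) {
    const float frameDt = 1.0f / 60.0f;              // ~0.0166667 s at 60 FPS
    const float dt = std::min(p.simTimeLeft, frameDt);
    p.simTimeLeft -= dt;
    p.x += p.vx * dt;   // V * t; the client integrated the same total time
                        // with different dt slices, so the results diverge
                        // by a tiny amount that accumulates
}

One common workaround (not something from this thread) is to send an integer tick count instead of a float duration, so both sides integrate with identical fixed dt values and there is no running float budget to drift.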

Is there something I'm doing wrong here? Is it necessary for the client to also send its predicted position, and have the server decide whether the error is acceptable and, if so, reset its position to the client's? Or should the server just be the absolute authority?
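A sketch of the first option (the tolerance value and names are made up): the server simulates the input, compares its result against the position the client predicted for the same packet, absorbs tiny float drift by adopting the client's value, and stays authoritative otherwise.

#include <cmath>

struct Vec2 { float x = 0.0f; float y = 0.0f; };

// True if the client's predicted position is close enough to accept.
bool withinTolerance(const Vec2& serverPos, const Vec2& clientPredicted) {
    const float kTolerance = 0.1f;   // hypothetical, in world units
    const float dx = serverPos.x - clientPredicted.x;
    const float dy = serverPos.y - clientPredicted.y;
    return (dx * dx + dy * dy) <= kTolerance * kTolerance;
}

// If withinTolerance(...) the server adopts the client's position, silently
// absorbing precision drift; otherwise it keeps its own result and sends a
// correction that the client must snap to.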

Each of my input packets is currently 10 bytes and 5 bits, and adding the predicted position would cause it to be 22 bytes and 5 bits, effectively doubling the original size.
Just for movement, I'd be sending out 5.4 kbit/s.
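For reference, the arithmetic behind those numbers (assuming the predicted position is three 32-bit floats and roughly 30 input packets per second; neither is stated above):

  current packet:      10 bytes + 5 bits   =  85 bits
  position (3 floats): 12 bytes            =  96 bits
  with position:       85 + 96 = 181 bits  =  22 bytes + 5 bits
  bandwidth:           181 * 30 = 5430 b/s ~  5.4 kbit/s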

Thanks!
