#3Tonyyyyyyy

Posted 22 November 2012 - 10:01 PM

EDIT: Issue solved! Thanks for all the help!

#2Tonyyyyyyy

Posted 22 November 2012 - 03:45 PM

My problem is this:
The client samples input every 5 frames (for testing)
Every frame, the client uses the last sampled input to move the player smoothly (position += VELOCITY * elapsedSeconds; totalElapsed += (int)(elapsedSeconds * 1000f);)
Client sends "simulation time" to server (in the case of 5 frames, 192ms)
Server receives the time and sets "client_sim_time" to that value
Every server tick, server deducts min(client_sim_time, 1f / TICKRATE) from client_sim_time

It is on that last step that the floating-point precision error occurs.
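
For context, here is a rough, self-contained sketch of the flow described above. Only the names client_sim_time and TICKRATE come from the description; the class name, the tick rate value, and the overall structure are assumptions for illustration, not my actual code.

using System;

// Sketch of the server-side deduction described above (assumed structure).
class ServerTimeSketch
{
    const float TICKRATE = 66f;          // assumed server ticks per second
    static float client_sim_time = 0f;   // unconsumed client simulation time, in seconds

    // Called when the client's "simulation time" message arrives (e.g. 0.192s for 5 frames).
    static void OnClientSimTime(float seconds)
    {
        client_sim_time = seconds;
    }

    // Called once per server tick; deducts at most one tick's worth of time.
    static void ServerTick()
    {
        float consumed = Math.Min(client_sim_time, 1f / TICKRATE);
        client_sim_time -= consumed;     // repeated float subtraction is where the drift shows up
        // ... advance the player's simulation by 'consumed' seconds here ...
    }

    static void Main()
    {
        OnClientSimTime(0.192f);
        while (client_sim_time > 0f)
            ServerTick();
        Console.WriteLine(client_sim_time); // reaches 0 here; the drift appears when re-accumulating, as below
    }
}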

The following will print "False":
using System;

class Program
{
    static void Main(string[] args)
    {
        float sum = 0.192f;              // client "simulation time" in seconds (5 frames -> 0.192s)
        float sub_sum = 0f;              // time re-accumulated tick by tick
        float frames = sum / (1f / 60f); // (unused here)
        float multiplied = 0.192f * 5;
        while (sum > 0)
        {
            // Deduct at most one server tick (1/66 s) per iteration.
            float elapsed = Math.Min(1f / 66f, sum);
            Console.WriteLine(sum -= elapsed);
            sub_sum += elapsed * 5;
        }
        Console.WriteLine();
        Console.WriteLine(sub_sum);               // 0.9599999
        Console.WriteLine(multiplied);            // 0.96
        Console.WriteLine(sub_sum == multiplied); // False
        Console.ReadKey();
    }
}

Output:
[screenshot of the console output; note how the numbers are not exact]
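
As a side note, printing both values with the round-trip "R" format (for example, just before the Console.ReadKey() in the snippet above) shows every significant digit, since float's default ToString rounds the displayed value:

        Console.WriteLine(sub_sum.ToString("R"));    // full precision of sub_sum
        Console.WriteLine(multiplied.ToString("R")); // full precision of multiplied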

I could use the decimal data type, but that would be suboptimal, since it carries some overhead.
I tried rounding earlier, and it worked, but then the player simulation ran slower than the server time, which I'd rather not have to deal with.
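
For what it's worth, here is roughly what the decimal variant of the snippet above would look like. This is just a sketch (the class name and comments are mine), and the overhead concern above still applies:

using System;

class DecimalSketch
{
    static void Main(string[] args)
    {
        // Same accumulation as the float version above, but with decimal.
        decimal sum = 0.192m;
        decimal sub_sum = 0m;
        decimal multiplied = 0.192m * 5;
        while (sum > 0)
        {
            decimal elapsed = Math.Min(1m / 66m, sum);
            sum -= elapsed;
            sub_sum += elapsed * 5;
        }
        Console.WriteLine(sub_sum);               // 0.96 (with trailing zeros from the decimal scale)
        Console.WriteLine(multiplied);            // 0.960
        Console.WriteLine(sub_sum == multiplied); // True here (== compares values, not digit strings)
    }
}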

