
### #ActualBacterius

Posted 22 December 2012 - 09:35 AM

```java
deltaMs = (int) ((currentUpdateTime - lastUpdateTime) / (1000 * 1000));
```

In this line, the division is not a floating-point division, because both operands are integers (long). You need to promote at least one operand to a floating-point type for the division to change from integer to floating-point; otherwise your integer cast doesn't actually round anything - the result of the integer division is truncated, which will cause the simulation to occasionally "jump" slightly forward, nondeterministically, depending on the current and last update times. The cast to int will also overflow for sufficiently large deltas... but that shouldn't be a problem here.
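To illustrate, here is a small sketch (the class and method names are mine, not from the original snippet) contrasting the truncated integer division with one where an operand has been promoted to double:

```java
public class TruncationDemo {
    // Integer division truncates: the fractional milliseconds vanish
    // before the (int) cast ever gets a chance to "round" anything.
    static int truncatedMs(long elapsedNs) {
        return (int) (elapsedNs / (1000 * 1000));
    }

    // Promoting one operand to double makes the whole division
    // floating-point, so the fraction survives.
    static double promotedMs(long elapsedNs) {
        return elapsedNs / (1000.0 * 1000.0);
    }

    public static void main(String[] args) {
        long elapsedNs = 1_700_000L; // 1.7 ms, expressed in nanoseconds
        System.out.println(truncatedMs(elapsedNs)); // 1 (the 0.7 ms is silently lost)
        System.out.println(promotedMs(elapsedNs));  // 1.7
    }
}
```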

Though your choice of deltaMs being an integer is puzzling. That automatically limits your accuracy to one millisecond, which will cause trouble whenever your frame time isn't a whole number of milliseconds. For instance, 30 frames per second is not exactly 33 milliseconds per frame - it's 33.333... That'll cause your timing to drift by roughly one millisecond every three frames. Is there any reason you can't use a double as the delta time for your game?

Usually, playing with nanoseconds, milliseconds and seconds at the same time is an easy way to get lost in these precision issues. I recommend just using doubles everywhere and enforcing a unit of seconds for consistency.
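A minimal sketch of what that looks like, assuming the timestamps come from System.nanoTime() as in your snippet (the helper name is hypothetical):

```java
public class DeltaTime {
    // Convert a nanosecond interval straight to seconds, as a double.
    // The 1e9 literal is a double, so the division is floating-point
    // and no precision is thrown away by integer truncation.
    static double deltaSeconds(long lastUpdateTime, long currentUpdateTime) {
        return (currentUpdateTime - lastUpdateTime) / 1e9;
    }

    public static void main(String[] args) {
        long last = System.nanoTime();
        // ... update and render a frame here ...
        long current = System.nanoTime();
        double dt = deltaSeconds(last, current); // elapsed time in seconds
        System.out.println(dt);
    }
}
```

With everything in seconds, your movement code becomes the usual `position += velocity * dt` with no unit conversions scattered around.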
