Determining player size and speed for optimal floating point accuracy

I often wonder why games use the unit scales that they do. How are the numbers determined? If 32-bit floats become problematic in the tens of thousands of units, why not scale everything down to where there is significantly more precision?

For example, Quake uses one texel == one world unit, which comes out to the player being 48 units tall and running at 320 units per second. If the player were 24 units tall and ran at 160 units per second, would it be safe to double the maximum world dimensions relative to the player (keeping them the same in raw units), or would that introduce precision issues elsewhere?

Due to the exponent representation of floating point numbers, the scale of the numbers doesn't matter, only their relative values. You can multiply or divide all values by 10^6 and produce similar results. The main considerations are overflow and underflow. Overflow can happen if you use very large numbers (e.g. astronomical in scale). For instance, when calculating the Euclidean distance between the Sun and Pluto in units of nanometers, the squaring inherent in distance calculations causes overflow beyond the 10^38 maximum for 32-bit floats. Underflow is the opposite issue and can be encountered if values are very small (near 10^-38). This most often occurs with decaying exponential functions (e.g. repeatedly computing x *= 0.5), which are encountered often in signal processing with IIR filters.
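A quick sketch of both failure modes in C++ (the Sun-Pluto distance is only approximate, and the variable names are mine):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Overflow: the Sun-Pluto distance (~5.9e12 m) expressed in nanometers.
    float d = 5.9e21f;                  // ~5.9e12 m * 1e9 nm/m
    float dSquared = d * d;             // ~3.5e43 exceeds ~3.4e38 -> +inf
    std::printf("d^2      = %g\n", dSquared);            // inf
    std::printf("distance = %g\n", std::sqrt(dSquared)); // inf, not 5.9e21

    // Underflow: repeated halving, like a decaying IIR filter state.
    float x = 1.0f;
    int halvings = 0;
    while (x > 0.0f) {
        x *= 0.5f;  // passes through denormals, then flushes to exactly 0
        ++halvings;
    }
    std::printf("reached zero after %d halvings\n", halvings); // ~150
    return 0;
}
```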

With 32-bit floats you get 24 bits of precision (23 mantissa bits plus one extra bit always assumed to be 1). This means that if you have a number with value X, the spacing between representable values near X is about X*2^-24. So, if you want to have a precision of no worse than 0.1mm (10^-4 m), then the maximum distance from the origin is 10^-4 * 2^24 = 1678 meters (1.678km). If you can tolerate 1mm precision, then you can push it to 16.78km. You also get a sign bit, so really you can do +/-16.78km, for a usable area of 33.56km x 33.56km. Beyond that you need more complex tricks (floating origin, multiple origins, double precision) to produce larger worlds.
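For anyone who wants to see those numbers directly, here's a small sketch that measures the actual spacing between adjacent floats at increasing distances from the origin (std::nextafter returns the next representable value; the measured spacing lands within a factor of two of the X*2^-24 estimate, since the true step is a power of two):

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    const float inf = std::numeric_limits<float>::infinity();
    for (float pos : {1.0f, 100.0f, 1678.0f, 16780.0f, 100000.0f}) {
        // Distance to the next representable float above pos = one ULP.
        float spacing = std::nextafter(pos, inf) - pos;
        std::printf("at %9.1f m, smallest step = %g m\n", pos, spacing);
    }
    return 0;
}
```

At 1678 m the step comes out to about 0.00012 m (the ~0.1mm figure above); at 16780 m it is about 0.002 m.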

In different terms…

You get 6 decimal digits of precision. The 24 significant bits (23 stored in the mantissa plus the implicit leading 1) work out to just over 7 full decimal digits, but only those first 6 decimal digits are guaranteed; the 7th decimal digit might be rounded.

They're called floating point because the decimal point in a floating point number can float. It can be 1.23456, or it can be 123456000000000, or it can be 0.00000123456; it's only those first few significant digits that matter. You can shift the floating point quite a long way. Assuming I count them correctly, that means 1234560000000000000000000000000000000 or 0.000000000000000000000000000000000000123456 or anywhere in between. The hard limits are just a little bit past that: 3.402823e+38 on the top end and 1.175494e-38 on the low end. You can float it as far as you want in that range, but you still only get those first few digits of precision. So if you need 123456000089, those final digits are going to be rounded into oblivion.
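You can watch that rounding happen with the literal from the paragraph above (the exact output assumes typical IEEE-754 hardware):

```cpp
#include <cstdio>

int main() {
    // 123456000089 needs ~37 significant bits; a float keeps only 24.
    float f = 123456000089.0f;
    // Prints 123456004096 on typical IEEE-754 systems: the leading
    // digits survive, the trailing ones round to the nearest
    // representable value (the spacing between floats here is 8192).
    std::printf("%.0f\n", f);
    return 0;
}
```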

As Aressera showed above, if you stay in the range most games do, you can get about 16 kilometers from the origin on either axis before the precision loss becomes apparent to the player and you need to do fancy stuff. If you can stay within about 1.6 kilometers, you get one more decimal digit of precision.

Using floating-point numbers has the big advantage that the choice of scale matters very little. But you pay for this in non-uniform precision as you move away from the origin, in difficulty reproducing results across platforms/compilers/libraries/versions, and in being tempted to use “epsilons” in your comparisons all over the place, which often makes code fragile.

In many cases I think we would be better off using fixed-point precision, and then you do have to think carefully about your choice of scale.
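To make the fixed-point option concrete, here's a minimal 16.16 sketch (the Fixed type, the meter scale, and the operator set are all hypothetical choices for illustration, not a production library). The precision is a uniform 1/65536 m step everywhere, and the maximum world size (about +/-32 km here) falls directly out of the format you choose:

```cpp
#include <cstdint>
#include <cstdio>

// 16.16 fixed point: 16 integer bits (range ~ +/-32768 m at 1 unit = 1 m)
// and 16 fractional bits (uniform step of 1/65536 m, ~15 micrometers).
struct Fixed {
    int32_t raw;  // stored as value * 2^16

    static Fixed fromMeters(double m) { return { (int32_t)(m * 65536.0) }; }
    double toMeters() const { return raw / 65536.0; }

    Fixed operator+(Fixed o) const { return { raw + o.raw }; }
    Fixed operator-(Fixed o) const { return { raw - o.raw }; }
    Fixed operator*(Fixed o) const {
        // Widen to 64 bits so the fractional bits survive the product.
        return { (int32_t)(((int64_t)raw * o.raw) >> 16) };
    }
};

int main() {
    Fixed pos  = Fixed::fromMeters(16000.0);  // 16 km from the origin
    Fixed step = { 1 };                       // smallest representable step
    // Unlike floats, the step size is identical near and far from the origin.
    std::printf("step near origin: %g m\n", Fixed{1}.toMeters());
    std::printf("step at 16 km:    %g m\n", ((pos + step) - pos).toMeters());
    return 0;
}
```

Division and overflow handling are omitted; the point is just that with fixed point the range/precision trade-off is explicit, uniform, and exactly reproducible.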
