What is the significance of 0.89f?

Started by
3 comments, last by irreversible 11 years ago

I was looking through some source code which said:

private static float GROUND_INERTIA = 0.89f;
private static float AIR_INERTIA = 0.89f;
And then I started having these questions:
Why 0.89 of all numbers and why is the above declared as a float instead of a double? Is it always like this? From my research, float is a 32 bit single precision number. Why not use a data type that covers more precision?
I also noticed something in a project I work on: I used System.currentTimeMillis(), which gives you the time as data type long, and the value fluctuates between .89 and .95 when I use println to print it out.
From what I know from books: double has better precision than a long. Why not use a double? I only ever use double and int when I declare variables.
No need to use double if you only need to represent something like 0.89.

Doubles are double the length of a float (duh), and possibly slower to operate on, so unless you need all the bits (long chains of operations where errors can accumulate, an actual need for high precision...), there's no reason to use double.

Especially since, for example, graphics code likes to use floats; people are used to them and only switch to doubles if it's needed.

Kind of like you use ints by default, instead of a 64 bit variant.

o3o
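In Java specifically there's one more wrinkle worth knowing: the `f` suffix on `0.89f` isn't optional, because a bare `0.89` is a `double` literal and Java won't narrow it to `float` implicitly. A quick illustration (not from the original source):

```java
public class FloatLiterals {
    public static void main(String[] args) {
        // float bad = 0.89;   // compile error: lossy conversion from double to float
        float f = 0.89f;       // the 'f' suffix makes it a 32-bit float literal
        double d = 0.89;       // a bare decimal literal is a 64-bit double

        // The two literals rounded 0.89 to different binary values, so after
        // widening the float to double they are not equal.
        System.out.println(f == d); // false
    }
}
```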

Why 0.89 of all numbers

In a lot of physics simulations, which it looks like this comes from, the numbers are kind of experimentally found and tuned to get something that "feels" or "looks" right. For example, you might have a MAX_SPEED value of 10.0f, and if you ask why 10.0f, you'll probably get the answer that because of the size of the game/map, and the way the character moves, etc. 10.0f gives a nice, balanced maximum speed. If you change the size of the map, or the way the character moves, or "zoom" the camera out, you might want to pick a different value.

Same thing with inertia. Objects have some kind of weight/mass, and therefore inertia, associated with them. These values were probably picked because they feel right or help make the game fun. There's no magic, though.
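As a sketch of how such a constant is typically used (an assumption on my part — the original source isn't shown here; platformer movement code usually applies it as a per-frame damping factor):

```java
public class Player {
    // Hypothetical constant mirroring the one in the question.
    static final float GROUND_INERTIA = 0.89f;

    float xVelocity = 10.0f;

    // Per-frame update: multiplying by a factor < 1 makes the velocity decay
    // geometrically, so after n frames it is roughly initial * 0.89^n.
    void updateOnGround() {
        xVelocity *= GROUND_INERTIA;
    }

    public static void main(String[] args) {
        Player p = new Player();
        for (int frame = 0; frame < 6; frame++) {
            p.updateOnGround();
        }
        System.out.println(p.xVelocity); // roughly 4.97 after six frames
    }
}
```

With 0.89, speed drops to about half after six frames (0.89^6 ≈ 0.497); a designer nudges the constant up or down until the skid "feels" right.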

why is the above declared as a float instead of a double? Is it always like this? From my research, float is a 32 bit single precision number. Why not use a data type that covers more precision?

Floats were often faster than doubles, because they're usually smaller (floats are usually 32 bits, doubles are usually 64 bits). These days, however, depending on your processor, your floating point math might always be done in 64-bits, so a 32 bit float might be converted to 64 bits when math is done on it (which means the float may not be faster). On some processors, float might still be faster.

But also, float is smaller. You can fit more floats into your cache, and an object that uses several floats will use less memory than an object with several doubles (which might make a difference if you're working with lots of objects on a system that doesn't have tons of memory). Also, some hardware, like some GPUs, can only work with floats (and not doubles (or at least not without some hacks or speed penalties)). If you have to use floats in one part of your program, and you don't have a specific need for a double, you might as well use float throughout your entire program (then you have to do less casting and conversions... yay!).

Just because double gives you more precision doesn't mean you need that precision. Then again, just because float is smaller or sometimes faster, it doesn't mean you always need a float. So whether you should use float or double is kind of a "meh, do whatever fits your needs the best" situation.

From what I know from books: double has better precision than a long. Why not use a double? I only always use double and int when I declare variables.

long is an integer type, double is a floating point type. Totally different things.
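That difference is also the likely answer to the System.currentTimeMillis() puzzle above: it returns a `long` count of whole milliseconds, so fractional values like .89 can only appear after a division somewhere. A minimal illustration of integer vs. floating-point arithmetic:

```java
public class LongVsDouble {
    public static void main(String[] args) {
        long millis = System.currentTimeMillis(); // whole milliseconds, an integer count
        double seconds = millis / 1000.0;         // dividing by a double yields fractions

        // Integer division truncates; floating-point division doesn't.
        System.out.println(7L / 2);   // 3
        System.out.println(7.0 / 2);  // 3.5
    }
}
```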

Why 0.89 of all numbers

It's probably an arbitrary choice, or something found through experimentation (empirical observation).

Why not use a data type that covers more precision?

It is true that 0.89 cannot be represented exactly by a float, and it can't be represented exactly by a double either; a double just gets closer.
However, the precision difference is subtle unless you're doing highly accurate scientific simulation.
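You can see both roundings in Java with `new BigDecimal(double)`, which preserves the exact binary value actually stored (a small demo, not from the thread):

```java
import java.math.BigDecimal;

public class NearestValue {
    public static void main(String[] args) {
        // The BigDecimal(double) constructor keeps the exact binary value,
        // so these print the true stored values, not a rounded rendering.
        System.out.println(new BigDecimal(0.89f)); // nearest float to 0.89
        System.out.println(new BigDecimal(0.89));  // nearest double: closer, still not exact

        // The two roundings differ, so after widening the float to double:
        System.out.println(0.89f == 0.89); // false
    }
}
```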

Leaving that tiny difference aside, the main reason is performance. 32-bit calculations are between 1.25x and 4x faster depending on the device/architecture executing them, the values need half as much RAM, and they waste less memory on padding for alignment (e.g. in C, a long followed by a double requires 16 bytes instead of 12, whereas a long followed by a float only needs 8 bytes).

From what I know from books: double has better precision than a long. Why not use a double? I only always use double and int when I declare variables.

double is for 64-bit floating point arithmetic (an approximation of the real numbers from algebra), while long in C/C++ is typically a plain 32-bit integer. They're used for entirely different purposes. The 64-bit integer type there is "long long" (or "__int64"); note that in Java, long is already 64 bits.

double takes as much RAM as an __int64, and both take twice as much RAM as a 32-bit long.
On some devices long (or even long long) is astronomically faster than double (e.g. Android devices that lack a floating point unit), while on other architectures there's not much difference.

You can't do bitwise logic on floating point variables (i.e. double), and if you cast back and forth to and from __int64 to do it, you'll get two LHS (load-hit-store) stalls on most architectures, which is a very serious performance penalty.
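The LHS penalty is a C/C++ (and mostly console-hardware) concern; in Java the reinterpretation is at least explicit in the source, via `Float.floatToRawIntBits` and `Float.intBitsToFloat`. A sketch:

```java
public class FloatBits {
    public static void main(String[] args) {
        float f = 0.89f;

        // Reinterpret the 32 bits of the float as an int (no numeric conversion).
        int bits = Float.floatToRawIntBits(f);

        // Now ordinary bitwise logic works; flipping the top bit flips the sign.
        float negated = Float.intBitsToFloat(bits ^ 0x80000000);
        System.out.println(negated); // -0.89
    }
}
```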

If you use a double to store a 32-bit pointer, you also need to cast it, which can cause an LHS.
You can't use a double to store a 64-bit pointer: large values will be rounded, and in the best case scenario the program will simply crash.

That's because an unsigned __int64 can go as big as 18,446,744,073,709,551,615, while the largest integer value a double can represent exactly is 9,007,199,254,740,992 (2^53); integers bigger than that start getting rounded to the nearest representable value.
An unsigned 32-bit long can go as big as 4,294,967,295, btw.
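A quick way to see the 2^53 limit for yourself in Java (where `long` is 64 bits, so it keeps exact integers well past the point where `double` starts skipping them):

```java
public class IntegerLimits {
    public static void main(String[] args) {
        double d = 9007199254740992.0;  // 2^53, the end of the exact-integer range

        // Above 2^53 a double can only represent every other integer:
        System.out.println(d + 1 == d); // true: 2^53 + 1 rounds back down to d
        System.out.println(d + 2 == d); // false: 2^53 + 2 is representable

        // A signed 64-bit long keeps exact integers all the way up to:
        System.out.println(Long.MAX_VALUE); // 9223372036854775807
    }
}
```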

You may want to stop by and look at the common mistakes list, and also experiment with an interactive floating point to binary converter (in Java) that was made for educational purposes.

Could also be a typo.

This topic is closed to new replies.
