.01f != 0.0099999998 !!!!!!!!!!

Started by
44 comments, last by Ravyne 16 years, 11 months ago
hey? Fixed point is just another representation of real numbers; sin/cos work just the same. For example, in a very basic way, multiply the sin/cos result by, say, 32000.0f and round to get a 16-bit number in the range [-32000, +32000], then convert back. Or use precomputed tables, or whatever.

Everything is better with Metal.

um, if you are reasonably anal about your compile options you can use the CPU's FP units and assume they all have the same round-off fuzziness... the only gotcha is that if you want to do something like: use SSE2 if available, otherwise SSE if available, otherwise 3DNow!, otherwise the 387... then you are in for a headache, as each path computes the numbers differently. The IEEE floating-point standard says exactly what each arithmetic operation does, so the round-off error is the same _IF_ you are using an FPU that follows the standard... for compatibility, my knee-jerk thought is to use the 387 for game logic and the fastest path you can get for everything else.



Close this Gamedev account, I have outgrown Gamedev.
Quote:Original post by oliii
hey? Fixed point is just another representation of real numbers; sin/cos work just the same. For example, in a very basic way, multiply the sin/cos result by, say, 32000.0f and round to get a 16-bit number in the range [-32000, +32000], then convert back. Or use precomputed tables, or whatever.


That would only work if sin/cos return the same value on different comps.

F*ck it, I'm just going to use floating point and just see if it works. Oh well.
If you are so worried about the inaccuracies changing an outcome like hit or miss / die or survive / succeed or fail, then you could calculate that outcome and send it along with the data that tells each player the unit attempted the action. It's one extra bit. A fail-safe on discrete jumps like that should limit noticeable game differences, and the inaccuracies should be negligible in anything continuous (if handled correctly).
Quote:Original post by codingsolo
I am setting a float to 0.01f but it immediately gets the value of 0.009999999998! Any insight on the loss of precision of something that only has a hundredth of a decimal place?

Brandon


Some numbers like 0.1 or 0.01 are actually impossible to represent exactly in the IEEE floating-point representation, no matter how many bits of precision you have. This is just down to the way the representation works. I won't go into the details of IEEE (there are plenty of good resources on it out there already), but basically the computer is trying to represent 0.1 in this sort of way:

Ignore the terms in brackets (they would push the total over 0.1):

0.1 = 0.0625 + 0.03125 + ( 0.015625 ) + ( 0.0078125 ) + 0.00390625 + 0.001953125... and so on

where the 0.0625 is halved after every + sign, and each + sign stands for one more bit of precision being used. This sequence tends towards 0.1 but never quite reaches it, and eventually we run out of bits and fail to attain exactly 0.1.

This is the general idea of what is actually happening when you try to store a number like 0.01 in a float.

Quote:Original post by Washu
Quote:Original post by Daniel Miller
Quote:

The IEEE standard goes further than just requiring the use of a guard digit. It gives an algorithm for addition, subtraction, multiplication, division and square root, and requires that implementations produce the same result as that algorithm. Thus, when a program is moved from one machine to another, the results of the basic operations will be the same in every bit if both machines support the IEEE standard. This greatly simplifies the porting of programs. Other uses of this precise specification are given in Exactly Rounded Operations.


Reading this leads me to believe that using floating point *is* safe for networked games that must be in sync each frame. Am I wrong here?

Yes. I've mentioned this in my most recent journal posting, but if one of your clients uses the SSE instruction set and another uses the x87 FPU, you can easily get results that differ in the least significant digits. This is due to a difference in the precision of the registers used to calculate the numbers (a 32-bit float is widened to 80 bits on the FPU, while it stays at 32 bits in the XMM registers).



Also, certain CPUs use different internal precision, which can cause problems with cross-platform code. PPC Macs, for instance, only support 64-bit internal precision, and in the future newer CPUs may drop the old x87 FPU and instead map floating-point ops onto SSE or a newer FPU design. It's really best to use a good epsilon (delta) value for comparisons.

As others have pointed out, the real problem is that you've designed your networking logic in such a way that it doesn't take the imprecision of floating point into account.

throw table_exception("(? ???)? ? ???");

This topic is closed to new replies.
