Does anyone use fixed point math anymore?

Started by
28 comments, last by Hodgman 11 years, 1 month ago

Modified code: http://pastebin.com/mvPR2snF
(Note: some changes are purely for preventing the optimizer from removing entire sections.)

I think you still have a number of issues in your code that skew the result.

One thing is that you do a float->int conversion for each int operation, which will likely slow the int operations down more than it should.

The other is that you use base 10 for the precision; you should use base 2, so that a lot of the muls and divs become shifts.
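For illustration, here's a minimal base-2 fixed-point sketch; the 16.16 format and the helper names are my own choice, not what the test code linked above uses:

    #include <cstdint>

    // Minimal 16.16 fixed-point helpers (illustrative only).
    static const int FX_SHIFT = 16;          // scale factor is 2^16

    int32_t fx_from_float(float f)       { return (int32_t)(f * (1 << FX_SHIFT)); }
    float   fx_to_float(int32_t x)       { return x / (float)(1 << FX_SHIFT); }

    int32_t fx_add(int32_t a, int32_t b) { return a + b; }   // plain integer add
    int32_t fx_mul(int32_t a, int32_t b) { return (int32_t)(((int64_t)a * b) >> FX_SHIFT); }
    int32_t fx_div(int32_t a, int32_t b) { return (int32_t)((((int64_t)a) << FX_SHIFT) / b); }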

The old code had these issues, I believe you're quoting the wrong person.


Does anyone use fixed point math anymore for coordinates in games?

That depends on whether you think accounting makes for a fun game.

I'm thinking of trying to use the same techniques for a modern game. Would this make most of you die laughing and wonder what planet I've been on for the last 13 years, or is it still fairly common?


Outside of the world of microcontrollers or financial applications, there isn't much sense in using fixed point anymore. There may be cases where it still comes in useful (e.g. compressing game assets using 16-bit floats), but that's not really fixed point, and most sane people would simply convert to a 32-bit float as soon as possible. On most architectures there simply isn't a case where a floating-point op is any slower than the equivalent integer op (there are exceptions, but they are rare).

I've always wondered about this question, so I decided to spend the past 30 minutes writing a test in C, compiled with gcc at all optimization levels, and reached the conclusion that fixed point is actually much slower. Again, this test was written using only what I already know about doing fixed point and could be erroneous, but I feel confident in my work.

Here's a screenshot:

http://dl.dropbox.com/u/62846912/FILE/fixedvsfloat.jpg

Those weren't exactly the results I would expect from quick test code, so I dove into the code and made some changes to bring it closer to a fair comparison.
The test still has the issue that it assumes that +, -, * and / are equally common.

Modified code: http://pastebin.com/mvPR2snF
(Note: some changes are purely for preventing the optimizer from removing entire sections.)

I can see some shifts disguised as mult / div here:


                iv = ((int64_t)iv*ivalue[2])/PRECISION;
                iv = ((int64_t)iv*PRECISION)/ivalue[3];

But you'd hope the compiler would already spot that....

                iv = ((int64_t)iv*ivalue[2]) >> 10;
                iv = ((int64_t)iv << 10)/ivalue[3];

The reason why using floating point is "wrong" and fixed point is "correct" is not that vertices within a model are in world space, but that entire objects are (necessarily) in world space. Though floating point will still "work" in many situations.

An object (person, car, box, whatever) near the origin might have a position accurate to 5 nanometers, which seems "cool" but is not really necessary -- nobody can tell. On the other hand, an object on the far end of the world might only have a position accurate to 15 meters.

Bang. It is now entirely impossible for a person to walk, as you can only move in steps of 15 meters, or not at all (and people just don't leap 15 meters).

Assuming you're working in metres and need 1mm of accuracy, this isn't much of a problem unless your game world is a few thousand kilometers across. Fixed point grants you a few more orders of magnitude, but at those scales you'd soon have to switch to a hierarchy of coordinate systems anyway.
I had precision problems with a world size of 10 kilometres, though it was only noticeable as some instability in the physics simulation.

The solution was not to switch to fixed point, but to store world positions in double precision. Everything else - relative positions, velocities, rotations - stayed in single precision, so it was practically a one-line fix and required very little extra storage.

It's no slower, either: unless you are using SIMD, the x87 FPU uses higher precision in registers anyway. It's only when you store to memory that a float becomes 32 bits.
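A rough sketch of that arrangement (struct and field names are made up for illustration, not from the engine described above): absolute positions live in doubles, and anything derived from differences stays in float.

    #include <cstdio>

    // "World positions in double, everything relative in float."
    struct Body {
        double world_x, world_y, world_z;   // absolute position: double precision
        float  vel_x, vel_y, vel_z;         // small, relative quantities: float is plenty
    };

    int main() {
        Body a = { 10000.00, 0.0, 0.0,  1.5f, 0.0f, 0.0f };
        Body b = { 10000.25, 0.0, 0.0,  0.0f, 0.0f, 0.0f };

        // The subtraction is done in double, and only the small result is
        // narrowed to float, so nearby objects keep their relative precision.
        float dx = (float)(b.world_x - a.world_x);   // exactly 0.25
        printf("dx = %f\n", dx);
        return 0;
    }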

In addition to the uses for fixed point already stated (e.g. large worlds), it is also used to make code deterministic across machines/platforms/etc.

It is possible to do this with floating-point code, but your mileage may vary (in my experience it is challenging).

One example of this is RTS games where inputs are broadcast to all clients and each client must update their state and stay in sync.

I actually experienced floating-point determinism varying across machines, in an engine that 'planted' trees around the terrain at game load time. It procedurally generated candidate points all over the terrain and checked the slope of the ground at each point before generating the next one to test. In some cases this threw the scene-generation algorithm off so wildly that two machines in the same game ended up in geometrically different worlds: the PRN code drifted out of sync when some trees got planted on one machine but not the other, purely because of floating-point precision nuances in the slope check.

If both were running the same executable, then this really can't happen. The result of a floating point instruction is defined exactly (the rule is that it should return the nearest floating point number to the exact result, and round to even on ties). Also, if you don't have options like fast math enabled, then the same thing is true for C/C++ code (at least if it's IEEE 754 compliant), so it's likely that this difference was caused by something else.

Fixed-point data can still have its uses for minimizing memory/storage space.

As long as the conversions are not done constantly (and obviously only when one or two bytes give enough range for the values needed).

Example - bone-animation angles, where individual bones get target angles and rates/time periods to tween between individually. With complex choreographed movements and 30+ bones in play, that can be a lot of data for just a few seconds of animation, and you may want to keep many such animations in memory for the figures.

Each fixed-point data value is generally converted once, when it's applied as a delta to the floating-point state variables.
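As a sketch of that idea (the 16-bit format and the 1/64-degree resolution below are arbitrary choices for illustration):

    #include <cstdint>
    #include <cmath>

    // Store a joint-angle delta as 16-bit fixed point and expand it to float
    // exactly once, when it is applied to the animation state.
    static const float ANGLE_SCALE = 64.0f;   // 1 unit = 1/64 of a degree

    int16_t pack_angle(float degrees)    { return (int16_t)std::lround(degrees * ANGLE_SCALE); }
    float   unpack_angle(int16_t packed) { return packed / ANGLE_SCALE; }

    // Called once per bone per keyframe: the stored data stays at 2 bytes per
    // value, while the actual math stays in floating point.
    void apply_delta(float& jointAngle, int16_t packedDelta) {
        jointAngle += unpack_angle(packedDelta);
    }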

Another use can be packing data for attributes of sub-objects (like the inventory items of an NPC, when you have hundreds of both NPCs and items). The inventory is used infrequently and a great deal of attribute data can be compressed down (which reminds me that packed data sent over the network can likewise be more efficient if its size is minimized).

There is a benefit of fewer cache misses when you fit more data in less memory (the simulation I did before had huge data sets that couldn't fit in caches even now). The smaller on-disk size of the data may also speed up some of those operations.

Note that these are specialized uses; the calculations they feed into still mostly use floats once the values are unpacked.





If both were running the same executable, then this really can't happen. The result of a floating point instruction is defined exactly (the rule is that it should return the nearest floating point number to the exact result, and round to even on ties). Also, if you don't have options like fast math enabled, then the same thing is true for C/C++ code (at least if it's IEEE 754 compliant), so it's likely that this difference was caused by something else.


Ah, I wish things were this simple. The result of a floating-point operation is not defined exactly. For instance, the FPU might use a more precise type internally (say, 80 bits) and only round the results when they have to be put in memory. The x87 FPU does this, and I have seen programs where adding a debugging print statement changed the results of a computation, because now an intermediate result had to be written to memory.

You can set the FPU to only use 53 bits of mantissa internally (I primarily use double precision), and that takes care of that particular nasty behavior. Using SSE for floating-point computations also gets rid of the problem. But that's not the end of the horrors: For instance, computations on x86 and on Sparc still don't match to the last bit for operations like sin(), log() and exp(). The IEEE standard doesn't specify their behavior at all, so this is not surprising.
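One way to do that with MSVC targeting 32-bit x86 is _controlfp_s; this is only a sketch, and other toolchains and platforms have their own mechanisms:

    #include <float.h>

    // Clamp the x87 FPU to 53-bit (double) internal precision so intermediate
    // results round the same way whether or not they get spilled to memory.
    // MSVC / 32-bit x86 specific; SSE math is unaffected.
    void use_53bit_precision() {
        unsigned int current;
        _controlfp_s(&current, _PC_53, _MCW_PC);
    }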

Someone in my company had to spend a lot of effort to ensure that we get reproducible behaviors in all our machines, and the solution involves using software implementations for some functions because the hardware implementations can't be trusted.

EDIT: I found this relevant link.

No, that will not solve the issue. The reason why using floating point is "wrong" and fixed point is "correct" is not that vertices within a model are in world space, but that entire objects are (necessarily) in world space. Though floating point will still "work" in many situations.

An object (person, car, box, whatever) near the origin might have a position accurate to 5 nanometers, which seems "cool" but is not really necessary -- nobody can tell. On the other hand, an object on the far end of the world might only have a position accurate to 15 meters.

Bang. It is now entirely impossible for a person to walk, as you can only move in steps of 15 meters, or not at all (and people just don't leap 15 meters). It is also impossible to distinguish 10 people standing in a group, since they all have the exact same position. It is further impossible to properly distinguish between objects moving at different speeds. A bicycle moving at 25km/h stands still. A car moving at 53.9 km/h stands still, but a car moving at 54km/h moves at 54km/h.

This is inherent to how floating point math works (it is a "feature", not so much a "bug"). Fixed point does not have that feature. A person can walk the same speed and be positioned the same at the origin or at the far end of the world.

It doesn't matter that the vertices you render are in their own coordinate system if the entire object cannot be simulated properly or if the entire object is culled away.
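A tiny test of the effect described above (the distance is chosen to make the rounding obvious):

    #include <cstdio>

    int main() {
        // At 2^27 metres (~134,000 km) from the origin, adjacent 32-bit floats
        // are 16 m apart, so a 1 m step rounds straight back to where you were.
        float pos = 134217728.0f;          // 2^27 metres
        float stepped = pos + 1.0f;        // rounds back to pos
        printf("%s\n", stepped == pos ? "didn't move" : "moved");   // prints "didn't move"
        return 0;
    }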

Actually, when used right it does solve the issue. The error you are making is assuming that everything will be rendered using a fixed origin. That is dumb and will cause the problems you described, but there is no reason to do it.
You pick an origin based on your camera position and then render the scene. Everything close to the camera is then rendered in very high precision. Everything very far from the camera is rendered in decreasing precision, but you never notice because the error you are making will be much less than a pixel in image space.

For example, my professional work is on a terrain renderer that can render the whole earth. The render space places (0, 0, 0) at the earth's center, so anything rendered on the surface is more than 6000 km away from that. Doing that the dumb way, or with just 32-bit floats, would never work at sub-metre precision. You never notice, though, because everything that renders itself has a local origin stored at 64-bit precision, while all geometry is actually stored and rendered using 32-bit floats.
Assuming six decimal digits of float accuracy (not a bad rule of thumb), you can of course only represent things 10 km away from the camera with an accuracy of around 1 m. But with any reasonable camera, being off by 1 m at a 10 km distance will have exactly zero visible effect.
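A sketch of that scheme (types and names invented for illustration): keep the per-object origin in double, subtract the camera position in double, and only then narrow to float for rendering.

    // Camera-relative rendering origin: the subtraction happens in double, so
    // anything near the camera keeps full precision even though both positions
    // are millions of metres from the global origin.
    struct Vec3d { double x, y, z; };
    struct Vec3f { float  x, y, z; };

    Vec3f to_render_space(const Vec3d& worldPos, const Vec3d& cameraPos) {
        return Vec3f{ (float)(worldPos.x - cameraPos.x),
                      (float)(worldPos.y - cameraPos.y),
                      (float)(worldPos.z - cameraPos.z) };
    }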

You pick an origin based on your camera position and then render the scene.

Samoth wasn't talking about rendering, but physics / gameplay movement.
i.e. If you're placing/moving objects in absolute world coordinates, then those coordinates break down at some distance from the world-space origin, in the gameplay update code.

Ah, I wish things were this simple. The result of a floating-point operation is not defined exactly. For instance, the FPU might use a more precise type internally (say, 80 bits) and only round the results when they have to be put in memory. The x87 FPU does this, and I have seen programs where adding a debugging print statement changed the results of a computation, because now an intermediate result had to be written to memory.

Yeah, but as he said, this doesn't matter if everyone is running the same program. Everyone will perform the same rounding operations as each other.

If you support cross-platform play, or cross-version play (e.g. playing back old replay files), then yes, you've got a big problem, as any little change in the code may cause a huge change in FP behaviour. In MSVC, the "precise" floating point compilation option helps with this, by rounding to 32-bits after every single damn FP operation, which is really slow, but makes the code obey the IEEE spec.

For example, my professional work is on a terrain renderer that can render the whole earth. The render space places (0, 0, 0) at the earth's center, so anything rendered on the surface is more than 6000 km away from that. Doing that the dumb way, or with just 32-bit floats, would never work at sub-metre precision. You never notice, though, because everything that renders itself has a local origin stored at 64-bit precision, while all geometry is actually stored and rendered using 32-bit floats.
Assuming six decimal digits of float accuracy (not a bad rule of thumb), you can of course only represent things 10 km away from the camera with an accuracy of around 1 m. But with any reasonable camera, being off by 1 m at a 10 km distance will have exactly zero visible effect.

That's a perfectly reasonable solution, but an argument could be made that fixed point is a better choice than floating point for your object positions. This is evident from the fact that a 32-bit float gives you sub-metre accuracy while 32-bit fixed point gives you sub-cm accuracy. I'm sure your doubles give you more than adequate precision, but 64-bit fixed point will give you more accuracy and, perhaps more importantly, more consistency. For rendering, consistency might not matter too much, but for physics simulations it occasionally can.
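A quick back-of-the-envelope check of those two numbers, assuming coordinates spanning roughly ±6,400 km (about the earth's radius):

    #include <cmath>
    #include <cstdio>

    int main() {
        double r = 6.4e6;   // ~earth radius in metres

        // Spacing between adjacent 32-bit floats at that magnitude.
        double float_step = std::ldexp(1.0, (int)std::floor(std::log2(r)) - 23);

        // Step size of a 32-bit fixed-point value covering the range [-r, +r].
        double fixed_step = 2.0 * r / 4294967296.0;   // 2^32 steps

        printf("float spacing at 6400 km: %.4f m\n", float_step);  // ~0.5 m
        printf("32-bit fixed step size:   %.4f m\n", fixed_step);  // ~0.003 m
        return 0;
    }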

Tom Forsyth has a nice article on it http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[A%20matter%20of%20precision]]

