
Archived

This topic is now archived and is closed to further replies.

ragonastick

Floating point vs. Fixed point

15 posts in this topic

Ok, in most 3D games, unless I'm horribly mistaken, the positions of objects are stored as floating point numbers. Now, to me, floating point does not seem to be the best solution for storing coordinates for objects like that. Why? Well, the precision of the number depends on how close to the origin the object is. If the object is at the coordinates (2^50, 0, 0) and you decide to move it 2^-10 units along the x axis, there will be no change to the position of the object. But if the object was at the origin, then it would move. Surely it would be better to have a uniform distribution of possible values throughout the world.

Then there are a lot of wasted bits: special cases for values like 0 (because the mantissa is normalised, it is impossible to represent 0 without a special case) and other special values for divide-by-zeros and square roots of negative numbers. With fixed point there don't need to be any exceptions, so the actual number of different values which can be stored is greater, and it is uniformly distributed.

So the only thing I'm wondering about is speed and support. Is fixed point slower? My logic tells me that it would be faster, but from what I've gathered it is less used, so possibly floating point is simply better optimised, or maybe there is no inherent support for fixed point in CPUs, so the maths functions would have to be rewritten and would end up slower. Thoughts?

Trying is the first step towards failure.
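To put numbers on the example above, here is a minimal C++ sketch (not from the original post; the 16.16 layout is just one possible fixed-point format) comparing a float step far from the origin with a fixed-point step:

    #include <cstdio>
    #include <cstdint>
    #include <cmath>

    int main()
    {
        // A float carries ~24 significant bits, so near 2^50 the gap between
        // adjacent representable values is about 2^27 -- a 2^-10 step vanishes.
        float far_away = std::ldexp(1.0f, 50);   // 2^50
        float step     = std::ldexp(1.0f, -10);  // 2^-10
        std::printf("float far from origin moved: %s\n",
                    (far_away + step != far_away) ? "yes" : "no");   // "no"
        std::printf("float near origin moved:     %s\n",
                    (0.0f + step != 0.0f) ? "yes" : "no");           // "yes"

        // 32-bit fixed point in 16.16: the resolution is 2^-16 everywhere,
        // but the representable range is only about +/-32768 units.
        int32_t pos = 20000 << 16;               // coordinate 20000.0
        int32_t inc = 1 << 6;                    // 2^-10 expressed in 16.16
        std::printf("16.16 fixed point moved:     %s\n",
                    (pos + inc != pos) ? "yes" : "no");              // "yes"
        return 0;
    }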

I think you'll find the problems you attribute to floating point also occur in fixed point. Fixed point doesn't give you any extra precision, you just know exactly what the precision and range are. As long as you are careful with the range of your FP values you'll be fine.

Aside from that, you have an FPU plus an SSE/3DNow unit that will help speed up the floating point operations in the video driver's CPU calculations, and you'll only have to convert your fixed point to floating point to get it down the graphics pipeline when you start doing translations for your objects.

~~~
Cheers!
Brett Porter
PortaLib3D : A portable 3D game/demo library for OpenGL
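A minimal sketch of that last point, assuming a 16.16 fixed-point format and a hypothetical fixedToFloat helper (not from the post): the conversion is a single multiply per component, done just before the coordinates are handed to the floating-point graphics pipeline.

    #include <cstdint>
    #include <cstdio>

    typedef int32_t fixed16;   // hypothetical 16.16 fixed-point position type

    // Convert just before submitting vertices to the API: one multiply each.
    inline float fixedToFloat(fixed16 v)
    {
        return static_cast<float>(v) * (1.0f / 65536.0f);
    }

    int main()
    {
        fixed16 pos[3] = { 10 << 16, (3 << 16) + 0x8000, -(2 << 16) };   // (10, 3.5, -2)
        std::printf("%.3f %.3f %.3f\n",
                    fixedToFloat(pos[0]), fixedToFloat(pos[1]), fixedToFloat(pos[2]));
        return 0;
    }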

quote:
I think you'll find the problems you attribute to floating point also occur in fixed point. Fixed point doesn't give you any extra precision, you just know exactly what the precision and range are. As long as you are careful with the range of your FP values you'll be fine.

Fixed point will give more precision for larger values than floating point would, but floating point gives higher precision for smaller values.

But the actual number of different numbers which can be represented using a 32-bit floating point number is less than the number which could be represented with 32-bit fixed point, because of the bit patterns reserved for error codes and the like.

Sure, being careful with the range will fix it, but then you are wasting precious bits of memory.
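For what it's worth, the reserved bit patterns in a 32-bit IEEE float can be counted directly; this small sketch (my own illustration, not from the post) shows the loss is real but amounts to well under one percent of all patterns:

    #include <cstdio>
    #include <cstdint>

    int main()
    {
        // Every pattern whose exponent field is all ones encodes an infinity
        // or a NaN: 2 signs * 2^23 mantissa patterns = 16,777,216 patterns.
        uint64_t non_finite     = 2ull << 23;
        uint64_t duplicate_zero = 1;             // +0 and -0 encode the same value
        uint64_t total          = 1ull << 32;

        std::printf("distinct finite floats: %llu of %llu patterns (%.2f%% reserved)\n",
                    (unsigned long long)(total - non_finite - duplicate_zero),
                    (unsigned long long)total,
                    100.0 * (double)(non_finite + duplicate_zero) / (double)total);
        return 0;
    }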

quote:
Aside from that, you have an FPU plus an SSE/3DNow unit that will help speed up the floating point operations in the video driver's CPU calculations, and you'll only have to convert your fixed point to floating point to get it down the graphics pipeline when you start doing translations for your objects.

This is what I don't understand. Why are floating point values used more than fixed point when fixed point seems to offer more advantages?

Trying is the first step towards failure.

Just one thing: you can't represent 2^50 in a DWORD. DWORDs are 32 bits, so the best you can do is about 2^32; if you want negative numbers, it's about 2^31; and if you want fractional precision as well, it depends on how many bits you reserve for it!

Fixed-point math was ditched a few years ago for some reason... the FPU is almost as fast as normal integer math. As for precision, use a double. Still not enough (what on earth are you doing? a double should be enough for any 3D app)? Then create your own type, but it sure won't be 32 bits; it will be at least 128 bits or more. (I wrote a class that could handle numbers up to 256 bits, which had about 4 times the accuracy of a double but couldn't hold such large numbers, since it didn't work with mantissas. Calculating PI was a blast.)

Fixed point maths is by no means ditched; it is just used in different ways. For example, pmulhw (the MMX instruction) is in effect multiplying an integer by a fixed point number and returning the integer solution: it multiplies 16-bit fixed point numbers, with the binary point fixed at the beginning of the number, by signed shorts, and returns the upper word of the result (the integer part). I think fixed point numbers can be used very effectively for division in this way.

I agree, however, that fixed point numbers are not appropriate for modern 3D applications, due to their limited range. That said, given the range of numbers involved in 3D engines, a float is totally appropriate except for times when extremely high precision is needed. If you worry about the accuracy in absolute terms for large numbers, then scale your scene down to fit in a smaller range: you lose no accuracy, and the system is more flexible.
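A scalar sketch of what one lane of pmulhw computes (my own illustration, not code from the post): multiply two signed 16-bit words and keep the high half of the 32-bit product, which amounts to multiplying by a pure fraction and gives a cheap approximation of division.

    #include <cstdint>
    #include <cstdio>

    // One lane of pmulhw: (value * frac) >> 16 on signed 16-bit operands.
    // Reading 'frac' as a fraction with the binary point at the far left
    // (i.e. frac / 65536), the high word is the integer part of value * frac.
    static int16_t mul_high(int16_t value, int16_t frac)
    {
        return static_cast<int16_t>((static_cast<int32_t>(value) * frac) >> 16);
    }

    int main()
    {
        const int16_t one_third = 21845;   // roughly 1/3 in this fractional format
        std::printf("600 / 3 ~= %d\n", mul_high(600, one_third));   // prints 199 (truncated)
        return 0;
    }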

One thing to keep in mind is that integer operations are well defined, whereas floating point operations can have small rounding differences between implementations (this is what I've read, correct me if I'm wrong).
There are certain schemes for things like demo recording or synchronised networking models that rely on the same calculations producing the same result across different machines that may need to take this into account.

Fixed point can also be useful in tight inner loops where converting to an integer quickly is important. Scaling an image in a software renderer, for example, i.e. the sort of loop you would consider writing in ASM. (Of course these days this kind of low level code is a lot rarer, and is more the domain of drivers than applications.)

But other than that, I'd stick to floats as much as possible unless you really have no alternative.
A floating point number with N bits of mantissa is always *at least* as accurate as a fixed point number of N bits.
(A standard float has 23 bits mantissa, a double: 52).

If you're unsure about it, figure out some sensible numbers for the range you need and the granularity.
For example, if your playfield is 10000m x 10000m x 10000m and you want a granularity of 25mm (= 0.025m) then you want:
10000m / 0.025m per division
= 400000 divisions
~= 2^18.6 divisions
Which is less than 2^23, so a four byte float would be more than sufficient in this case.
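The same back-of-the-envelope check as a runnable sketch, using the figures above:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double range_m       = 10000.0;   // playfield size along one axis
        const double granularity_m = 0.025;     // smallest step we care about (25 mm)

        const double divisions   = range_m / granularity_m;   // 400000
        const double bits_needed = std::log2(divisions);      // ~18.6

        // A float resolves ~24 significant bits (23 stored + the implicit 1),
        // so 25 mm steps anywhere in a 10 km playfield fit with room to spare.
        std::printf("%.0f divisions -> %.1f bits needed (float significand: 24 bits)\n",
                    divisions, bits_needed);
        return 0;
    }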

> This is what I don't understand. Why are floating point values
> used more than fixed point when fixed point seems to offer
> more advantages?

They don't. If available, floating point is almost always better.

It's true that fixed point dedicates more bits to precision, but at the cost of being limited to a narrow range. E.g. with 32-bit fixed point with 31 bits below the binary point (and 1 sign bit), you only get full precision between 0 and 1. With smaller numbers you get progressively less relative precision, worse than floating point for numbers less than about 0.005.
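A quick sketch of that crossover, comparing the constant step of a 1.31 fixed-point format with a float's magnitude-dependent step (my own numbers, following the paragraph above):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // 1.31 fixed point: a constant absolute step of 2^-31 across [-1, 1).
        const double fixed_step = std::ldexp(1.0, -31);

        // A float's absolute step near x is roughly x * 2^-23 (23 stored
        // mantissa bits). The two match at x = 2^-8, about 0.0039 -- the
        // "about 0.005" crossover mentioned above; below that, float wins.
        const double crossover = std::ldexp(1.0, -8);

        std::printf("fixed step: %.3g, crossover: %.4f\n", fixed_step, crossover);
        std::printf("float step near 1.0:   %.3g\n", std::ldexp(1.0, -23));
        std::printf("float step near 0.001: %.3g\n", 0.001 * std::ldexp(1.0, -23));
        return 0;
    }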

Even if this is not a problem, the upper limit quickly is. E.g. suppose you used fixed point to represent coordinates on the x and y axes, and want to work out the Euclidean distance between two points. The simplest way, using Pythagoras:

dist = sqrt((x1 - x2)^2 + (y1 - y2)^2)

will generally overflow for any accurate fixed point representation. This can be overcome by dividing the inputs by a constant which you later multiply the result by. This can be done quickly using powers of 2 and bit shifts, but it's still a lot of extra work, and usually loses a few bits of accuracy in the process.

Worse, it has to be done for every calculation other than addition and subtraction, and the multiplying/shifting factor needs to be determined each time, often by trial and error. E.g. it's not until you plug numbers into the code and run it through a debugger that you see where the overflows are occurring and where scaling needs to be added or adjusted.
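Here is a sketch of that scaling workaround in a 16.16 format (hypothetical helper names; the pre-shift of 12 is tuned for distances up to roughly 2,800 units, which is exactly the kind of per-case tuning complained about above):

    #include <cstdint>
    #include <cstdlib>
    #include <cstdio>
    #include <cmath>

    typedef int32_t fixed16;   // Q16.16 fixed point

    static fixed16 from_double(double v) { return (fixed16)(v * 65536.0); }
    static double  to_double(fixed16 v)  { return v / 65536.0; }

    // Plain 32-bit integer square root (digit-by-digit method).
    static uint32_t isqrt(uint32_t n)
    {
        uint32_t root = 0, bit = 1u << 30;
        while (bit > n) bit >>= 2;
        while (bit != 0) {
            if (n >= root + bit) { n -= root + bit; root = (root >> 1) + bit; }
            else                 { root >>= 1; }
            bit >>= 2;
        }
        return root;
    }

    // Distance with the pre-scaling trick: shift the deltas down so the sum of
    // squares fits in 32 bits, take the integer root, then shift back up.
    // A few low-order bits of accuracy are lost to the pre-shift.
    static fixed16 fixed_distance(fixed16 x1, fixed16 y1, fixed16 x2, fixed16 y2)
    {
        const int PRESHIFT = 12;                                   // tuned per use case
        uint32_t dx = (uint32_t)(std::abs(x1 - x2) >> PRESHIFT);   // value * 2^4
        uint32_t dy = (uint32_t)(std::abs(y1 - y2) >> PRESHIFT);
        uint32_t sum = dx * dx + dy * dy;                          // distance^2 * 2^8
        return (fixed16)(isqrt(sum) << PRESHIFT);                  // back to Q16.16
    }

    int main()
    {
        fixed16 ax = from_double(100.25), ay = from_double(-40.5);
        fixed16 bx = from_double(-300.0), by = from_double(250.75);
        std::printf("fixed-point distance: %.3f\n", to_double(fixed_distance(ax, ay, bx, by)));
        std::printf("double distance:      %.3f\n", std::hypot(100.25 - (-300.0), -40.5 - 250.75));
        return 0;
    }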

But even without all the above problems, floats are today far faster than fixed point: any processor with an FPU is optimised to do maths very efficiently on it, often with multiple/parallel execution units and long pipelines designed for optimised calculations. Integer units tend to be specialised towards bitwise operations, and are often very bad at basic operations such as multiplication and division.

Library functions such as sin and cos are often only available for floating point, or if they are available for fixed they will be for a different fixed format from the one you use, requiring more scaling and loss of accuracy.

In summary, yes, you can use fixed. Many developers are very experienced with it, having grown up using it on older consoles and PCs. But almost no-one uses it today on any platform that has an FPU, as the benefits of using floating point are far too great.

And don't forget that the dynamical systems folks have brought a lot to the table in terms of explaining how to get the most out of your floating point, and have helped shave away some of the crud that crops up when you're accumulating inaccuracies.

ld

What about for a game like a 2D sidescroller??

I have been thinking about using fixed-point numbers to represent the X/Y coordinates of objects in the level. I do not feel that the range will be a problem with my numbers, but I am afraid that somewhere down the line I will need to cast my integers to floating-point numbers to implement certain formulas...

-LeeB-
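For what it's worth, here is a minimal sketch of that plan (hypothetical names, 16.16 format): keep the coordinates in fixed point, update them with integer math, and convert to float only in the few places a formula really demands it.

    #include <cstdint>
    #include <cstdio>
    #include <cmath>

    // Hypothetical 16.16 fixed-point position for a 2D sidescroller: plenty of
    // range for a tile-based level, constant sub-pixel precision, and a shift
    // gives the integer pixel coordinate for blitting.
    struct FixedPos {
        int32_t x, y;                                          // Q16.16

        int   pixelX() const { return x >> 16; }
        int   pixelY() const { return y >> 16; }
        float floatX() const { return x * (1.0f / 65536.0f); } // for float-only formulas
        float floatY() const { return y * (1.0f / 65536.0f); }
    };

    int main()
    {
        FixedPos player = { 120 << 16, 64 << 16 };
        const int32_t velX = (3 << 16) / 2;                    // 1.5 pixels per frame

        for (int frame = 0; frame < 4; ++frame)
            player.x += velX;                                  // pure integer update

        // Cast to float only where a formula needs it (e.g. an angle).
        float angle = std::atan2(player.floatY(), player.floatX());
        std::printf("pixel position: (%d, %d), angle: %.3f rad\n",
                    player.pixelX(), player.pixelY(), angle);
        return 0;
    }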

quote:

A floating point number with N bits of mantissa is always *at least* as accurate as a fixed point number of N bits.
(A standard float has 23 bits mantissa, a double: 52).


True, but the floating point value will need more than N bits in total to be represented, so if you have a floating point and a fixed point number of the same overall size, there will be numbers which each can't represent (floating point will have more range and greater precision at lower values, while fixed point will have a smaller range with constant precision). But once the floating point number needs to be rounded off because it runs out of bits in the mantissa, there will be rounding error. The same thing will happen.

So the upper range of the floating point number is where (small) errors will occur, and you have to avoid using numbers in that range unless you can handle having the small errors. But that defeats the purpose of having such a large range. Plus, the accuracy you have for small numbers can't be replicated for larger numbers, so depending on the actual use for a number, it might be impossible to exploit the extra accuracy (say the coordinate of an object: if it moves in 0.00001 steps, then after moving too far away from 0 it won't be able to move at all; an extreme example I know, but it gets my point across =)

With fixed point, the accuracy can be guaranteed throughout the entire range. Surely that would be more useful for many things, since we can't use the extra precision or range in some circumstances.

I'm sure there is a logical reason, but I haven't seen it yet. The extra range for calculations is handy, but still, there could be a loss of accuracy using floating point if values get quite high (not all the time, but depending on the actual situation, it could be just one small error which turns things nasty).

Trying is the first step towards failure.

>the extra range for calculations is handy, but still, there could be a loss of accuracy using floating points if values are getting quite high

I think this is purely theoretical. I don't think that in practice anyone has ever run into this problem while designing a game. It could be problematic for some very specialized applications, though.

But when it comes to performance: floating point operations such as multiplication and division are a lot faster than integer ones, at least on modern CPUs / FPUs (fmul is faster than imul, fdiv is *a lot* faster than idiv, fadd is as fast as add but can be better pipelined on very well designed FPUs such as AMD's).

On the other hand, compares are a lot faster on fixed point values: a simple cmp is enough, while a floating point value needs an fcomp plus a costly flag-word transfer operation.

Even just having taken a class in the stuff, I can tell you that it DOES in fact matter:

A difference of .001 in ONE coordinate in a particular vector calculation can amount to a several-orders-of-magnitude error (not always that bad of course, but it crops up in very common solution methods, including inversion).

ld

* Floating point operations do pair. Pairing makes a great difference (!).
* However, FP operations overlap - but this is only true for a very limited set of instructions.
* fmul is (unless my information is very outdated - this is from the Pentium time, I reckon) very much slower than mul/imul. If I remember correctly fmul would take 49-50 cycles or so, whereas mul takes what, 4 or 8?
* You can use SIMD instructions to optimize fixed point math too (MMX).
* Uploading data into the FPU is slower than loading data into the CPU (still, I don't know how this is on modern P-IIII etc. processors).


"This album was written, recorded and edited at Gröndal, Stockholm in the year of 2000. At this point in time money still ruled the world. Capitalistic thoughts were wide spread. From the sky filled with the fumes of a billionarie''s cigar to the deepest abyss drenched in nuclear waste. A rich kid was a happy kid, oh..dirty, filthy times. Let this be a reminder."
- Fireside, taken from back of the Elite album

> * Floating point operations do pair. Pairing makes a great difference (!).

That's correct. They even pair better than integer instructions.

> * However, FP operations overlap - but this is only true for a very limited set of instructions.

Almost all RISC FPU instructions overlap on modern FPUs.

> * fmul is (unless my information is very outdated - this is from the Pentium time, I reckon) very much slower than mul/imul. If I remember correctly fmul would take 49-50 cycles or so, whereas mul takes what, 4 or 8?

Whaaa, your information is _very_ outdated.
fmul takes between 0.5 and 3 cycles on a modern FPU, depending on pairing and overlapping; imul takes between 4 and 6. Both can vary due to cache misses. imul can suffer AGI stalls, fmul cannot.

> * You can use SIMD instructions to optimize fixed point math too (MMX).

Forget MMX. The FPU and 3DNow or SSE(2) are far better and faster. And MMX and the FPU are mutually exclusive because they share the same register set; this does not apply to 3DNow / SSE.

> * Uploading data into the FPU is slower than loading data into the CPU (still, I don't know how this is on modern P-IIII etc. processors).

It's the same; it even uses the same cache. This only applies to fld-type instructions. fild is slower, since the FPU has to convert types.

Conclusion: floating point is almost always faster, except for compares, where fixed point takes the lead.



The only general advantage (i.e. one that almost always exists) is that fixed->integer conversion is way faster than float->integer. As for precision, there's SSE, which gives you HUGE floating point registers (up to 128 bits, I think).

If you don't need to convert to int and back often, there are few reasons not to use floating point.

Also, I'm not sure, but can someone verify/disprove that on the Mac there's pretty much no speed difference between floats and ints? I'm no Mac guy so I'm curious.
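A small sketch of the conversion-cost point (era-specific: on pre-SSE compilers the float cast below turned into an fldcw/fistp/fldcw sequence, which is what made it slow, while the fixed-point case is a single shift):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // 16.16 fixed point to integer: one arithmetic shift.
        int32_t fx = (123 << 16) | 0x8000;    // 123.5
        int px = fx >> 16;                    // 123

        // float to integer: C truncation differs from the FPU's default
        // round-to-nearest mode, so compilers of the time had to switch the
        // FPU control word around an fistp, making this cast surprisingly slow.
        float f  = 123.5f;
        int   pf = (int)f;                    // 123

        std::printf("fixed->int: %d, float->int: %d\n", px, pf);
        return 0;
    }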

> Also, I'm not sure, but can someone verify/disprove that on the
> Mac there's pretty much no speed difference between floats and
> ints? I'm no Mac guy so I'm curious.

On the Mac floating point is faster, sometimes much faster, than integer maths, even without taking account of AltiVec vector units. Key calculations are particularly efficient: single precision FP multiply is almost as quick as addition, and multiply-add is as fast as multiply. Also the large FP register set (32 registers, no overlap with SSE or similar) makes a lot of difference to speed for complex calculations.

I think this is to some extent true of all RISC processors with FP capabilities, though a lot depends on implementation details.
