
# Fast Percentage


## Recommended Posts

This is a really simple question. I'm trying to get an accurate percentage without using floating point. This is how I would calculate a percentage with floats:

fcurrent * (100.0 / fmaximum)

with fcurrent and fmaximum being FLOATs or DOUBLEs. Is this a crappy way to accomplish this? Is there a better way?

Anyway, this is how I've attempted doing it without floating point:

(current * (25600 / maximum)) >> 8

It's the same deal, but I'm making my own point system out of the lower byte. I'm not a math wiz, so criticism will not harm me. I hate that I have to divide to accomplish this, but I know no other way. Thanks for any advice!

EDIT - I forgot to mention that maximum will be in the range 0 - 9999. The higher it is, the less accurate my percentage, right? Hmm, would it be safe to use the entire lower 16 bits for a point system?

[edited by - Jiia on March 18, 2004 2:59:24 AM]
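A minimal sketch of the fixed-point idea, with the multiply done before the divide so the integer division doesn't throw away precision first (the function name and wrapper are mine, not from the post):

```c
#include <assert.h>

/* Fixed-point percentage with 8 fractional bits: 100 scaled by 256.
   Multiplying before dividing keeps the precision that a precomputed
   (25600 / maximum) would throw away in integer math. */
long percent_fixed(long current, long maximum)
{
    /* current <= 9999, so current * 25600 still fits in 32 bits */
    return (current * 25600L / maximum) >> 8;
}
```

With maximum = 9999 and current = 3333 this yields 33, whereas the ordering in the post computes 25600 / 9999 == 2 first and ends up with 26.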

##### Share on other sites
You really shouldn't be bothering yourself about performance until you measure that the thing you want to optimize is a performance bottleneck.
I really can't imagine why percentages would be a bottleneck.
Besides, if you have computed 100.0/fMaxValue once, then computing the % is just a matter of a single FPU operation; it's hard to be faster than that. You can probably do billions of those per second on a modern CPU.

##### Share on other sites
It's more about removing casts to FLOATs than improving speed or performance. My coordinate system is made up of regular 32 bit integers. To get a percentage, it would look like this:

perc = (LONG) FLOAT(current) * (100.0 / FLOAT(maximum))

Which is really ugly. And sometimes the compiler doesn't want to calculate the values in the correct order, so I have to do this as well:

perc = (LONG) FLOAT( FLOAT(current) * (100.0 / FLOAT(maximum)) )

To show it that I want the value calculated as a FLOAT, *then* cast. I really wanted to get rid of all those casts.

By the way, it won't help me to store the 100.0 / maximum calculation. The whole calculation is done for thousands of different values, and never with the same maximum number twice.
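Written out with plain C casts, the grouped version looks something like this (a sketch of the grouping, not the exact engine code):

```c
#include <assert.h>

/* The cast-heavy float version: parenthesized so the multiply happens
   in floating point first, and only the final result is truncated. */
long percent_float(long current, long maximum)
{
    return (long)((float)current * (100.0f / (float)maximum));
}
```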

Thanks for the reply!

##### Share on other sites
Have you tried:
percent = (long)(100.0 * fcurrent / fmaximum)

That way it will work.

Otherwise, use some rounding function.

##### Share on other sites
There's an old trick that's not very accurate, but pretty damn fast.

First, rather than using 100 as your divisor (x*y/100), you pick a power of 2, so that we can use bit shifting rather than division. For the purpose of this demonstration, I'm going to use 1024. Next, I need a lookup table containing 100 values, 0->99. The 0 isn't necessary, but that's a minor detail. The values in this table correspond to percentages of 1024. So, naturally, lookup[50] should yield 512, and lookup[1] would be 10.

Result = Number * Lookup[Percent] >> 10;

Now, as for the reverse, to find what percent a number is of the other, you can't get out of the division, but you can drop the multiplication like so.

Result = (Number << 10)/Other;

That much will tell you what fraction Number is of Other, but in terms of 1024 rather than 100. If you remember that we still have a sorted lookup table, you could binary search it for your number, though by then we've created a bigger bottleneck than we had originally, so I'd recommend just keeping the number in terms of 1024 and adjusting the math accordingly.

[edited by - inmate2993 on March 18, 2004 2:16:49 PM]
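A minimal sketch of the table approach described above (the table and function names are mine):

```c
#include <assert.h>

/* Lookup-table trick: lookup[p] holds p percent of 1024, so
   "p percent of n" becomes a multiply and a shift, no divide. */
long lookup[100];

void init_lookup(void)
{
    for (int p = 0; p < 100; ++p)
        lookup[p] = p * 1024L / 100;   /* lookup[50] == 512, lookup[1] == 10 */
}

/* percent% of number, approximately */
long percent_of(long number, int percent)
{
    return number * lookup[percent] >> 10;
}

/* the reverse: what fraction number is of other, in 1024ths */
long fraction_in_1024ths(long number, long other)
{
    return (number << 10) / other;
}
```

The inaccuracy comes from lookup[p] rounding p*1024/100 down; for example 33% of 1000 comes out as 329 instead of 330.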

##### Share on other sites
Fair enough, you can make calculating percentages faster with some bit shifting. But if calculating percentages is the actual bottleneck of your engine, then you'd better have an E3 booth, because your game is going to be a thousand times better than Doom3 or Half-Life 2.

Spending your time on making simple calculations like this faster is a total waste of time for any game I can think of. The odds that something like this is a performance bottleneck are about zero. Assuming that this is a game optimization, of course...

-me

[edited by - Palidine on March 18, 2004 2:22:49 PM]

##### Share on other sites
unless he is doing a percentage game

##### Share on other sites
In which case he should stop right now hehe.

##### Share on other sites
Inmate2993 -> That's a really clever way to get rid of the divide.

Palidine -> I don't care much about how fast it is, I just want to avoid using FLOATs.

I'm not sure what everyone has against optimization. Perhaps I want to allow gamers too poor to afford decent computers to play my game. Is that a big deal? Maybe every clock tick counts.

By the way, once a game is complete, there are going to be billions of calculations per second going on while you're playing. If you want to toss everything into the "no need to optimize" bin, you're going to have a damn slow game. Then even if you profile the game, your level/map loads and anything else that only happens at a user level is going to crawl like mud. But then, most professional games suffer from the same fate.

##### Share on other sites
quote:
Original post by Jiia
I'm not sure what everyone has against optimization. Perhaps I want to allow gamers too poor to afford decent computers to play my game. Is that a big deal? Maybe every clock tick counts.

The reason everyone is "against optimization" is that you're worrying about low-level optimisations, when there is generally far more mileage in algorithmic improvements. Quite often you can save many thousands of cycles by researching and using a better algorithm, compared to the odd hundred you might pick up from low-level tweaks. Consider sorting a list: suppose you were using a bubble sort. You could try to speed it up by streamlining the compare and the position-swapping code, and perhaps halve the execution time (and probably make the code unreadable at the same time... but that's another story). Or you could choose a better algorithm, for example quicksort, whose comparative improvement will be huge because you carry out far fewer iterations to sort the list, and whose effect is disproportionately noticeable on longer lists.

##### Share on other sites
My current algorithmic approach is to use a percentage of a character's current stamina points to increase the time delay on his animation frames. Trivial, perhaps. But there may be well into the hundreds of characters changing frames every 30 milliseconds.

I don't see how another algorithm will help me.

I can't count the number of professional games I've played that slow down when the enemies begin to mass around you. Do these companies not profile and improve the biggest bottlenecks? Or did they just ignore too many small bottlenecks?

Although using a 128- or 256-based percentage would be a bit difficult to read, defining a 100-based GETPERCENT(cur,max) which shifts things around is not.

If it's about wasting time thinking about small details, I waste far more time typing on forums.

##### Share on other sites
quote:
Original post by Jiia
My current algorithmic approach is to use a percentage of a character's current stamina points to increase the time delay on his animation frames. Trivial, perhaps. But there may be well into the hundreds of characters changing frames every 30 milliseconds.

I don't see how another algorithm will help me.

I can't count the number of professional games I've played that slow down when the enemies begin to mass around you. Do these companies not profile and improve the biggest bottlenecks? Or did they just ignore too many small bottlenecks?

To be honest, I don't see a percentage calculation being your major bottleneck; things I'd worry about more are, say, the AI for NPCs, the physics engine, and the overall renderer.

What I was trying to get at earlier is that you should be examining the higher [than percentage calculation] level algorithms.

If you're really keen on going as low level as possible, I seem to remember that the AAD instruction on the x86 can be hacked to do division to an arbitrary (small, < 1 byte) denominator.

WRT professional games, don't forget they are making an engineering decision about when the game will run sufficiently well on most user systems. They need to get it out of the door as quickly as possible to start making money, whereas the longer they keep it in development, the more money it costs them. So yes, they will optimise the game as much as possible, starting with the largest bottlenecks and working down until the remaining bottlenecks are so small they don't impact too severely on most consumers (note: this still means that it might be acceptable for a consumer to have a noticeable, but not game-degrading, slowdown at perhaps one or two climactic points in the game).

##### Share on other sites
quote:
Original post by Jiia
If it's about wasting time thinking about small details
It is; time spent on very small optimizations is time not spent on more important optimizations.

Furthermore, there is another reason why we say "Make it work, then make it fast"; clarity of intent. 34 / 2 is much clearer than 34 >> 1, for instance.

Cédric

##### Share on other sites
Hrm, animation frames, eh? I can see the need for a faster algorithm. However, there may be a better algorithm than using percentages.

For instance, let's assume that we want Stamina 20 to be twice as fast as Stamina 10, and Stamina 40 to be twice as fast as 20. What we do first is find who has the highest stamina, through a quick search you can perform during the fade-in of the level. Make that number accessible to all objects. Let's assume it's 40. Next, every object gets a frame counter and a frame wait meter. Then, for all objects, do like so:

FrameWaitMeter += Stamina;
if ( FrameWaitMeter > MaxLevelStamina )
{
    FrameWaitMeter -= MaxLevelStamina;
    FrameCounter += 1;
}

One addition and one compare/jump, with perhaps an extra subtract and add, depending on the stamina of the object.
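The accumulator above, wrapped in a compilable sketch (the struct and function names are mine; the field names are from the post):

```c
#include <assert.h>

/* Each tick, add the object's Stamina to a running meter; whenever the
   meter overflows the level's maximum stamina, advance one animation
   frame. Frame rate comes out proportional to Stamina, no divide. */
struct AnimState {
    long FrameWaitMeter;
    long FrameCounter;
};

void tick(struct AnimState *s, long Stamina, long MaxLevelStamina)
{
    s->FrameWaitMeter += Stamina;
    if (s->FrameWaitMeter > MaxLevelStamina) {
        s->FrameWaitMeter -= MaxLevelStamina;
        s->FrameCounter += 1;
    }
}
```

Over the same number of ticks, a Stamina 20 object advances roughly twice as many frames as a Stamina 10 object when MaxLevelStamina is 40.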