# The Ultimate Fast Absolute Value

This topic is 4207 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

UPDATED !!!

Bitwise operation (from Wikipedia, the free encyclopedia): In computer programming, a bitwise operation operates on one or two bit patterns or binary numerals at the level of their individual bits. On many computers, bitwise operations are slightly faster than addition and subtraction operations and significantly faster than multiplication and division operations.

I don't know if there was already an existing fast absolute value in this forum, but I'd like to share my own. This is an absolute value for short int; it returns the new value as unsigned short. The int, long, float, and double versions follow the same idea.

```cpp
inline unsigned short abss(unsigned short g)
{
    if (g & 32768u)
        return 32768u - (g & 32767u);
    return g;
}

inline unsigned int absi(int g)
{
    if (g & 2147483648u)
        return 2147483648u - (g & 2147483647u);
    return g;
}

inline unsigned long int absl(long int g)
{
    if (g & 9223372036854775808llu)
        return 9223372036854775808llu - (g & 9223372036854775807llu);
    return g;
}

inline float absf(float g)
{
    *((unsigned int*)&g) &= 2147483647u;
    return g;
}

inline double absd(double g)
{
    *((unsigned long int*)&g) &= 9223372036854775807llu;
    return g;
}
```

Hope you like its performance.

[Edited by - blueone on June 10, 2006 7:34:59 AM]
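One caveat worth flagging on the float/double versions above: writing through a pointer cast like `*((unsigned int*)&g)` violates C++'s strict-aliasing rules. A sketch of the same sign-bit-masking idea routed through `std::memcpy`, which is well-defined (assuming the usual 32-bit IEEE-754 float layout with the sign in the top bit; `abs_float_bits` is an illustrative name, not from the post):

```cpp
#include <cstdint>
#include <cstring>

// Same bit trick as absf above, but the representation is copied out with
// std::memcpy instead of being reinterpreted through a cast, so there is
// no strict-aliasing violation. Assumes float is 32-bit IEEE-754.
inline float abs_float_bits(float g) {
    std::uint32_t bits;
    std::memcpy(&bits, &g, sizeof bits);   // copy the bit pattern out
    bits &= 0x7FFFFFFFu;                   // clear the sign bit
    std::memcpy(&g, &bits, sizeof g);      // copy it back
    return g;
}
```

The double version is analogous with `std::uint64_t` and the mask `0x7FFFFFFFFFFFFFFFllu`.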

##### Share on other sites
Have you done any tests to show whether this beats out the math.h absolute value?

##### Share on other sites
Those functions are likely slower than, or at best equal to, std::abs/std::fabs. Your implementations also depend on the binary representation of numbers. With std::numeric_limits<> you could make them much more generic (though still not completely independent of the binary representation), but I can promise you the standard functions already take advantage of such tricks wherever they gain anything. (I'm assuming C++; in C you have abs and fabs, which should also be efficiently implemented.)
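One way to read the std::numeric_limits<> suggestion: derive the shift count from `std::numeric_limits<T>::digits` (the number of value bits, e.g. 31 for a 32-bit int) instead of hard-coding it, so one template covers every signed integer width. A sketch, still assuming two's complement and an arithmetic right shift, as discussed later in this thread; `generic_abs` is an illustrative name:

```cpp
#include <limits>
#include <type_traits>

// Generic branchless abs: the shift count comes from numeric_limits
// rather than a hard-coded 31, so the same template works for short,
// int, long long, etc. Still assumes two's complement and that right
// shift of a negative value sign-fills (implementation-defined pre-C++20).
template <typename T>
T generic_abs(T x) {
    static_assert(std::is_signed<T>::value, "signed integer types only");
    const int shift = std::numeric_limits<T>::digits; // value bits: 31 for int32
    T a = x >> shift;       // 0 for x >= 0, all ones for x < 0
    return (x ^ a) - a;     // identity for x >= 0, ~x + 1 == -x for x < 0
}
```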

##### Share on other sites
Erm... You made an absolute function which takes an unsigned value? Here's a faster version of the same:

```cpp
unsigned short abss(unsigned short g) { return g; }
```

And using branches will *not* be faster than the built in abs instruction. You might want to consider testing your code before proclaiming it "ultimate fast"... [wink]

##### Share on other sites
Your abs() is *horribly* slow with that branch. Try this on for size:

EDIT: Mine was horribly broken and I didn't even notice... Eric's below is quite nice.

...broken source removed...

This is for 32-bit integers; other sizes are obviously trivial. It will work for both 1's and 2's complement arithmetic, but is still dependent upon the highest bit being set for a negative number (which is a fairly safe bet).

Of course, believing that any modern implementation isn't already doing this (or better) would be just plain silly.

[Edited by - joanusdmentia on June 10, 2006 8:45:00 AM]

##### Share on other sites
Heh. For floats and doubles on x86 there's the fabs instruction, which might (for all I know) be used in the library functions. Does 1 instruction count as high performance?

##### Share on other sites
This is the fastest absolute value you can have for a 32-bit int (assuming no native instruction):
```cpp
inline int32 Abs(int32 x)
{
    int32 a = x >> 31;
    return (x ^ a) - a;
}
```
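A quick sanity check of the trick, with `std::int32_t` standing in for the poster's `int32` typedef and assuming an arithmetic right shift: for negative x, `a` is all ones, so `(x ^ a) - a == ~x + 1 == -x` in two's complement; for x >= 0, `a` is 0 and x passes through unchanged.

```cpp
#include <cstdint>

// Reproduction of the branchless abs above with a fixed-width type.
// Assumes x >> 31 sign-fills (arithmetic shift) and two's complement.
inline std::int32_t branchless_abs(std::int32_t x) {
    std::int32_t a = x >> 31;   // 0 if x >= 0, -1 (all ones) if x < 0
    return (x ^ a) - a;         // conditional negate without a branch
}
```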

##### Share on other sites
Actually, Eric, your fast abs code is not portable. It relies on right shifting a *signed* integer, which is a no-no.

Your method relies on right shifting a negative value giving 0xFFFFFFFF, but some architectures will give 0x00000001 instead. C++ does not guarantee that the sign bit is preserved on a shift; whether a right shift of a negative signed int sign-fills or zero-fills is implementation-defined.

You could replace

```cpp
int32 a = x >> 31;
```

with

```cpp
int32 a = ~(((x >> 31) & 1) - 1);
```

which I believe is then portable.
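Folding that replacement into the full function gives a self-contained version (sketched here with `std::int32_t`; `portable_abs` is an illustrative name): `((x >> 31) & 1)` isolates the sign bit whether the shift sign-fills or zero-fills, so `a` ends up -1 for negative x and 0 otherwise, either way.

```cpp
#include <cstdint>

// Branchless abs using the portable mask from this post. The & 1 keeps
// only the shifted-down sign bit, so the result is the same whether the
// implementation does an arithmetic or a logical right shift.
inline std::int32_t portable_abs(std::int32_t x) {
    std::int32_t a = ~(((x >> 31) & 1) - 1); // -1 if x < 0, 0 otherwise
    return (x ^ a) - a;
}
```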

Upon testing, I find that calling abs directly is FASTER (on my AMD64) than calling either Eric's function or my modified function, since the function call overhead outweighs the branch mispredictions. Placing them all on an equal footing by wrapping abs in a function (stupid to do in real code) makes Eric's version and mine equally fast (weird, since mine has more basic operations, but they must pipeline well), and standard abs a little slower.

So: using abs directly is faster on my AMD64 over random signed integers than calling any function using these bit-tricks.

Remember folks - time your code. And be careful using clever bit-tricks from the internet - very often there are subtle errors in the code that will make your debugging become a nightmare.

Chris Lomont
www.lomont.org
