#define DIV2 0.5f
#define DIV3 0.3333333333333333333333f
#define DIV4 0.25f
#define DIV5 0.2f
#define DIV6 0.1666666666666666666666f
#define DIV7 0.1428571428571428571428f
#define DIV8 0.125f
#define DIV9 0.1111111111111111111111f
#define DIV10 0.1f
#define DIV11 0.0909090909090909090909f
#define DIV12 0.0833333333333333333333f
#define DIV13 0.0769230769230769230769f
#define DIV14 0.0714285714285714285714f
#define DIV15 0.0666666666666666666667f
#define DIV16 0.0625f
#define DIV17 0.0588235294117647058823f
#define DIV18 0.0555555555555555555555f
...
Resulting in code such as:
float myFloat = someVal * DIV2;
instead of
float myFloat = someVal / 2.0f;
I figured, "damn, that's a clever yet simple way of optimizing code!" But of course I decided to benchmark it out of curiosity, and multiplication turned out to be some 5 times faster than division... whoa. Except that was in Debug. When I switched to Release, the results were pretty much identical.
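To be clear about what I measured, here's a minimal sketch of the kind of loop I timed (the iteration count, the volatile sink, and the std::chrono timing are placeholders for illustration, not my exact harness):

#include <chrono>
#include <cstdio>

#define DIV3 0.3333333333333333333333f   // same reciprocal macro as above

int main() {
    const int N = 100000000;
    volatile float sink = 0.0f;          // keeps the compiler from discarding the loops

    auto t0 = std::chrono::steady_clock::now();
    float a = 1.0f;
    for (int i = 0; i < N; ++i)
        a = a / 3.0f + 1.0f;             // division by a constant
    sink = a;

    auto t1 = std::chrono::steady_clock::now();
    float b = 1.0f;
    for (int i = 0; i < N; ++i)
        b = b * DIV3 + 1.0f;             // multiplication by the reciprocal
    sink = b;

    auto t2 = std::chrono::steady_clock::now();
    std::printf("div: %lld ms, mul: %lld ms\n",
        (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count(),
        (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count());
    (void)sink;
    return 0;
}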
Out of curiosity: any thoughts on why this "optimization" might have been used if it produces no real benefit while complicating the code? Just to speed up Debug builds? Or do some older processors actually handle mul that much faster than div?