# Is dividing by a small value bad practice? Is an EPSILON needed?


## Recommended Posts

Hi,

I recently had a discussion about dividing by small values and whether an epsilon check is needed.

One example is this ray-plane intersection code:

```cpp
bool IntersectRayPlane( const CRay& Ray, const CPlane& Plane, float* t )
{
    const float Numerator   = Plane.DistanceToPoint( Ray.m_Origin );
    const float Denominator = VectorDot( Plane.n, Ray.m_Direction );
    if( Denominator != 0.0f )
    {
        *t = -( Numerator / Denominator );
        return true;
    }
    return false;
}
```

The check `!= 0.0f` is there only to avoid the divide by zero.

Is it bad practice not to check against an epsilon in this case?

Another case is the determinant of a matrix when computing its inverse.

In other words, should an epsilon be used everywhere, or is checking against exact zero OK?

Thanks

Edited by Alundra

##### Share on other sites

I'd say it's very bad practice, but recently I've seen such code so often from people I assume know more than me that I'm not sure anymore.

Personally i use those constants:

```cpp
#define FP_TINY     1.0e-11f
#define FP_EPSILON  1.0e-5f
#define FP_EPSILON2 ( FP_EPSILON * FP_EPSILON ) // parenthesized to expand safely

if( fabs( Denominator ) > FP_EPSILON )
```

I use the squared epsilon in cases like this:

```cpp
float dot = vec.dot( vec );
if( dot > FP_EPSILON2 )
    vec /= sqrt( dot );
```

and the tiny constant in cases where I don't want to use a branch:

```cpp
vec /= sqrt( dot ) + FP_TINY; // gives a zero vector if dot is zero
```

This almost always works for me, but there are a few cases where more expensive checks become necessary, like:

```cpp
bool equal = min( size1, size2 ) / max( size1, size2 ) > 1.0f - FP_EPSILON;
```

But those cases are so rare that I never bothered to look at how floating point works exactly, so I can't answer the question.

##### Share on other sites

Whoa - short question, looong answer - exactly the reason I was too lazy to go through this for now. I'll put it in my to-do bookmarks... :D

##### Share on other sites

> The check "!= 0.0f" is there only to avoid the divide by 0. Is it a bad practice to not check with epsilon on this case ?
Avoiding a divide by zero may be the only intent, but your code is much more intelligent than you think. It does much more. :)

It also implements correctness, and an effective optimization (unluckily, for a case that practically never happens). If the dot product is zero, the ray is perfectly parallel to the plane, so there is absolutely no way you could have an intersection. It doesn't even make sense to perform the remaining calculations: no matter what the result, there cannot be an intersection. It therefore makes sense to return false immediately (from a correctness point of view), and not to calculate the distance to the point or anything else (from a performance point of view). Only... sadly, exactly zero is something that never happens.

But of course, almost zero and zero are the same thing in practice: if the dot product is almost zero (i.e. less than epsilon), then there is an intersection somewhere, sure, but it is unbelievably far away, so far that you are unlikely to make anything meaningful of it. The intersection is so close to "at infinity" that for all practical purposes you can just as well consider it "no intersection".

Now, the problem is: what value to use as epsilon? Surely there must be a tried and proven, correct, practical value? Sadly, that isn't the case. Since the value of epsilon depends on the magnitude of its surrounding operands, any single answer one might give will be wrong. While carefully chosen values of epsilon "work well" in many cases, that's generally not a good or correct approach, and indeed highly dangerous (it may cause failure months or years later, without providing any intelligible clue about why something unrelated suddenly doesn't work).

The only correct approach is to do a scale-relative compare as indicated in the Goldberg paper.
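A scale-relative compare along those lines might look like the sketch below; note that the function name `NearlyEqual` and the default `maxRelDiff` tolerance are my own assumptions for illustration, not values taken from the Goldberg paper:

```cpp
#include <algorithm>
#include <cmath>

// Minimal sketch of a scale-relative comparison: the tolerance scales
// with the magnitude of the operands instead of being a fixed constant.
// maxRelDiff is an assumed default; the right value still depends on how
// much rounding error your computation accumulates.
bool NearlyEqual( float a, float b, float maxRelDiff = 1.0e-5f )
{
    const float diff    = std::fabs( a - b );
    const float largest = std::max( std::fabs( a ), std::fabs( b ) );
    return diff <= largest * maxRelDiff;
}
```

For example, `NearlyEqual( 1000000.0f, 1000000.5f )` is true (the difference is tiny relative to the operands) while `NearlyEqual( 1.0f, 1.1f )` is false. One caveat: when comparing against an exact zero, `largest * maxRelDiff` collapses to zero too, so a small absolute epsilon is still the right tool for that particular case.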

##### Share on other sites
Why not skip the check, happily divide by 0, and return infinity? That is a correct answer.

##### Share on other sites
Except that if your code isn't designed to handle #INF, you're very likely to wind up with #NaN and/or denormalized floats, which can cause nasty bugs and performance problems.

##### Share on other sites

> Personally i use those constants:

```cpp
#define FP_TINY 1.0e-11f
#define FP_EPSILON 1.0e-5f
#define FP_EPSILON2 FP_EPSILON*FP_EPSILON
```

I'm curious why you picked those specific numbers. Adding on a generic sloppiness number can work in some cases, but it is rather crude.

The precision depends on the scale of the number and the operations being used. For converting back and forth between binary and decimal you get about 6 decimal digits; if you stick with a single representation and only convert one way, you get about 7 decimal digits. That means if you're working in the millions range, precision is lost somewhere in the 1's or 10's. If you're working in the billions, precision is lost somewhere in the thousands to ten thousands. If you're working in the tens or hundreds, precision is lost in the hundred-thousandths.

If the person happened to be working at a small scale, then 1e-5f may be a perfectly acceptable epsilon. If they're working at the atomic scale, then 1e-11 may also be appropriate. But if you are working in the thousands, those numbers are far too small.

Also, it is probably worth mentioning the std::nextafter() and std::nexttoward() functions. Those can help if you aren't really sure what the intervals are supposed to be for a given floating point number.

##### Share on other sites

Bruce Dawson has lots of entertaining entries about floating point on his blog, here is one dealing with choosing epsilon:

https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

Though in this case, since the OP is always comparing against the constant zero, a constant epsilon is fine. That article might help with JoeJ's understanding of floating point, and with choosing an epsilon, however.

Edited by ferrous

##### Share on other sites

> I'm curious why you picked those specific numbers.

To be honest, I picked those constants because I don't know better, but beyond that I keep numbers at "a size of one" when using them.

A lot of things already are: unit vectors, angles, quaternions, the range of human action when the world unit is 1 m...

I know I have to put numbers in relation, care about their scale and difference, etc., and over the years I've developed some sense for this.

But some uncertainty remains, in my case leading to inefficient code like ALWAYS clamping values to (-1, 1) before using acos, and doing inefficient checks (the `min(size1, size2) / max(size1, size2) > 1.0f - FP_EPSILON` one above probably being a good example).

Also, I use infinity very rarely and may miss a lot of cases where it would just work.

So that's it - the thoughts of a self-taught guy, and being that, the paper linked by Apoch does not work well for me. I only understand the things I already know, but beyond that the math notation gets in the way quickly (as usual).

The world needs more papers for dummies :D
