# Why does division by 0 always result in negative NaN?

## Recommended Posts

0.0 / 0.0 = -NaN

-0.0 / 0.0 = -NaN

0.0 / -0.0 = -NaN

-0.0 / -0.0 = -NaN

Huh? Why does division by zero always give negative NaN (as opposed to positive NaN)? Does anybody know? I imagine it may have something to do with the NaN exception mechanism (negative NaN being the quiet NaN), but I want to know if somebody else knows for sure.

Edited by Sik_the_hedgehog

##### Share on other sites

According to Wikipedia:

For example, a bit-wise IEEE floating-point standard single precision (32-bit) NaN would be: s111 1111 1axx xxxx xxxx xxxx xxxx xxxx where s is the sign (most often ignored in applications), a determines the type of NaN, and x is an extra payload (most often ignored in applications). If a = 1, it is a quiet NaN; if a is zero and the payload is nonzero, then it is a signaling NaN.[3]

So the sign bit does not mean that it's a qNaN, and there is no guarantee that all divisions by 0 will give you a negatively signed NaN. So no matter the reason you're getting a signed NaN, you have to (or at least should) ignore the sign.

##### Share on other sites

Oi, for some reason I thought the sign bit was used as the signalling toggle. It makes even less sense that it returns -NaN, though. Maybe it's just setting all the bits?

And yeah, the NaN and -NaN thing is annoying, much like having separate 0 and -0 (let's face it, how often is a signed 0 useful?).

EDIT: Checked now: the value returned is 0xFFC00000. OK, huh, now I'm completely clueless as to the logic behind this. I know that in practice this doesn't matter at all (a NaN is still a NaN in the end) but still, I'm curious.

EDIT 2: of course I should have remembered that the C and C++ standards would get in the way. What happens when I build normally:

+0 / +0 = -NaN
+0 / -0 = -NaN
-0 / +0 = -NaN
-0 / -0 = -NaN

What happens when I tell the compiler to ignore trying to preserve floating point ordering and such (i.e. -ffast-math in GCC):

+0 / +0 = 1
+0 / -0 = -NaN
-0 / +0 = -NaN
-0 / -0 = 1

For those who wonder, this is the test program (it assumes that both unsigned and float are 4 bytes):

    #include <stdio.h>
    #include <string.h>

    int main() {
        float z1, z2;
        float f;
        unsigned i;

        z1 = 0.0f;
        z2 = 0.0f;
        z2 = -z2;

        memcpy(&i, &z1, 4);
        printf("+0 = %08X\n", i);
        memcpy(&i, &z2, 4);
        printf("-0 = %08X\n", i);

        f = z1 / z1;
        memcpy(&i, &f, 4);
        printf("+0 / +0 = %08X\n", i);

        f = z1 / z2;
        memcpy(&i, &f, 4);
        printf("+0 / -0 = %08X\n", i);

        f = z2 / z1;
        memcpy(&i, &f, 4);
        printf("-0 / +0 = %08X\n", i);

        f = z2 / z2;
        memcpy(&i, &f, 4);
        printf("-0 / -0 = %08X\n", i);

        return 0;
    }

EDIT 3: OK, never mind, it was an optimization getting in the way: dividing a variable by itself gets optimized to 1 >_> Though for some reason this happens even with -O0 (which should generate the most unoptimized code the compiler can produce), so I guess this may be happening in the processor itself. Using separate variables makes it work as intended (gives -NaN in all four cases).

Edited by Sik_the_hedgehog

##### Share on other sites

This not only means you cannot rely on the result making sense, but the compiler can do whatever it wants with it (under the "division by 0 can't happen, so this branch of code can't execute" logic).

That's true for dividing by the compile-time constant zero, not otherwise. You are of course perfectly right in this example, since division by zero is trivial for the compiler to prove in the above sample code.

However, in general it's not true. At the risk of beating a dead horse by bringing up the same topic again as two weeks ago: No, the compiler, in general, cannot just do whatever it wants. If you divide some float by some other float, the compiler has to, and will, emit code that does just that.

It may, of course, add supplementary code to every division which checks whether the denominator is zero and calls an error handler (or similar), but it may not do just about anything. That includes eliminating branches. Unless it can already prove at compile time that the denominator will be zero, it may not optimize out code (or... format your hard disk).

Manually checking the denominator before every operation is quite a bit of overhead, so what usually happens is that the C++ compiler simply emits a "divide" assembly instruction and lets the hardware deal with whatever comes around, which either gives a valid result, or generates a trap (for which the compiler usually installs a handler at program start, so an error function and finally abort is called) or just a silent NaN, like here.

##### Share on other sites

In C++, division by zero is classified as "Undefined behavior"

Doubt that applies to floating point... at least not on platforms that use IEEE, which, as mentioned in previous posts, explicitly states what happens on division by zero.

EDIT: The standard also explicitly states that floating point in itself is implementation-defined and that division by zero etc. can vary by platform. Either way this case is determined by the float specification, not the C++ specification.

Edited by Erik Rufelt

##### Share on other sites

This not only means you cannot rely on the result making sense, but the compiler can do whatever it wants with it (under the "division by 0 can't happen, so this branch of code can't execute" logic).

That's true for dividing by the compile-time constant zero, not otherwise. You are of course perfectly right in this example, since division by zero is trivial for the compiler to prove in the above sample code.

However, in general it's not true. At the risk of beating a dead horse by bringing up the same topic again as two weeks ago: No, the compiler, in general, cannot just do whatever it wants. If you divide some float by some other float, the compiler has to, and will, emit code that does just that.
It may, of course, add supplementary code to every division which checks whether the denominator is zero and calls an error handler (or similar), but it may not do just about anything. That includes eliminating branches. Unless it can already prove at compile time that the denominator will be zero, it may not optimize out code (or... format your hard disk).

Manually checking the denominator before every operation is quite a bit of overhead, so what usually happens is that the C++ compiler simply emits a "divide" assembly instruction and lets the hardware deal with whatever comes around, which either gives a valid result, or generates a trap (for which the compiler usually installs a handler at program start, so an error function and finally abort is called) or just a silent NaN, like here.

Which is kind of what this whole thing was about. The OP asked for why divide by zero worked a certain way, then provided example code that the compiler could very easily determine when a divide by zero took place. Therefore all bets are off.

The compiler can do anything - from your perspective. You have no way to predict what it will do. It's allowed to assume that in "x/y" the y will never be 0, so later on it could eliminate a "y == 0" test. If you want to play around with undefined behavior, that's your prerogative, but let me know what you make so I can avoid it ;)

I suggest you read the second page of the article I posted, written by a Clang developer, which shows exactly how undefined behavior can lead to eliminated test code (in this case, based on a "clearly unneeded" null check).

In C++, division by zero is classified as "Undefined behavior"

Doubt that applies to floating point... at least not on platforms that use IEEE, which, as mentioned in previous posts, explicitly states what happens on division by zero.

EDIT: The standard also explicitly states that floating point in itself is implementation-defined and that division by zero etc. can vary by platform. Either way this case is determined by the float specification, not the C++ specification.

Was unable to find anything in the standard regarding math and floating point values. Only that the value representation of floating point values is implementation-defined (3.9.1.8 of the C++14 working draft - since it's free), and that the result of division by zero is undefined (5.6.4 of the same standard - which simply refers to "arithmetic types", not integer or floating point; see modulus, where it's specified that the operands must be integral types).

The standard does not mandate any floating point specification (of which there are more than one, btw).

Will the compiler emit a divide opcode when it has no way to tell what the values are? Of course. But, again, it's allowed to make assumptions about the denominator. (See the OP's example where he got 1, even when dividing by zero.)

Edited by SmkViper

##### Share on other sites

However, in general it's not true. At the risk of beating a dead horse by bringing up the same topic again as two weeks ago: No, the compiler, in general, cannot just do whatever it wants. If you divide some float by some other float, the compiler has to, and will, emit code that does just that.
It may, of course, add supplementary code to every division which checks whether the denominator is zero and calls an error handler (or similar), but it may not do just about anything. That includes eliminating branches. Unless it can already prove at compile time that the denominator will be zero, it may not optimize out code (or... format your hard disk).

I'm not sure what you mean.
The following snippet triggers UB:

    int result = numerator / denominator;
    if( denominator == 0 ) // UB: denominator can't be 0 by now, so this branch can be left out.
    {
        result = 0;
        doSomething();
    }

    float result = numerator / denominator;
    if( denominator == 0 ) // UB: on architectures that don't follow IEEE, denominator can't be 0 by now, so this branch can be left out.
    {
        result = 0.0f;
        doSomething();
    }

There have been far too many security bugs in real-world code caused by optimizing compilers removing important pieces of code because they were executed after undefined behavior was invoked instead of before it.

##### Share on other sites

I'm sure there are cases where bad things occur, and I certainly don't mean to disagree with those points, but it's often a clearly defined operation, and Infinity and NaN can both be acceptable floating point values. For game development I would consider the assumption of division by zero being OK perfectly reasonable, unless targeting a specific platform where it's known to be a problem.

Under that assumption, '0 / 0' is also an exceptionally exceptional case, whereas 'anything else / 0' is infinity, and identical to 'very large / very small' which is also infinity even though both the numerator and denominator are valid finite values. Division by zero is not the only way to get Infinity or NaN, and several standard math functions often return these values (log / sqrt etc).

When doing sequential operations on floats that may or may not end up as zero, checking for edge-cases between every single operation is not really desirable.. and checking the final result for Inf/NaN is probably better. If aiming for SIMD it's very desirable to not have to worry about it.

##### Share on other sites

I'm sure there are cases where bad things occur, and I certainly don't mean to disagree with those points, but it's often a clearly defined operation, and Infinity and NaN can both be acceptable floating point values. For game development I would consider the assumption of division by zero being OK perfectly reasonable, unless targeting a specific platform where it's known to be a problem.
Under that assumption, '0 / 0' is also an exceptionally exceptional case, whereas 'anything else / 0' is infinity, and identical to 'very large / very small' which is also infinity even though both the numerator and denominator are valid finite values. Division by zero is not the only way to get Infinity or NaN, and several standard math functions often return these values (log / sqrt etc).
When doing sequential operations on floats that may or may not end up as zero, checking for edge-cases between every single operation is not really desirable.. and checking the final result for Inf/NaN is probably better. If aiming for SIMD it's very desirable to not have to worry about it.

Division of 0 by 0 does not result in infinity (if you think it does, graph the approach to it from positive and negative numbers). In the IEEE floating point case it results in NaN.

The undefined behavior essentially involves any case where you make the assumption that the denominator is not 0 and then later act upon the denominator being 0, as shown in the linked Clang post and the post before yours. Most compilers, if you attempt to write out an immediate divide by zero using constants, will display an error or warning. However, when you have variables whose value might be zero, make a code assumption that they are NOT zero, and then later attempt to perform conditional actions that depend on them being zero, the compiler may (or may not; it is, after all, undefined behavior) optimize out those sections, as you did, after all, tell it that everything was perfectly fine beforehand.

Edited by Washu

##### Share on other sites

Just to make it clear: the question was not about the compiler (I know that dividing by zero could result in the universe imploding if the compiler decided so), it was about why the FPU would ever return -NaN instead of NaN in the first place.

I made a simpler program and checked the assembly code to be 100% sure this time:

    movl    .LC0(%rip), %eax
    movl    %eax, -12(%rbp)
    movl    .LC0(%rip), %eax
    movl    %eax, -8(%rbp)
    movss   -12(%rbp), %xmm0
    divss   -8(%rbp), %xmm0
    movss   %xmm0, -4(%rbp)

This results in -NaN, so yeah, it's definitely the FPU doing it.

So the question is why would it generate -NaN and not +NaN? Like, is there any practical reason from a hardware viewpoint for it or what? (also amusingly, if I set optimizations to max with that program I get +NaN, though in this case the compiler is using a constant so the FPU isn't involved at all)

##### Share on other sites

Just to make it clear: the question was not about the compiler (I know that dividing by zero could result in the universe imploding if the compiler decided so), it was about why the FPU would ever return -NaN instead of NaN in the first place.

I made a simpler program and checked the assembly code to be 100% sure this time:

    movl    .LC0(%rip), %eax
    movl    %eax, -12(%rbp)
    movl    .LC0(%rip), %eax
    movl    %eax, -8(%rbp)
    movss   -12(%rbp), %xmm0
    divss   -8(%rbp), %xmm0
    movss   %xmm0, -4(%rbp)

This results in -NaN, so yeah, it's definitely the FPU doing it.

So the question is why would it generate -NaN and not +NaN? Like, is there any practical reason from a hardware viewpoint for it or what? (also amusingly, if I set optimizations to max with that program I get +NaN, though in this case the compiler is using a constant so the FPU isn't involved at all)

Because it can? Both are valid, so it's up to the processor vendor to decide.

##### Share on other sites

Division by 0 does not result in infinity (if you think it does, graph the approach to it from positive and negative numbers). In the IEEE floating point case it results in NaN.

The Wikipedia entry specifically states that it can return infinity, though I'm not familiar enough with the subtleties to know if it will always be the case. Experimental results in VC++ show infinity.

    for(uint32_t i = 5; i > 0; --i) {
        uint32_t uival = i - 1;
        float fval = *reinterpret_cast<float*>(&uival);
        float fres = 1.0f / fval;

        std::cout << "1.0f / " << fval << " = " << fres << std::endl;
    }


The undefined behavior example is definitely a good one, and makes sense.

I would however still be most upset if a compiler, for a program performing divisions on floating-point values, optimized out a later statement depending on the denominator just because it was used in an earlier operation, when that operation is completely valid and the denominator is unknown.

There is the case of floating point exceptions, but those can be triggered by many more things than division by zero, such as denormal values and overflow.

I'm not entirely sure we're discussing the same thing though.. again I don't think I disagree with the points raised.. I just feel that if the compiler started assuming floating-point denominators are never zero it could probably cause a lot more subtle problems than optimizing out code like that..

##### Share on other sites
It is not true that floating-point division by zero results in undefined behavior. If your compiler and library follow IEEE-754, 1.0/0.0 is Infinity and 0.0/0.0 is some sort of NaN. I believe until recently my libc implementation would not print the sign of a NaN, but now it does.

##### Share on other sites

The Wikipedia entry specifically states that it can return infinity, though I'm not that familiar with the subtleties to know if it will always be the case. Experimental results in VC++ show infinity.

Yes, if you divide a number A by 0 where A is NOT +/- 0 then you will get +/-infinity in IEEE floats. I should have been clearer in my statement as I was dealing with 0/0. In mathematics lim x->0 1/x diverges.

The undefined behavior example is definitely a good one, and makes sense.
I would however still be most upset if a compiler, for a program performing divisions on floating-point values, optimized out a later statement depending on the denominator just because it was used in an earlier operation, when that operation is completely valid and the denominator is unknown.
There is the case of floating point exceptions, but those can be triggered by many more things than division by zero, such as denormal values and overflow.

I'm not entirely sure we're discussing the same thing though.. again I don't think I disagree with the points raised.. I just feel that if the compiler started assuming floating-point denominators are never zero it could probably cause a lot more subtle problems than optimizing out code like that..

If you tell the compiler that it can make some assumptions (such as by doing a / b without testing if b is zero first), then the compiler can only assume that since you desire a well defined program b must not be zero. As such any condition that relies upon b being zero AFTER a / b must clearly be false.

##### Share on other sites

If you tell the compiler that it can make some assumptions (such as by doing a / b without testing if b is zero first), then the compiler can only assume that since you desire a well defined program b must not be zero

My point is that a program where b is sometimes but not always zero is still well-formed, under normal floating point conditions. If that is strictly incorrect then please enlighten me, but I really don't see that being the case.

##### Share on other sites

If you tell the compiler that it can make some assumptions (such as by doing a / b without testing if b is zero first), then the compiler can only assume that since you desire a well defined program b must not be zero

My point is that a program where b is sometimes but not always zero is still well-formed, under normal floating point conditions. If that is strictly incorrect then please enlighten me, but I really don't see that being the case.

Well defined:

    if(b == 0) {
        someFlag = true;
        return;
    }
    result = a / b;

Not so well defined:

    result = a / b;

    if(b == 0) {
        someFlag = true;
        return;
    }


The problem here is that result = a / b tells the compiler that "b must not be zero, because that would be UB." Because the compiler, since it assumes your program is WELL DEFINED, can now assume the invariant b != 0, then the latter if statement if(b == 0) becomes if(false).

Edited by Washu

##### Share on other sites

I understand the logic, and for integers I'm agreeing without reservations.

I also accept that writing code like that might be a bad idea, even for floats.

However, for floating point I believe that the assertion does not hold, and if the 'if(b == 0)' statement can be hit at all (no guaranteed exception on division by zero), it will not be optimized out, and the program is still very well defined. The compiler will not and can not make such assumptions.

I apologize if I'm making assumptions in my argument that render the point moot, but for the question to make sense at all we have to treat the standard as "C++ + float", where float is some floating point specification. In all cases I've ever encountered it's more or less IEEE (though IEEE in itself has implementation-defined behavior).

Example, that I would assume is a completely valid well-formed program under C++ + IEEE:


    #include <iostream>
    #include <cmath>
    #include <cstdlib>

    bool checkZeroDivZero(float f0, float f1) {
        float result = f0 / f1;

        if(f1 == 0.0f || f1 == -0.0f) {
            if(std::isnan(result))
                return true;
        }

        return false;
    }

    int main() {
        for(int i = 0; i < 25; ++i) {
            float f0 = static_cast<float>(rand() % 3);
            float f1 = static_cast<float>(rand() % 3);

            if(checkZeroDivZero(f0, f1))
                std::cout << "0 / 0" << std::endl;
            else
                std::cout << "x / y" << std::endl;
        }

        return 0;
    }



EDIT: Being well-formed in this case of course does not mean being deterministic at run-time, as the result of floating point operations can change even from call to call of the same function, if the floating-point mode has been changed (such as D3D automatically changing it, as many of us have probably encountered).

Edited by Erik Rufelt

##### Share on other sites

I understand the logic, and for integers I'm agreeing without reservations.

I also accept that writing code like that might be a bad idea, even for floats.

However, for floating point I believe that the assertion does not hold, and if the 'if(b == 0)' statement can be hit at all (no guaranteed exception on division by zero), it will not be optimized out, and the program is still very well defined. The compiler will not and can not make such assumptions.

I apologize if I'm making assumptions in my argument that render the point moot, but for the question to make sense at all we have to treat the standard as "C++ + float", where float is some floating point specification. In all cases I've ever encountered it's more or less IEEE (though IEEE in itself has implementation-defined behavior).

This has nothing to do with the IEEE standard. This has to do with the C++ standard:

5.6 - 4

The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined. For integral operands the / operator yields the algebraic quotient with any fractional part discarded; if the quotient a/b is representable in the type of the result, (a/b)*b + a%b is equal to a.

Edited by Washu

##### Share on other sites

I certainly don't disagree on your logic based on the C++ standard. Perhaps I'm arguing a completely different point, my apologies if I've derailed this...

What I still feel is that, since floating point arithmetic in itself is implementation-defined, any implementation that defines division by zero must also override that statement for unknown floating point values. A compiler that uses that statement to make assumptions that contradict its floating point implementation does not make sense.

It would have to specifically generate valid working code for division by zero, and in addition to that also treat the later conditional as if the previous valid working code was actually not valid. It's just not possible. I honestly think common complex programs would stop working if such assumptions were added to C++ compilers.

##### Share on other sites

Wasn't my explanation straight to the point and simple enough that you didn't have to go into this long discussion about standards and whatnot? :)

Btw, in that assembly example, it's not the x87 FPU producing the negative qNaN; it's the SSE unit (the divss instruction). If you disable SSE, the x87 FPU might give you a positive qNaN.

I think SSE produces negative qNaNs because it's faster for it to just set all bits to 1 (although I haven't checked if this is the case) than to fiddle with individual bits to produce a positive qNaN.

##### Share on other sites

My point is that it was the processor doing it, not the compiler =P

And yeah, setting all bits was my first assumption, but the resulting value is 0xFFC00000 and not 0xFFFFFFFF, so it isn't simply setting every bit. All the 1s are grouped at the top of the dword though, so maybe there's indeed something along those lines.

##### Share on other sites

The standard also says (5-4):

If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined. [ Note: most existing implementations of C++ ignore integer overflows. Treatment of division by zero, forming a remainder using a zero divisor, and all floating point exceptions vary among machines, and is usually adjustable by a library function. — end note ]

That note certainly leaves room to interpret that floating-point division by 0 might be an OK thing to do, depending on the compiler. If IEEE 754 specifies the behavior of some operation and my compiler follows that standard, I expect using that operation is A-OK.

##### Share on other sites

I certainly don't disagree on your logic based on the C++ standard. Perhaps I'm arguing a completely different point, my apologies if I've derailed this...
What I still feel is that, since floating point arithmetic in itself is implementation-defined, any implementation that defines division by zero must also override that statement for unknown floating point values. A compiler that uses that statement to make assumptions that contradict its floating point implementation does not make sense.

It would have to specifically generate valid working code for division by zero, and in addition to that also treat the later conditional as if the previous valid working code was actually not valid. It's just not possible. I honestly think common complex programs would stop working if such assumptions were added to C++ compilers.

You aren't writing IEEE code.

You're writing C++ code.

The compiler implements C++ code.

Therefore the compiler follows the C++ standard, which says it can do whatever it wants on division by zero, even if the machine you're building for follows an IEEE standard that defines division by zero.

Edit: If division by zero was implementation defined in the C++ standard, then you'd have a point. Implementation defined behavior can be whatever the compiler wants, but the compiler has to document it and state exactly what happens (like following the IEEE floating point standard). But it's not - it's undefined behavior, so the compiler neither has to document the results, nor be consistent. This is usually because it opens up additional optimization opportunities. Edited by SmkViper
