floating point numbers and printf

I encountered something I found strange when trying to do a math problem. I don't believe it has anything to do with the math problem itself, but I'll show it just so you can see what I'm talking about:

f(x) = sqrt(x^2 + x) - x

Let u be the unit roundoff on a certain computer. Assume x is large and positive. What troubles occur when f is evaluated on this computer for values where x ~ 1/u?

So, I tried running such a test on my computer. Here's the code:

#include <math.h>
#include <stdio.h>
#include <string.h>

void printbits(float* fl);

int main () {
	float f2, f3;
	float f = (float)(pow(2, 15));

	printf("%f\n", f);
	printbits(&f);

	f2 = sqrt((float)(pow(f, 2)+f));
	printf("%f\n", f2);
	printbits(&f2);

	f3 = f2 - f;
	printf("%f\n", f3);
	printbits(&f3);
	return 0;
}

void printbits(float* fl) {
	int j = 32;
	unsigned int i;
	memcpy(&i, fl, sizeof i);   /* copy the raw bits; avoids aliasing and signed-shift pitfalls */
	for (; j > 0; --j) {
		if (j % 8 == 0)
			printf(" ");
		printf("%u", i >> 31);  /* print the most significant bit */
		i = i << 1;             /* shift the next bit into the top position */
	}
	printf("\n");
}
But here's the output:

32768.000000
 01000111 00000000 00000000 00000000
32768.499996
 01000111 00000000 00000000 10000000
0.500000
 00111111 00000000 00000000 00000000

You can see that when it prints f2, printf prints the correct answer, while the printbits function prints the binary equivalent of 32768.5. Even if the floating point number were actually more accurate for some reason and I'm missing those bits somewhere, it would only be above 32768.5, not below it. So the question is, how is printf doing that?

Even my calculator runs out of precision at x = 2^15, but when I tried with just 2^13, the output showed the same behavior, and my calculator verified printf's result. This drove me crazy for a little while. I also tried writing printbits's code inline in the main function, in case the function call was somehow causing lost precision, but I got the same result. I'm guessing there's something wrong with my interpretation of the bits, but I just don't see what could be wrong.

Edit: By the way, I'm not asking anyone to solve the homework problem for me, just to explain what's going on with printf.
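As a sanity check on the bit interpretation: the second pattern really does decode to 32768.5 under the standard IEEE-754 single-precision layout. Here's a minimal decoding sketch (not from the thread; the pattern is hard-coded as the hex constant 0x47000080):

#include <math.h>
#include <stdio.h>

int main () {
	unsigned int bits = 0x47000080u; /* 01000111 00000000 00000000 10000000 */

	unsigned int sign     = bits >> 31;                        /* 0 */
	int          exponent = (int)((bits >> 23) & 0xFFu) - 127; /* 142 - 127 = 15 */
	unsigned int mantissa = bits & 0x7FFFFFu;                  /* 0x80 = 2^7 */

	/* value = (-1)^sign * (1 + mantissa / 2^23) * 2^exponent */
	double value = ldexp(1.0 + mantissa / 8388608.0, exponent);
	if (sign)
		value = -value;

	printf("%f\n", value); /* prints 32768.500000 */
	return 0;
}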
Here's something that isn't common knowledge about printf() and other variadic functions: whenever you pass a float to them, you're actually passing a double. Add to this another not-commonly-known fact about floating point: many processors perform single precision operations by converting the floats to doubles, doing the arithmetic on the doubles, and then converting the result back to a float.

Combine these two facts, and what is probably happening is this: the compiler takes the intermediate value of the floating point operation, still held at double precision, and passes that to printf(); only afterwards does it round the value down to a float, which is what your printbits() function operates on. At least, that's what I'd guess is happening without looking at the assembly. To see for yourself what is actually happening, I suggest getting your compiler to produce an assembly listing and browsing through that.
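To make that concrete, here's a minimal sketch (my own, not your code) that performs the arithmetic explicitly in double and then rounds to float. The double intermediate prints as 32768.499996, just like your printf output, while the value actually stored in a float rounds up to 32768.5, which is what your printbits saw:

#include <math.h>
#include <stdio.h>

int main () {
	float  f = 32768.0f;                /* 2^15 */
	double d = sqrt((double)f * f + f); /* intermediate kept at double precision */
	float  s = (float)d;                /* explicitly rounded to the nearest float */

	printf("double intermediate: %f\n", d); /* 32768.499996 */
	printf("stored float:        %f\n", s); /* 32768.500000 */
	return 0;
}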
Thanks for the reply. I take it Intel machines use rounding instead of chopping to convert from double to float, then? I'll try taking a look at the assembly; it's not something I do a lot, though.
The x87 floating point unit supports four rounding modes: chop (toward zero), up, down, and to-nearest. Which one it uses depends on the control state of the floating point unit, and this can generally be changed on the fly by your program. For example, with MSVC you can use the _controlfp() function to modify the rounding mode.
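For example, here's a minimal MSVC-only sketch (the _RC_NEAR, _RC_CHOP, and _MCW_RC constants come from <float.h>; the literal is roughly the double value from this thread, and volatile just keeps the conversion from being folded at compile time):

#include <float.h>
#include <stdio.h>

int main () {
	volatile double d = 32768.4999961853; /* read at run time, so conversion happens at run time */
	float f;

	_controlfp(_RC_NEAR, _MCW_RC); /* round to nearest (the default) */
	f = (float)d;
	printf("near: %f\n", f);       /* 32768.500000 */

	_controlfp(_RC_CHOP, _MCW_RC); /* round toward zero (chop) */
	f = (float)d;
	printf("chop: %f\n", f);       /* 32768.496094 */

	_controlfp(_RC_NEAR, _MCW_RC); /* restore the default mode */
	return 0;
}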
Alright, well, I just tried switching the order of the printf and printbits calls, and that made printf print the same number as printbits. With your explanation, that's good enough for me. Thanks a lot.

I don't know why I didn't think to do that earlier, though.

