All the numbers you listed are presented in a decimal representation, which is easy for humans to comprehend. But they were originally stored in binary, and so each decimal representation is almost certainly an approximation of the stored value, not an exactly identical number.
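You can see the exact stored values directly. Here is a quick Python sketch (any language with IEEE 754 doubles behaves the same way); `Decimal(float)` expands a double's exact binary value:

```python
from decimal import Decimal

# Decimal(float(x)) expands the exact binary value the double stores.
# None of these literals is representable exactly in base 2, so the
# stored value is merely close to what you typed.
for x in ("2.475", "29.3135", "31.7885"):
    stored = Decimal(float(x))
    print(x, "is stored as", stored, "- exact?", stored == Decimal(x))
```

Each line prints `exact? False`: the double nearest to each literal is not the literal itself.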
The result is that when you add 2.475 and 29.3135 to get 31.7885, you're not actually doing the same math the computer is doing. It is adding two numbers that are really close to 2.475 and 29.3135 and getting a result that is really close to 31.7885, but the differences are large enough to matter at the computational level. The same goes for the other pair of numbers you presented.

I wouldn't be surprised if the two additions produced bit-identical results in their binary representation, meaning that your less-than returns false no matter which way the comparison is done, and the consumer of the comparison is free to pick either operand as the best. It's also possible that the decimal representations lose enough accuracy to rounding that the second pair of numbers, when added in binary form, actually sums to more than the first pair. That is, if the first two numbers were both rounded up for display and the second two were both rounded down, you might expect the first sum to be larger than the second when that's not actually the case.
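To be concrete, the values below are illustrative, not the ones from your question, but they show both effects: two sums that are bit-identical (so `<` is false in both directions), and two expressions that look equal in decimal but differ in binary:

```python
# Illustrative values, not the ones from the question.

# Bit-identical sums: IEEE 754 addition is commutative, so these two
# computations produce the exact same double, and < is false both ways.
a = 0.1 + 0.2
b = 0.2 + 0.1
print(a < b, b < a)      # False False -- a comparator sees a tie

# Expressions that "should" be equal in decimal but differ in binary:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```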
In the end, it's usually best to structure your use of floating point numbers such that very minor differences of this sort don't really matter. Is it really a problem that the comparator thinks that #1 ranks higher than #2 in this case? The path finder ought to spit out nearly identically optimal paths either way.
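One common way to structure things (a sketch, not your code; the function name and tolerance are my own choices) is to compare costs with a tolerance, so paths whose costs differ only by floating-point noise count as ties:

```python
import math

# Sketch of a tolerance-based comparison. cost_less and the 1e-9 relative
# tolerance are illustrative choices, not part of any particular library.
def cost_less(a: float, b: float, rel_tol: float = 1e-9) -> bool:
    """True only if a is smaller than b by more than the tolerance."""
    return a < b and not math.isclose(a, b, rel_tol=rel_tol)

# Costs that differ only by rounding noise now compare as a tie:
print(cost_less(0.3, 0.1 + 0.2))  # False
print(cost_less(0.1 + 0.2, 0.3))  # False
print(cost_less(1.0, 2.0))        # True
```

With a comparison like this, the path finder treats near-identical costs as equal and the ordering of #1 and #2 stops mattering.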
And if you want to dive deeper and gain some very valuable knowledge on the subject, I'd recommend "What Every Computer Scientist Should Know About Floating-Point Arithmetic" and similar articles.
"We should have a great many fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke