You get about six decimal digits of precision from a 32-bit float, and you can count on rounding error accumulating in the lower bits of precision. Those bits of accumulated error can spread quickly, as described by the paper linked to twice above.
Note that spanning processors is not necessary; you can get different results even on the same computer.
You can have math operations in one location, and exactly the same math operations somewhere else, run the code, and get different results. The compiler might notice something different even when the programmer doesn't. The compiler might inline the function, which means the variables don't get truncated and passed as parameters. Or optimizations might be applied differently when the compiler generates code: perhaps moving values from FPU registers out to memory and then loading them back (truncating extended-precision intermediates in the process), or register coloring keeping fewer values in registers in one instance, or other tiny differences.
There are many good documents explaining floating point numbers. That article on Gaffer on Games is a good one. So is "What Every Computer Scientist Should Know About Floating-Point Arithmetic."
Always remember: Floating point is an approximation.
Do not rely on floating point when an approximation is unacceptable.
Do not rely on floating point when the error that accumulates in the approximation can grow to become significant.
Do not rely on floating point to give you an exact answer. Floating point values are inherently inaccurate to within 1/2 ulp (unit in the last place), and that is in the ideal case. In real-world cases they are frequently off by more than one ulp, and they lose accuracy quickly when mixed with numbers of differing magnitude.
You can rely on floating point to drift, and for the last-bit error to accumulate in all subsequent operations.
You can rely on getting different results for the same values used in the same code. Even identical code compiled at different locations in a file can be optimized differently.
You can rely on functions that use floating point to be valid only within their limits. Don't use trig functions or other functions beyond their stated boundaries.
You can rely on floating point to propagate errors and unexpected answers, including NaN. If a function can possibly return NaN, INF, or other special results, handle them properly.
If you're insane and VERY good at low-level debugging, PM me and we can swap horror stories about how to do it with floats.
We had to do something like that on a non-game project I was on about 12 years ago. We realized the flaw early, then took the smart route and downloaded a software-based floating point implementation. No fancy FPU optimizations, no hardware floating point instructions, but also no size differences between FPU registers and memory, no automatic truncation of intermediate results, no other processes switching FPU flags. We named the class "Real", for real numbers. To help ensure it didn't face any accidental optimizations, the headers were interface-only, with the implementation details safely locked in a separate library that exposed only integers in its interface.
It is also important to point out that both the IEEE floating point standards and processor vendors like Intel are quick to note that certain operations are not guaranteed to be exact. The IEEE FPU standards include an "inexact" flag, and trig operations like sine and cosine are well documented as giving results that fall within numerical tolerances but are not identical across implementations.
You are correct that relying on floats for anything beyond an approximation is insane. Like 'lock that person in a rubber room and straitjacket' type of insane. I don't know if it is even theoretically possible to rely on floats being identical across multiple machines, since so many FPU operations take assorted undocumented shortcuts based on FPU state; but even if it is theoretically possible, it is something you should never ever do if you value your life.