Floating-point determinism is such a tangled problem that even some of the best experts in the field disagree on what is possible. I'm sure you've already done your Google homework and found the dozens of archived conversations on this subject, so I'll just distill things down to my personal experience:
You can get determinism on modern hardware if you stick to the same CPU architectures and bitness. You also need to write code very, very carefully to achieve this result.
Changing between 32-bit and 64-bit is a recipe for disaster, as is swapping between x87 FPU instructions and SIMD instruction sets such as SSE. Failing to guide the compiler carefully will introduce subtle and incredibly hard-to-pinpoint desyncs.
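To give you a flavor of what "guiding the compiler" means in practice, here's the sort of invocation you'd use (assuming GCC/Clang on x86 and MSVC; the exact flags vary by compiler version, so treat this as a sketch, not gospel):

```shell
# GCC/Clang: forbid fast-math reassociation and fused contraction,
# and force SSE math instead of 80-bit x87 intermediates
g++ -O2 -fno-fast-math -ffp-contract=off -msse2 -mfpmath=sse game.cpp

# MSVC: strictest floating-point model
cl /O2 /fp:strict game.cpp
```

The key idea in all of these is the same: stop the optimizer from regrouping or fusing floating-point operations, and pin down which instruction set actually does the math.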
You need to understand non-associative arithmetic — in floating point, (a + b) + c and a + (b + c) can round to different results. You need to understand a fair bit of numerical analysis to help isolate sources of error. You need to know a lot of assembly-level instructions for whatever processor architecture you're choosing, and you need to be very, very comfortable writing and debugging assembly language. You also need to take a proactive stance against desyncs, such as interleaving your program with "sync checks" that hash all existing state of the program and compare it across running instances to ensure they all hash to the same value. You need to be exceedingly careful with what external libraries and code you use, as poorly-designed code can easily destroy determinism. Last but not least, you need to make sure to avoid known implementation-dependent functions and operations, such as estimated reciprocal, reciprocal square root, trigonometric and other transcendental functions, and other things that IEEE does not precisely specify.
In short: it is possible, but good fucking luck :-)
If you have the luxury, I'd strongly recommend an arbitrary-precision library or fixed-point arithmetic. It's far easier to get determinism from those models.
Edited by ApochPiQ, 08 January 2014 - 03:26 PM.
[edit] I missed the ".NET" tag on this originally. My summary changes thusly: don't fucking bother. You can't even reliably control the FPU control-word register in .NET code, which means you can't guarantee consistency of any significant nature. I'll leave the rest of this here for posterity; read it under the assumption that I'm talking about real systems languages like C, C++, Rust, etc.[/edit]