
tanzanite7

Posted 16 October 2012 - 08:25 AM

> > No. Floating point arithmetic is not really deterministic.
>
> I'd like to correct this statement on two parts:
>
> First, not all algorithms need floating point.

Erm, if it is not using floating point, then it is not floating point arithmetic. What on earth are you correcting? Some form of straw man?

PS. The OP specifically mentioned Perlin noise, whose implementations almost exclusively use floating point, as fixed point is just too slow there (coincidentally, some time ago I explored options to drop floating point with regard to simplex/Perlin noise, but gave it up as it is just way too slow). Of course, whether or not the OP uses floating point does not invalidate what I said either way.

> Second, floating point is fully deterministic within the restrictions of a particular instruction set architecture.
> If all CPUs run the same code, they will all come to the EXACT same result.

> ... then sooner or later it will give different results if not the exact same binary is running.

Which is what I said :/. So, no complaints here either.

> Once all that's done, you can use floating point just as deterministically as any other user-level instruction set architecture feature.

Your list omitted the primary source of inconsistencies (which is odd, as I specifically mentioned it in my post): stack vs. register usage. The compiler decides when and where the stack is used when it runs out of full-precision registers. Even with VC's "precise" compiler flag, platform/OS limitations remain:
* Different OSes have different FPU precision defaults (Win x86: varying; Win x64: 64-bit; Unix/Linux: usually 80-bit). Also, VC "precise" allows the compiler to ignore _controlfp.
* Even with VC "precise" (which requires extra rounding code to enforce a specific precision, in addition to other more restrictive/slower semantics), rounding behavior is CPU-architecture specific and differs between x86, ia64 and amd64 (x86 and amd64 are specifically documented as "This particular semantic is subject to change", i.e. you are essentially in "unspecified" land).

Playing with fire. Might help roast beef - or roast you. I am not stopping anyone - just warning: if exact repeatability is required, stay away from floating point.
