Unix - Windows floating point problems
I'm trying to port a C module for my Windows game that was originally written and compiled on a Unix machine. The module does a bunch of arithmetic with double floating point variables. Most of the routines in the module work fine, but in a few of the routines I get floating point rounding "errors", or at least discrepancies with the Unix results.
In one routine, I have an array of doubles being manipulated and then cast element by element to an array of short ints. Here's the line:
short_buffer[col*rows+row] = (short int)((double)short_buffer[col*rows+row] * gain_factor[row] + offset);
where gain_factor[row] and offset are both doubles.
Stepping through with the debugger, I see double values (the entire RHS) of, like, 20.000000 being cast to 19. I understand that the binary representation of the number might actually be something like 19.999999 (or within DBL_EPSILON of the integer) and that the short int cast is truncating it to 19. However, these truncations aren't happening in the Unix version of the code. I want the Windows port to have exactly the same output, so these rounding errors are really killing me. :(
I've tried cycling through the four rounding modes in float.h, but I just get new discrepancies (as the entire array is rounded differently). Does anyone know anything about floating point representation/rounding/casting differences between Unix and Windows in C? I haven't been able to find much help by searching the web.
Quote:MB use ceil() and forget about representation of float/double?
No, 99% of the array values are cast correctly. Putting in a ceil() would skew them in addition to fixing the 1% that are truncated mysteriously.
The routine by design should floor/truncate the values. The problem is that Unix and Windows seem to be flooring/truncating the same double result to different integers.
Quote:Original post by piejacked
The routine by design should floor/truncate the values. The problem is that Unix and Windows seem to be flooring/truncating the same double result to different integers.
Mkay, what about floor() :)
And yes, maybe you are using something like gcc's -ffast-math switch?
No, actually. I'm using Visual C++ 6 to compile my Windows app, and gcc to compile on Unix. That's probably a good point. I'm not sure which compiler options in Visual C++ affect floating point data, though. I do know that I'm not using any optimizations.
How would flooring help? I mean, flooring is built in to the cast, what with the truncation and all, right? I have all positive values.
EDIT: Oh, and my Unix compiler options are -g, -w, -O6 (don't know what this does), and -DSUNOS41 (don't know what this does either). Not my makefile. I do know the creator of the makefile though, I'll ask him what the options mean.
I might try adding not 0.1, but a double epsilon to my result before casting, but that's kind of a hackish solution.
Apparently -O6 enables a bunch of optimizations (the same as -O2 or -O3), but not fast math or anything like that. Still don't know what -DSUNOS41 does. Maybe I should turn on the /O2 option in visual C.
First of all, a good rule of thumb is that you should never expect any floating point calculations to be fully reproducible. Floating point is an approximation; use fixed point if you need predictable results.
In some cases it's possible to build deterministic systems anyway, but the results will be fragile at best. So unless you wrote the calculation in assembler, the only relatively safe way is to use the same compiler with identical settings, preferably the same object file. But beware that a newer version of the compiler, a different optimization setting, slightly restructured code or just about anything can potentially affect the results.
I hope you're running the x86 version though, otherwise it'll be next to impossible to get identical results.
Quote:Original post by piejacked
Still don't know what -DSUNOS41 does.
That flag is equivalent to defining the macro SUNOS41 with the preprocessor. Non-portable code often uses such constants to adapt to specific environments (i.e. special-case code for SunOS).
In VC you can use the /Op (improve floating point consistency) compiler option to try to get the results you want at the cost of some perf. But in general, as doynax says, you can't really expect floating point calculations to be identical down to the last bit when you start changing around compilers, optimization modes, fpu architectures, etc, etc.