Floating point precision varies
I am having some trouble with a floating point calculation in C++ that varies in precision depending on where it is called from.
In a DLL, I have a class with a static method:
class Test
{
public:
    static void test()
    {
        double f = -0.6;
        double s = 0.5;
        double sum = s + f;
    }
};
If I call test() from an EXE (that links against the DLL), I get the expected result of sum = -0.099999999999999978 (normal double imprecision).
But if I call test() from inside the DLL, for instance in the Test constructor, I get a different result: sum = -0.10000000149011612 (unacceptable imprecision for a double, IMHO).
I am using VS 2008 Express, and there are no differences in the project settings of the EXE project and the DLL project - and just to be sure, the floating point model is set to "precise" (/fp:precise) in both.
So what is going on? Why this big difference? And is the imprecision to be expected after all?
The most likely culprit is not using D3DCREATE_FPU_PRESERVE when you initialize D3D.
Also see http://blogs.msdn.com/oldnewthing/archive/2008/07/03/8682463.aspx for some other ways it can get messed up.
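For reference, D3DCREATE_FPU_PRESERVE is one of the BehaviorFlags passed to IDirect3D9::CreateDevice; without it, D3D9 lowers the x87 FPU to single precision for the lifetime of the device, which affects every double in the process. A sketch of where the flag goes (Windows/D3D9 only; d3d9, hwnd, presentParams and device are assumed to be set up already and are not shown):

```cpp
// Sketch only: d3d9 (IDirect3D9*), hwnd, presentParams and device
// are assumed to exist. D3DCREATE_FPU_PRESERVE tells D3D9 not to
// lower the FPU precision, at a small performance cost.
HRESULT hr = d3d9->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    hwnd,
    D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_FPU_PRESERVE,
    &presentParams,
    &device);
```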
Last year I spoke with one of my professors about this, and he said that in a number of cases it is still unknown exactly which way it will round. A double can only represent so many possible numbers (albeit a huge number of them), but when a number falls between two possible representations, the results can get weird. Sometimes it is rounded up, sometimes down, and in some cases it differs depending on the actual hardware as well.
Quote:Original post by Cluq
I am having some trouble with a floating point calculation in C++ that varies in precision depending on where it is called from.
Why is the variance in precision causing you trouble?
Quote:Original post by Zahlman
Why is the variance in precision causing you trouble?
Well, I think the variance is larger than what is to be expected of a double, and such a large variance will inevitably accumulate. When one deliberately decides to use a double over a float, it must mean that the greater precision is necessary in that case. So when that precision is lost, it causes trouble.
Quote:Original post by Lawtonfogle
Last year I spoke with one of my professors about this, and he said that in a number of cases it is still unknown exactly which way it will round. A double can only represent so many possible numbers (albeit a huge number of them), but when a number falls between two possible representations, the results can get weird. Sometimes it is rounded up, sometimes down, and in some cases it differs depending on the actual hardware as well.
I agree that rounding is going to happen eventually, but in this case I expected the double to be rounded at a less significant digit.
Quote:Original post by Adam_42
The most likely culprit is not using D3DCREATE_FPU_PRESERVE when you initialize D3D.
Also see http://blogs.msdn.com/oldnewthing/archive/2008/07/03/8682463.aspx for some other ways it can get messed up.
D3DCREATE_FPU_PRESERVE did not do the trick, so the cause must lie in the linking to the DLL somehow, as your link describes. But I do not know how to fix the problem.
I am trying to recreate the problem in a smaller example (the problem currently only exists in my engine), but have not been successful. Perhaps the problem still lies in DirectX, as it has not been used in my small test example yet.
But any comments will still be greatly appreciated.
Your best bet is to change it back again after loading the DLL. To set the precision back to normal you call:
_controlfp(_PC_64, _MCW_PC);
See http://msdn.microsoft.com/en-us/library/c9676k6h(VS.80).aspx for a list of all the other settings - you might want to set them all to known values just in case.
Also note that DLLs created using a recent compiler shouldn't change those settings at all.
Well, butter me up, smack my a$$ and call me Judith - the problem has been fixed! I don't know what I did, but now it works.
I played around yesterday with some project settings, but couldn't get it working. So the first thing I did this morning was a full rebuild of everything - and that apparently did it.
I hate it when these things happen - I have no way of knowing what to do if it comes back. Oh well, thanks for all the replies, and I will have a look at the link above later - perhaps incorporate a reset of the floating point state each time a DLL is loaded.
Have a nice day all.
Just for future reference, you should know that there is even more variance in precision than just the float/double issue you mentioned.
Internally, the FP units can operate at even higher precision, and micro-ops inside the core can take place at yet another precision. In these cases, the order of execution can make seemingly random changes to the lowest-order digits of a floating point number.
The variance is still within the standards-required precision, so it isn't a bug.
When you are told there is a specific relative error, all you know is that your value will be within that error. Running the calculations in several different places in code is not guaranteed to give you the same result in each place, just the same result within the specific relative error.
This topic is closed to new replies.