From: Keith MARSHALL <keith.marshall@to...>  2004-10-06 13:04:15

> IMO, anyone doing serious math work with x86 should know about the
> extra precision feature and code with it in mind. ...

And anyone doing serious math work should also have sufficient grasp
of error theory to realise that, with the limited precision math
capabilities accorded by digital computers, the subtraction of two
large but similar numbers is an inherently unsafe operation!

In the OP's examples, the roundoff errors indicated are entirely
insignificant, at the level of precision available.

Best regards,
Keith.
From: Keith MARSHALL <keith.marshall@to...>  2004-10-06 15:28:25

>>> IMO, anyone doing serious math work with x86 should know about the
>>> extra precision feature and code with it in mind. ...
>>
>> And anyone doing serious math work should also have sufficient
>> grasp of error theory to realise that, with the limited precision
>> math capabilities accorded by digital computers, the subtraction of
>> two large but similar numbers is an inherently unsafe operation!
>>
>> In the OP's examples, the roundoff errors indicated are entirely
>> insignificant, at the level of precision available.
>
> I think the OP is testing the ``stability'' of the FP system.  I'm
> not an expert on this matter, but IIRC some applications need a well
> defined behavior on the edge of precision (``well defined'' as in
> ``reproducible'').  Nonlinear dynamics comes to mind.  If you are
> not aware of the 80-bit / 64-bit duality of the x86 FPU, an
> apparently innocuous rearrangement of code on such an application
> could produce very different results, due to introduction or
> suppression of 80-bit -> 64-bit conversions.

That may well be the case.  I was merely pointing out that, if we test
a mathematical premise on the basis of an unsound test case, then we
shouldn't be too surprised if the results aren't what we expect.

In the OP's test case, the subtraction of one very large number from
another of similar magnitude discards all significance in the result.
Error analysis tells us we can have no confidence in the accuracy of
this result, so why should we care that differing implementations of
the math routines give differing erroneous results?  Mathematically,
what *is* important is that the result is unreliable, regardless of
which math routine implementation produced it.

In my experience, as an engineer, it is all too common to see
calculations based on physical measurements, where the accuracy of
the measurements is not considered, and where unwarranted significance
is attached to a difference between two experiments, yet error
analysis would show that the calculated results do, in fact, agree,
within the reproducible accuracy of the original measurements.

Best regards,
Keith.
From: Igor Mikolic-Torreira <igormtnews@co...>  2004-10-06 16:04:23

Keith MARSHALL wrote:

>>>> IMO, anyone doing serious math work with x86 should know about the
>>>> extra precision feature and code with it in mind. ...
>>>
>>> And anyone doing serious math work should also have sufficient
>>> grasp of error theory to realise that, with the limited precision
>>> math capabilities accorded by digital computers, the subtraction of
>>> two large but similar numbers is an inherently unsafe operation!
>>>
>>> In the OP's examples, the roundoff errors indicated are entirely
>>> insignificant, at the level of precision available.
>>
>> I think the OP is testing the ``stability'' of the FP system.  I'm
>> not an expert on this matter, but IIRC some applications need a well
>> defined behavior on the edge of precision (``well defined'' as in
>> ``reproducible'').  Nonlinear dynamics comes to mind.  If you are
>> not aware of the 80-bit / 64-bit duality of the x86 FPU, an
>> apparently innocuous rearrangement of code on such an application
>> could produce very different results, due to introduction or
>> suppression of 80-bit -> 64-bit conversions.
>
> That may well be the case.  I was merely pointing out that, if we
> test a mathematical premise on the basis of an unsound test case,
> then we shouldn't be too surprised if the results aren't what we
> expect.
>
> In the OP's test case, the subtraction of one very large number
> from another of similar magnitude discards all significance in the
> result.  Error analysis tells us we can have no confidence in the
> accuracy of this result, so why should we care that differing
> implementations of the math routines give differing erroneous
> results?  Mathematically, what *is* important is that the result
> is unreliable, regardless of which math routine implementation
> produced it.
>
> In my experience, as an engineer, it is all too common to see
> calculations based on physical measurements, where the accuracy
> of the measurements is not considered, and where unwarranted
> significance is attached to a difference between two experiments,
> yet error analysis would show that the calculated results do, in
> fact, agree, within the reproducible accuracy of the original
> measurements.
>
> Best regards,
> Keith.

The issue is that floating point is supposed to follow specific rules
(IEEE 754) that ensure the reliability of certain key mathematical
algorithms.  The "excess" precision of the Intel FPU can lead to
violations of those rules, and thus to the failure of certain
algorithms.  Of note, this is not uniquely an Intel issue: the PowerPC
chip has fused multiply-add instructions that can also violate
IEEE 754 and cause algorithms to fail.

This is a very technical, perhaps even arcane, subject.  I highly
recommend http://cch.loria.fr/documentation/IEEE754/ACM/goldberg.pdf
as an introduction.

Some of the options on gcc are intended to ensure appropriate
IEEE 754 behavior, even in the presence of "extra precision" in the
Intel FPU (and gcc has analogous options for PowerPC code generation).
As I understand it, the question is whether or not mingw gcc
implements those switches correctly (I make no claim that it does or
doesn't -- I'm just trying to clarify the discussion).

Igor
From: Oscar Fuentes <ofv@wa...>  2004-10-06 13:54:47

Keith MARSHALL <keith.marshall@...> writes:

>> IMO, anyone doing serious math work with x86 should know about the
>> extra precision feature and code with it in mind. ...
>
> And anyone doing serious math work should also have sufficient
> grasp of error theory to realise that, with the limited precision
> math capabilities accorded by digital computers, the subtraction of
> two large but similar numbers is an inherently unsafe operation!
>
> In the OP's examples, the roundoff errors indicated are entirely
> insignificant, at the level of precision available.

I think the OP is testing the ``stability'' of the FP system.  I'm
not an expert on this matter, but IIRC some applications need a well
defined behavior on the edge of precision (``well defined'' as in
``reproducible'').  Nonlinear dynamics comes to mind.  If you are not
aware of the 80-bit / 64-bit duality of the x86 FPU, an apparently
innocuous rearrangement of code on such an application could produce
very different results, due to introduction or suppression of
80-bit -> 64-bit conversions.

Oscar
From: Ian D. Gay <gay@sf...>  2004-10-06 16:07:52

On Wed, Oct 06, 2004 at 01:56:34PM +0100, Keith MARSHALL wrote:
> > IMO, anyone doing serious math work with x86 should know about the
> > extra precision feature and code with it in mind. ...
>
> And anyone doing serious math work should also have sufficient
> grasp of error theory to realise that, with the limited precision
> math capabilities accorded by digital computers, the subtraction of
> two large but similar numbers is an inherently unsafe operation!
>
> In the OP's examples, the roundoff errors indicated are entirely
> insignificant, at the level of precision available.
>
> Best regards,
> Keith.

Yes indeed.  Also remember that d(e^x) = (e^x)dx, so the fractional
error in exp(x) is equal to the absolute error in x.  Thus, if x is
near 1, and has a rounding error of 1e-15, then exp(x) has a
fractional error of about 1e-15, which is fine.

However, suppose x is large, as in the OP's original post.  If
x = 10^10, and has 15 correct digits, then its absolute error is
about 10^-5.  So the fractional error in e^x is now also about 10^-5,
i.e. 15 correct digits in x can only yield 5 correct digits in e^x.
It only gets worse as x gets still bigger.  This is due to the
mathematical properties of the exponential function, and no amount of
twiddling with the floating-point implementation can make the
situation better.