
## Re: [Mingw-users] bug in strtod()

Re: [Mingw-users] bug in strtod() From: Peter Rockett - 2013-12-13 10:39:39

```
On 12/12/13 16:49, Keith Marshall wrote:
> On 12/12/13 01:15, sisyphus1@... wrote:
>> Interestingly, this one is similar to 95e20 in that it can correctly
>> be represented in 54 bits, and the problem is in the rounding to 53
>> bits. I don't know if mingw is affected by any of these - they all
>> seem to involve a mantissa of more than 15 significant decimal
>> digits. (One wonders how they might be greeted if presented to
>> *this* list.)
>
> The 64-bit IEEE 754 format provides:
>
>   1 sign bit;
>   11 exponent bits;
>   53 mantissa bits; (52 stored, 1 implied).
>
> The maximum number of significant decimal digits which that 53-bit
> mantissa can *reliably* represent is given by:
>
>   53 log10( 2 ) = 15.95
>
> While this is close to 16, you cannot really claim a fraction of a
> decimal digit, so only 15 are actually available; (what the fraction
> means is that, over part of the range of representable values, there
> may be 16 significant digits available, but over the entire range,
> you can claim only 15).
>
> The example which you cite:
>
>> And then there's
>> http://www.exploringbinary.com/incorrectly-rounded-conversions-in-gcc-and-glibc/
>>
>> which contains this example:
>> [quote]
>> 0.500000000000000166533453693773481063544750213623046875,
>> which equals 2^-1 + 2^-53 + 2^-54, converts to this binary number:
>> 0.100000000000000000000000000000000000000000000000000011
>>
>> It has 54 significant bits, with bits 53 and 54 equal to 1. The
>> correctly rounded value — in binary scientific notation — is
>> 1.000000000000000000000000000000000000000000000000001 x 2^-1
>>
>> gcc and glibc strtod() both compute the converted value as
>> 1.0000000000000000000000000000000000000000000000000001 x 2^-1
>> which is one ULP below the correctly rounded value.
>> [end quote]
>
> offers a sample number with 54 significant decimal digits; this
> requires a minimum of:
>
>   54 / log10( 2 ) = 179.38 binary digits
>
> (which we *must* round away from zero, to 180 bits), to represent it
> exactly. To claim that it can be represented in fewer, (e.g. the 54
> suggested), is mathematically bogus; such a claim relies on a grossly
> unsafe assumption -- that you know precisely what the state of the
> 126 unrepresented bits must be. Obviously, this is sheer balderdash.
>
> Clearly, on input you may continue to generate mantissa bits beyond
> the 53 which you can fit in the IEEE 754 64-bit representation, and
> then round to 53. If you generate only 54, (as your example
> suggests), then if the 54th bit is a zero, rounding entails simply
> discarding it. OTOH, if it is a one, (again as in your example), you
> may round either away from zero, or toward zero, and either could be
> regarded as round to nearest, as both are equally near, (if you
> actually believe the accuracy of your data to this extreme precision,
> in the first place). To break such a tie, mathematicians normally
> advocate rounding away from zero, (which is reasonable, since there
> is likely to be another one bit somewhere in the following
> unrepresented bits, which would tend to bias the rounding toward the
> next representable value away from zero). However, there are some,
> (such as Donald Knuth), who advocate rounding the tie to the nearest
> *even* representable value, on the grounds that, statistically,
> around 50% of values will round toward zero, while the remaining 50%
> round away from zero, and the resulting rounding errors will tend to
> cancel, in calculations.

FWIW: Section 4.3.3 of IEEE 754 mandates rounding ties to even as the
default for binary representations. (Sadly, I have a copy of this
august document!)
> Depending on which of these rounding rules a particular
> implementation adopts, (round to nearest, with ties away from zero,
> or round to nearest even value), you may see differences in the
> least significant bit; to say that one method is definitively wrong,
> and the other correct, is something of a stretch.

Alternatively, it may be that the OP is somehow accessing two
different implementations of strtod in his tests. Exactly how they
are written may determine how and where the rounding errors build up.

I recall, a number of years ago, a colleague trying to calculate the
singular value decomposition (SVD) of some truculent matrices. (SVD
is a complicated algorithm, for those who don't know it!) Only one
SVD implementation, written by G. W. (Pete) Stewart
[http://www.cs.umd.edu/~stewart/], gave consistently sensible
results. My explanation is that, because he is a master of
computational linear algebra, Stewart's implementation kept the
rounding errors as small as they could be, whereas other implementers
with a lesser grasp failed to do this. So we might be talking about
seemingly minor differences in what/how intermediate results are
calculated in the strtod implementations.

> The reality is that you are arguing about a minuscule difference in
> an arena where approximations prevail, and for most uses, both
> approximations are equally good.

So true! If you do any serious floating-point calculations and end up
with errors in the 1 ulp range, you should be punching the air with
hysterical delight!

P.

...snip
```