From: K. Frank <kfrank29.c@gm...> - 2011-05-02 21:13:38

Hi Peter!

On Mon, May 2, 2011 at 3:42 PM, Peter Rockett <p.rockett@...> wrote:
> Forgive the top-post but this whole disagreement seems concerned with
> machine numbers,

I like your term "machine numbers." I hadn't heard it before. I believe
this is what I've been talking about.

> that small set of floats that can be represented exactly:
> the trailing zeroes beyond the least-significant mantissa bit are indeed
> zeroes

Yes, I'm talking about unusual use cases where the calculations work with
that special set of real numbers that can be represented exactly by
floating-point numbers. (Unless I misunderstand, these are your "machine
numbers.")

> Machine numbers have an important role in numerical algebra since
> they are zero-error numbers and hence probe the intrinsic accuracy of a
> routine rather than routine + error of input quantity convolved together
> in some way. But as far as I am aware, machine numbers are only ever used
> for testing numerical routines - since you cannot say that a general float
> is a machine number or just the nearest approximation to something nearby
> on the real number line, I am puzzled by the points annotated at the
> bottom of this post. Or do these refer only to testing?

Testing of numerical routines - that's a good point. This is a use case I
wasn't aware of before, but it certainly makes sense.

> P.
>
> On 02/05/2011 18:17, K. Frank wrote:
>> Hello Charles and Keith!
>> On Mon, May 2, 2011 at 11:30 AM, Charles Wilson wrote:
>>> On 5/2/2011 2:34 AM, Keith Marshall wrote:
>>>> On 01/05/11 14:36, K. Frank wrote:
>>>>> [snip]
>> ...
>>>> but since you don't have those bits available, you have no technically
>>>> defensible basis, in the general case, for making such an assumption;
>>>> your argument is flawed, and IMO technically invalid.
>>
>> On the contrary, even though the bits are not available (i.e. are not
>> stored in memory), it is still possible in my specific case (not your
>> general case) to know (not assume) that these bits are zero. My
>> technically defensible basis for knowing this is that my calculation is
>> structured so that these bits being zero is an invariant of the
>> calculation.
>
> This puzzles me! Do you mean testing of numerical routines? Can't see how
> general calculations can make use of machine numbers...

I agree that general calculations don't make use of machine numbers. My
point is that there are certain specialized calculations that do.

You mentioned a use case new to me - testing of numerical routines. Some
others are:

Interval arithmetic, in which a floating-point number is an exact
representation of a real number that is a strict upper (or lower) bound to
a quantity being studied.

As an optimization in arbitrary-precision arithmetic: There was a symbolic
algebra package (I believe that it was Mathematica, but it may have been
one of its predecessors) that used floating-point numbers to exactly
represent real numbers, and then cut over to a full-blown
arbitrary-precision package when the result of a calculation required it.
(As I recall, this optimization was only used in the first few versions of
the package. I believe that it was never bug-free, presumably because it
was too hard to detect the necessary cut-over point correctly while still
achieving useful gains in performance.)

The use case I dealt with directly was that of implementing a
random-number generator in the stack of an 80287 FPU. (I don't remember
for sure, but I believe it was a linear-congruential generator.) The
mathematics that gives you good quality random numbers requires that the
calculations be carried out exactly, not just to some finite precision,
that is, that they be carried out solely within the set of machine
numbers.

This is all I've been saying - that there are relatively rare, but
legitimate use cases where you use floating-point numbers as machine
numbers, that is, to represent specific real numbers exactly. It's hardly
a big deal, but in these cases it's nice to be able to print these numbers
out exactly in decimal.

Best.

K. Frank
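K. Frank's point about printing a machine number exactly is easy to demonstrate. A minimal Python sketch (using ordinary doubles rather than the 80-bit long doubles under discussion; Python's `decimal` module renders the stored binary value exactly):

```python
from decimal import Decimal

x = 1 + 2**-10        # a "machine number": exactly representable in binary
print(Decimal(x))     # exact decimal expansion of the stored value
# 1.0009765625

y = 0.1               # NOT a machine number: 1/10 has no finite binary form
print(Decimal(y))     # prints the nearest double, exactly, to 55 digits
```

The first value prints with a short, exact decimal expansion; the second shows that "exact output" for a non-machine number merely transcribes the binary approximation.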
From: Vicent Segui Pascual <vseguip@pi...> - 2011-05-02 20:20:06

Sent from my HTC

----- Reply message -----
From: mingw-users-request@...
To: <mingw-users@...>
Subject: MinGW-users Digest, Vol 60, Issue 5
Date: Mon, May 2, 2011 19:17

Today's Topics:

   1. Re: [Mingw-w64-public] Math library discrepancies that surprised me. (Keith Marshall)
   2. Re: [Mingw-w64-public] Math library discrepancies that surprised me. (Charles Wilson)
   3. Re: [Mingw-w64-public] Math library discrepancies that surprised me. (Keith Marshall)
   4. Re: [Mingw-w64-public] Math library discrepancies that surprised me. (K. Frank)

----------------------------------------------------------------------

Message: 1
Date: Mon, 02 May 2011 07:34:20 +0100
From: Keith Marshall <keithmarshall@...>
Subject: Re: [Mingw-users] [Mingw-w64-public] Math library discrepancies that surprised me.
To: mingw-users@...
Message-ID: <4DBE506C.4070803@...>
Content-Type: text/plain; charset=ISO-8859-1

On 01/05/11 14:36, K. Frank wrote:
> Hello Keith!
>
> On Sun, May 1, 2011 at 3:43 AM, Keith Marshall
> <keithmarshall@...> wrote:
>> On 30/04/11 23:14, Kai Tietz wrote:
>>> long double (80-bit):
>>> digits for mantissa: 64
>>> ...
>>
>> Alas, this sort of mathematical ineptitude seems to be all too common
>> amongst the programming fraternity. It isn't helped by a flaw in the
>> gdtoa implementation common on BSD and GNU/Linux, (and also adopted by
>> Danny into mingwrt); it fails to implement the Steele and White stop
>> condition correctly, continuing to spew out garbage beyond the last bit
>> of available precision, creating an illusion of better accuracy than is
>> really achievable.
> If I understand your point here, your complaint is that gdtoa will happily
> generate more than twenty (or nineteen, if you will) decimal digits for a
> gcc long double. (I won't speak to the technical correctness of gdtoa;
> for all I know, it has bugs.) I think your point is that those "extra"
> decimal digits represent false (and misleading) precision.
>
> (Sorry if I've misinterpreted your comment.)

My complaint is that, long after the last bit of precision has been
interpreted, gdtoa (which is at the heart of printf()'s floating point
output formatting) will continue to spew out extra decimal digits, based
solely on the residual remainder from the preceding digit conversion, with
an arbitrary number of extra zero bits appended. (Thus, gdtoa makes the
unjustified and technically invalid assumption that the known bit
precision may be arbitrarily extended to ANY LENGTH AT ALL, simply by
appending zero bits in place of the less significant unknowns).

> I don't agree with this.

Well, you are entitled to your own opinion; we may agree to disagree.

> In most cases, it is not helpful to print out a long double to more
> than twenty decimal places, but sometimes it is. The point is that it
> is not the case that floating-point numbers represent all real
> numbers inexactly; rather, they represent only a subset of real
> numbers exactly. If I happen to be representing a real number exactly
> with a long double, I might wish to print it out with lots (more than
> twenty) decimal digits. Such a use case is admittedly rare, but not
> illegitimate.

This may be acceptable, provided you understand that those additional
digits are of questionable accuracy. When you attempt to claim an
accuracy which simply isn't available, then I would consider that it is
most definitely illegitimate.

> Let's say that I have a floating-point number with ten binary digits, so
> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
> I can use such a number to represent 1 + 2^-10 exactly.

Well, yes, you can if we allow you an implied eleventh bit, as most
significant, normalised to 1; thus your mantissa bit pattern becomes:

   10000000001B

> I can print this number out exactly in decimal using ten digits after
> the decimal point: 1.0009765625. That's legitimate, and potentially a
> good thing.

Sorry, but I couldn't disagree more. See, here you are falling into the
gdtoa trap. You have an effective bit precision of eleven, which gives
you:

   11 / log2(10) = 3.311 decimal digits (i.e. 3 full digits)

> If I limit myself to three digits after the decimal point I get 1.001
> (rounding up).

You can't even claim that. Once again, you are confusing decimal places
and significant digits. You may claim AT MOST 3 significant digits, and
those are 1.00; (significance begins at the leftmost non-zero digit
overall, not just after the radix point).

> Sure, this is not a common use case, but I would prefer that the software
> let me do this, and leave it up to me to know what I'm doing.

I would prefer that software didn't try to claim the indefensible. Your
1.0009765625 example represents 11 significant decimal digits of
precision. To represent that in binary, you need a minimum of:

   11 * log2(10) = 36.54 bits

(which we must round up to 37 bits). While a mantissa of 10000000001B MAY
equate exactly to your example value of 1 + 2^-10, it is guaranteed to be
exact to 11 decimal digits of significance only if its normalised 37-bit
representation is:

   1000000000100000000000000000000000000B

Since you have only 11 bits of guaranteed binary precision available, you
are making a sweeping assumption about those extra 26 bits; (they must ALL
be zero). If you know for certain that this is so, then okay, but since
you don't have those bits available, you have no technically defensible
basis, in the general case, for making such an assumption; your argument
is flawed, and IMO technically invalid.

--
Regards,
Keith.
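Keith's digit arithmetic can be checked mechanically. A small Python sketch of the same reasoning (doubles stand in for the 11-bit toy format; the point illustrated is that digits printed beyond `bits / log2(10)` transcribe the stored binary value rather than adding accuracy):

```python
import math

bits = 11                      # 10 stored mantissa bits + implied leading 1
digits = bits / math.log2(10)  # guaranteed significant decimal digits
print(round(digits, 3))        # 3.311, i.e. 3 full digits

# Asking printf-style formatting for more digits than the precision
# supports just spells out the binary approximation in decimal:
print(f"{0.1:.30f}")           # digits past ~17 describe the stored double
```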
Message: 2
Date: Mon, 02 May 2011 11:30:52 -0400
From: Charles Wilson <cwilso11@...>
Subject: Re: [Mingw-users] [Mingw-w64-public] Math library discrepancies that surprised me.
To: MinGW Users List <mingw-users@...>
Message-ID: <4DBECE2C.1050603@...>
Content-Type: text/plain; charset=ISO-8859-1

On 5/2/2011 2:34 AM, Keith Marshall wrote:
> On 01/05/11 14:36, K. Frank wrote:
>> [snip]
> My complaint is that, long after the last bit of precision has been
> interpreted, gdtoa (which is at the heart of printf()'s floating point
> output formatting) will continue to spew out extra decimal digits, based
> solely on the residual remainder from the preceding digit conversion,
> with an arbitrary number of extra zero bits appended. (Thus, gdtoa
> makes the unjustified and technically invalid assumption that the known
> bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply
> by appending zero bits in place of the less significant unknowns).
>
>> I don't agree with this.
>
> Well, you are entitled to your own opinion; we may agree to disagree.
>
>> In most cases, it is not helpful to print out a long double to more
>> than twenty decimal places, but sometimes it is. The point is that it
>> is not the case that floating-point numbers represent all real
>> numbers inexactly; rather, they represent only a subset of real
>> numbers exactly.

But the problem is, if I send you a floating point number that represents
the specific real number which I have in mind, exactly, YOU don't know
that. All you have is a particular floating point number that represents
the range [value - ulps/2, value + ulps/2). You have no idea that I
actually INTENDED to communicate EXACTLY "value" to you.

Ditto for the results of a long computation. I get back as the output
some f.p. rep - I don't *know* that the actual result of the computation
is exactly the value of that rep. All I know is the result is not
representable with more accuracy by any OTHER f.p. rep with the same
precision.

>> If I happen to be representing a real number exactly
>> with a long double, I might wish to print it out with lots (more than
>> twenty) decimal digits. Such a use case is admittedly rare, but not
>> illegitimate.

No, that's always illegitimate (i.e. misleading). Imagine I wrote a
scientific paper concerning an experiment with 17 trials, and my
individual measurements had a precision of 3 sig. digits (all of the same
order of magnitude). I can't say that the mean result had 20 sig. digits
simply because I can't represent the result of dividing by 17 exactly
using only 3 sig. digits. It's not accurate to extend the precision of
the sum by appending zeros, simply so that I get more digits of "apparent
precision" after dividing by 17. My paper would be rejected - and rightly
so.

> This may be acceptable, provided you understand that those additional
> digits are of questionable accuracy. When you attempt to claim an
> accuracy which simply isn't available, then I would consider that it is
> most definitely illegitimate.
>
>> Let's say that I have a floating-point number with ten binary digits, so
>> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
>> I can use such a number to represent 1 + 2^-10 exactly.
>
> Well, yes, you can if we allow you an implied eleventh bit, as most
> significant, normalised to 1; thus your mantissa bit pattern becomes:
>
>    10000000001B
>
>> I can print this number out exactly in decimal using ten digits after
>> the decimal point: 1.0009765625. That's legitimate, and potentially a
>> good thing.

But 10000000001B does NOT mean "1 + 2^-10". It means "with the limited
precision I have, I can't represent the actual value of real number R
more accurately with any other bit pattern than this one".

> Sorry, but I couldn't disagree more. See, here you are falling into the
> gdtoa trap. You have an effective bit precision of eleven, which gives
> you:
>
>    11 / log2(10) = 3.311 decimal digits (i.e. 3 full digits)
>
>> If I limit myself to three digits after the decimal point I get 1.001
>> (rounding up).
>
> You can't even claim that. Once again, you are confusing decimal places
> and significant digits. You may claim AT MOST 3 significant digits, and
> those are 1.00; (significance begins at the leftmost non-zero digit
> overall, not just after the radix point).

See, here's the problem: 1.0009765625 means: I can distinguish between
the following three numbers:

  a) 1.0009765624
  b) 1.0009765625
  c) 1.0009765626

and the real number R is closer to (b) than to (a) or (c). But with 10
bits, you CAN'T distinguish between those three numbers: the same 10 bit
pattern must be used to represent all three. In fact, the best you can do
with 10 bits is distinguish between the following three reps:

  (a) 0.999
  (b) 1.00
  (c) 1.01

(Note, because of the normalization shift between (a) and (b/c), the
accuracy *appears* to change in magnitude by a factor of 10, but that's
simply an artifact of the base-10 representation - in base two the
normalization shift effect is only a factor of 2, not 10). Once again,
the real number R is closer to (b) than to (a) or (c), and that's the
best you can do with 10 bits (3 significant decimal digits).

>> Sure, this is not a common use case, but I would prefer that the software
>> let me do this, and leave it up to me to know what I'm doing.
>
> I would prefer that software didn't try to claim the indefensible. Your
> 1.0009765625 example represents 11 significant decimal digits of
> precision. To represent that in binary, you need a minimum of:
>
>    11 * log2(10) = 36.54 bits
>
> (which we must round up to 37 bits). While a mantissa of 10000000001B
> MAY equate exactly to your example value of 1 + 2^-10, it is guaranteed
> to be exact to 11 decimal digits of significance only if its normalised
> 37-bit representation is:
>
>    1000000000100000000000000000000000000B
>
> Since you have only 11 bits of guaranteed binary precision available,
> you are making a sweeping assumption about those extra 26 bits; (they
> must ALL be zero). If you know for certain that this is so, then okay,
> but since you don't have those bits available, you have no technically
> defensible basis, in the general case, for making such an assumption;
> your argument is flawed, and IMO technically invalid.

Agree. But this whole discussion is rather beside the point I think,
which started with a real discrepancy in the actual bit pattern produced
by sqrt(x) and pow(x, 0.5). e.g.

> So, we would like sqrt (x) and pow (x, 0.5) to agree. We would like
> compile-time and run-time evaluations to agree. We would like
> cross-compilers and native compilers to agree.

This is a binary bit pattern issue, not a gdtoa base-10 conversion issue.

--
Chuck

----------------------------------------------------------------------

Message: 3
Date: Mon, 02 May 2011 17:00:32 +0100
From: Keith Marshall <keithmarshall@...>
Subject: Re: [Mingw-users] [Mingw-w64-public] Math library discrepancies that surprised me.
To: mingw-users@...
Message-ID: <4DBED520.6080804@...>
Content-Type: text/plain; charset=ISO-8859-1

On 02/05/11 16:30, Charles Wilson wrote:
>> So, we would like sqrt (x) and pow (x, 0.5) to agree. We would
>> like compile-time and run-time evaluations to agree. We would like
>> cross-compilers and native compilers to agree.
>
> This is a binary bit pattern issue, not a gdtoa base 10 conversion
> issue.

Absolutely agree. Others side-tracked the issue into the realms of
representable precision - a DIFFERENT issue around which confusion
appears to abound. This confusion is compounded by the gdtoa stopping
anomaly. Just wanted to clarify that. Enough said.

--
Regards,
Keith.
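Charles's "a bit pattern names a range, not an exact value" reading can be illustrated with Python 3.9's `math.ulp` and `math.nextafter` (doubles here, rather than the 10-bit toy format in the thread):

```python
import math

x = 1.0009765625                 # the stored double
lo = math.nextafter(x, 0.0)      # nearest representable neighbour below
hi = math.nextafter(x, 2.0)      # nearest representable neighbour above

# The spacing between adjacent doubles near x:
print(hi - x == math.ulp(x))

# Any real number closer to x than to lo or hi rounds to the SAME bit
# pattern, so the pattern alone cannot certify an exact intended value.
print(lo, x, hi)
```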
Message: 4
Date: Mon, 2 May 2011 13:17:28 -0400
From: "K. Frank" <kfrank29.c@...>
Subject: Re: [Mingw-users] [Mingw-w64-public] Math library discrepancies that surprised me.
To: MinGW Users List <mingw-users@...>
Message-ID: <BANLkTi=aMb+QqnLAG_FGPm0J8t6Zp3gcw@...>
Content-Type: text/plain; charset=ISO-8859-1

Hello Charles and Keith!

On Mon, May 2, 2011 at 11:30 AM, Charles Wilson wrote:
> On 5/2/2011 2:34 AM, Keith Marshall wrote:
>> On 01/05/11 14:36, K. Frank wrote:
>>> [snip]
>> My complaint is that, long after the last bit of precision has been
>> interpreted, gdtoa (which is at the heart of printf()'s floating point
>> output formatting) will continue to spew out extra decimal digits, based
>> solely on the residual remainder from the preceding digit conversion,
>> with an arbitrary number of extra zero bits appended. (Thus, gdtoa
>> makes the unjustified and technically invalid assumption that the known
>> bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply
>> by appending zero bits in place of the less significant unknowns).
>>
>>> I don't agree with this.
>>
>> Well, you are entitled to your own opinion; we may agree to disagree.
>>
>>> In most cases, it is not helpful to print out a long double to more
>>> than twenty decimal places, but sometimes it is. The point is that it
>>> is not the case that floating-point numbers represent all real
>>> numbers inexactly; rather, they represent only a subset of real
>>> numbers exactly.
>
> But the problem is, if I send you a floating point number that
> represents the specific real number which I have in mind, exactly, YOU
> don't know that.

True. But this does not apply to all use cases. If I have generated a
floating-point number, and I happen to know that it was generated in a
way that it exactly represents a specific real number, I am fully allowed
to make use of this information.

(If you send me a floating-point number, and don't provide me the
guarantee that it is being used to represent a specific real number
exactly - and this indeed is the most common, but not the only use case -
then your comments are correct.)

> All you have is a particular floating point number
> that represents the range [value - ulps/2, value + ulps/2). You have no
> idea that I actually INTENDED to communicate EXACTLY "value" to you.

Yes, in your use case you did not intend to communicate an exact value to
me. Therefore I should not imagine the floating-point number you sent me
to be exact. And I shouldn't use gdtoa to print it out to fifty decimal
places.

> ...
>>> If I happen to be representing a real number exactly
>>> with a long double, I might wish to print it out with lots (more than
>>> twenty) decimal digits. Such a use case is admittedly rare, but not
>>> illegitimate.
>
> No, that's always illegitimate (i.e. misleading).

It's the "always" I disagree with. Your comments are correct for the use
cases you describe, but you are incorrectly implying other use cases
don't exist.

> ...
>> ...
>>> Let's say that I have a floating-point number with ten binary digits, so
>>> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
>>> I can use such a number to represent 1 + 2^-10 exactly.
>>
>> Well, yes, you can if we allow you an implied eleventh bit, as most
>> significant, normalised to 1; thus your mantissa bit pattern becomes:
>>
>>    10000000001B
>>
>>> I can print this number out exactly in decimal using ten digits after
>>> the decimal point: 1.0009765625. That's legitimate, and potentially a
>>> good thing.
>
> But 10000000001B does NOT mean "1 + 2^-10".

In some specialized use cases, it means precisely this. In my program
(assuming it's written correctly, etc.) the value means precisely what my
program deems it to mean.

> It means "with the limited
> precision I have, I can't represent the actual value of real number R
> more accurately with any other bit pattern than this one".

In the common use cases, this is a very good way to understand
floating-point numbers. But, to reiterate my point, there are use cases
where floating-point numbers are used to exactly represent a special
subset of the real numbers. These floating-point numbers mean something
exact, and therefore mean something different than your phrase, "with the
limited precision I have."

> ...
>>> Sure, this is not a common use case, but I would prefer that the software
>>> let me do this, and leave it up to me to know what I'm doing.
>>
>> I would prefer that software didn't try to claim the indefensible.
>> ...

I have used floating-point numbers and FPUs in situations where the
correctness of the calculations depended upon the floating-point numbers
representing specific real numbers exactly. I had to be very careful
doing this to get it right, but I was careful and the calculations were
correct. What more can I say? Such use cases exist.

I have found it convenient (although hardly essential) to sometimes print
those numbers out in decimal (because decimal representations are more
familiar to me), and it was therefore helpful that my formatting routine
didn't prevent me from doing this exactly. This is exactly my argument
for why the behavior of gdtoa that you object to is good in certain
specialized instances. This may be a use case that you have never
encountered, but nevertheless, there it is.

>> ...
>>
>>    1000000000100000000000000000000000000B
>>
>> Since you have only 11 bits of guaranteed binary precision available,
>> you are making a sweeping assumption about those extra 26 bits; (they
>> must ALL be zero). If you know for certain that this is so, then okay,

That's the point. There are specialized cases where I know with
mathematically provable certainty that those extra bits are all zero.
Not because they're stored in memory - they're not - but because my
calculation was specifically structured so that they had to be.

>> but since you don't have those bits available, you have no technically
>> defensible basis, in the general case, for making such an assumption;
>> your argument is flawed, and IMO technically invalid.

On the contrary, even though the bits are not available (i.e. are not
stored in memory), it is still possible in my specific case (not your
general case) to know (not assume) that these bits are zero. My
technically defensible basis for knowing this is that my calculation is
structured so that these bits being zero is an invariant of the
calculation.

Look, I've never argued or implied that this sort of use case is in any
way common, and I stated explicitly at the beginning of the discussion
that this sort of use case is atypical. Your comments are quite correct
for the way floating-point numbers are used the vast majority of the
time. I'm just pointing out that there are some unusual use cases, and
that I think it's good that gdtoa supports them.

> ...
> Chuck

Best regards.

K. Frank

End of MinGW-users Digest, Vol 60, Issue 5
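K. Frank's "bits being zero is an invariant of the calculation" claim can be sketched concretely. The generator below is a hypothetical minimal-standard LCG (parameters 16807 and 2^31 - 1 are my illustration, not his actual 80287 code): every intermediate product stays well below 2^53, so a run in double precision is provably exact and matches pure integer arithmetic bit for bit.

```python
A, M = 16807, 2147483647          # hypothetical LCG parameters: 7^5, 2^31 - 1

def lcg_float(seed, n):
    """n LCG steps entirely in doubles: A*x < 2^45 < 2^53, so every
    multiply and remainder is computed exactly (machine numbers only)."""
    x = float(seed)
    out = []
    for _ in range(n):
        x = (A * x) % M           # exact in IEEE double arithmetic
        out.append(x)
    return out

def lcg_int(seed, n):
    """Reference run in Python's exact integers."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

# The floating-point sequence reproduces the integer one exactly:
print(lcg_float(1, 5))
print(lcg_float(1, 1000) == [float(v) for v in lcg_int(1, 1000)])  # True
```

The invariant is the bound on the intermediates; break it (say, with a modulus near 2^53) and the floating-point run silently diverges from the integer one.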
From: Keith Marshall <keithmarshall@us...> - 2011-05-02 20:07:10

On 02/05/11 20:25, Peter Rockett wrote:
> On 02/05/2011 17:00, Keith Marshall wrote:
>> On 02/05/11 16:30, Charles Wilson wrote:
>>> So, we would like sqrt (x) and pow (x, 0.5) to agree.
>
> The point below about off-topic diversions into precision is valid but
> can I reiterate, there is no good reason why x^(0.5) calculated two
> different ways should agree since the two different routes will
> accumulate different rounding errors. Adding a special case to force
> agreement for x^n when n = 0.5 will introduce a discontinuity into the
> pow() function and is a very bad idea. The value returned by pow(x,
> 0.5) might be less accurate compared to sqrt(x) but that is part of the
> fundamental issue with using floating-point numbers: they are only ever
> approximations. You have to take that fact on board.

I also completely agree on this point. Rest assured that *I* have
absolutely no intention of introducing any such special case kludge into
MinGW's implementation of pow().

> Consistent values for sqrt() and pow() across platforms is another issue...

It is; I may consider an alternative implementation of pow(), if anyone
would care to contribute one, *provided* (a) it is suitably licensed,
(and evidence of this is also provided), (b) it is accompanied by a
mathematically correct proof that it is a better quality implementation
than we have at present, and (c) it is *not* festooned with kludges such
as described above.

--
Regards,
Keith.
From: Kai Tietz <ktietz70@go...> - 2011-05-02 20:05:44

2011/5/2 Peter Rockett <p.rockett@...>:
> On 02/05/2011 17:00, Keith Marshall wrote:
>> On 02/05/11 16:30, Charles Wilson wrote:
>>> So, we would like sqrt (x) and pow (x, 0.5) to agree.
>
> The point below about off-topic diversions into precision is valid but
> can I reiterate, there is no good reason why x^(0.5) calculated two
> different ways should agree since the two different routes will
> accumulate different rounding errors. Adding a special case to force
> agreement for x^n when n = 0.5 will introduce a discontinuity into the
> pow() function and is a very bad idea. The value returned by pow(x,
> 0.5) might be less accurate compared to sqrt(x) but that is part of the
> fundamental issue with using floating-point numbers: they are only ever
> approximations. You have to take that fact on board.

Correct, the ways of calculating pow (x, 0.5) and sqrt (x) are different.
Nevertheless, it is an issue in gcc: if you write pow (1.1, 0.5) or
sqrt (1.1) with constant arguments - which gcc optimizes via gmp to a
precalculated result - the two have identical values, but going through
the math library produces an (expectable) difference in the results. So
IMHO pow should special-case y = 0.5 to return the same result as sqrt.

> Consistent values for sqrt() and pow() across platforms is another issue...

Well, this is more an issue of the floating-point format used, and is for
sure a different subject. To avoid platform dependencies, you would need
to do floating-point calculations always in a fixed variant, which means
that on architectures with different floating-point formats you need to
do the calculation in software to get binary identical results (however
(in)accurate they are).

Kai
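The discrepancy Kai describes can be made visible at the bit level. A Python sketch (whether sqrt and pow actually differ here depends on the platform's libm: IEEE 754 requires sqrt to be correctly rounded but makes no such demand of pow, so both outcomes are possible):

```python
import math
import struct

def bits(x):
    """Big-endian hex of the IEEE double bit pattern."""
    return struct.pack(">d", x).hex()

for v in (1.1, 2.0, 10.0):
    s = math.sqrt(v)
    p = math.pow(v, 0.5)
    # Values that print identically with %g can still differ in the last
    # bit; comparing the raw bit patterns settles the question.
    print(f"{v}: sqrt={bits(s)} pow={bits(p)}",
          "same" if s == p else "differ")
```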
From: Peter Rockett <p.rockett@sh...> - 2011-05-02 19:42:09

Forgive the top-post, but this whole disagreement seems concerned with *machine numbers*, that small set of floats that can be represented exactly: the trailing zeroes beyond the least-significant mantissa bit are indeed zeroes. Machine numbers have an important role in numerical algebra since they are zero-error numbers, and hence probe the intrinsic accuracy of a routine, rather than the routine and the error of the input quantity convolved together in some way. But as far as I am aware, machine numbers are only ever used for testing numerical routines; since you cannot say whether a general float is a machine number or just the nearest approximation to something nearby on the real number line, I am puzzled by the points annotated at the bottom of this post. Or do these refer only to testing?

P.

On 02/05/2011 18:17, K. Frank wrote:
> Hello Charles and Keith!
>
> [...]
>
> On the contrary, even though the bits are not available (i.e. are not
> stored in memory), it is still possible in my specific case (not your
> general case) to know (not assume) that these bits are zero. My
> technically defensible basis for knowing this is that my calculation
> is structured so that these bits being zero is an invariant of the
> calculation.

This puzzles me! Do you mean testing of numerical routines? Can't see how general calculations can make use of machine numbers...

> Look, I've never argued or implied that this sort of use case is in
> any way common, and I stated explicitly at the beginning of the
> discussion that this sort of use case is atypical. Your comments are
> quite correct for the way floating-point numbers are used the vast
> majority of the time. I'm just pointing out that there are some
> unusual use cases, and that I think it's good that gdtoa supports
> them.
>
> Best regards.
>
> K. Frank
From: Peter Rockett <p.rockett@sh...> - 2011-05-02 19:25:34

On 02/05/2011 17:00, Keith Marshall wrote:
> On 02/05/11 16:30, Charles Wilson wrote:
>>> So, we would like sqrt (x) and pow (x, 0.5) to agree.

The point below about off-topic diversions into precision is valid, but can I reiterate: there is no good reason why x^(0.5) calculated two different ways should agree, since the two different routes will accumulate different rounding errors. Adding a special case to force agreement for x^n when n = 0.5 will introduce a discontinuity into the pow() function and is a very bad idea. The value returned by pow(x, 0.5) might be less accurate compared to sqrt(x), but that is part of the fundamental issue with using floating-point numbers: they are only ever approximations. You have to take that fact on board.

Consistent values for sqrt() and pow() across platforms is another issue...

P.

>>> We would
>>> like compile-time and runtime evaluations to agree. We would like
>>> cross-compilers and native compilers to agree.
>> This is a binary bit pattern issue, not a gdtoa base-10 conversion
>> issue.
> Absolutely agree. Others sidetracked the issue into the realms of
> representable precision, a DIFFERENT issue around which confusion
> appears to abound. This confusion is compounded by the gdtoa stopping
> anomaly. Just wanted to clarify that. Enough said.
From: LRN <lrn1986@gm...> - 2011-05-02 18:48:33

On 02.05.2011 20:47, tal - טל hadad - חדד wrote:
> I want to install gstreamer on MSYS (Win x32), but it depends on the libxml2 package.
> I tried this guide: http://gstreamer.freedesktop.org/wiki/BuildingGStreamerInMinGWMsys, but no matter what I do it fails.
> So I thought that I could ask you how to install libxml2 properly. The $PATH is set correctly; that much I know works well in MSYS :)
> This is the error I received while using the guide above, in the second step (the "make" command):
>
> .....
> Creating library file: .libs/libxml2.dll.a
> .libs/xmlIO.o: In function `xmlGzfileOpen_real':
> c:/Downloads/libxml2-2.6.30/xmlIO.c:1132: undefined reference to `gzopen64'
> .libs/xmlIO.o: In function `xmlGzfileOpenW':
> c:/Downloads/libxml2-2.6.30/xmlIO.c:1200: undefined reference to `gzopen64'
> collect2: ld returned 1 exit status
> make[2]: *** [libxml2.la] Error 1
> make[2]: Leaving directory `/c/Downloads/libxml2-2.6.30'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory `/c/Downloads/libxml2-2.6.30'
> make: *** [all] Error 2
>
> Can you please help me? Does someone know how to install it properly?

What's your version of zlib? I remember having problems with older zlib 1.2.3, but it was updated to 1.2.5 recently, so it should be good. Look in /mingw/include/zlib.h; the line you need starts with "#define ZLIB_VERSION".
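The version check LRN suggests can be done with a one-line grep. This is a sketch; the path assumes the standard MSYS `/mingw` mount point, so adjust it if your installation differs:

```shell
# Print the zlib version your MinGW headers declare.
# /mingw/include is the conventional MSYS mount; adjust if yours differs.
ZLIB_HEADER=/mingw/include/zlib.h
if [ -f "$ZLIB_HEADER" ]; then
    grep '#define ZLIB_VERSION' "$ZLIB_HEADER"
else
    echo "zlib.h not found at $ZLIB_HEADER" >&2
fi
```

If this reports 1.2.3, updating the mingw zlib package before rebuilding libxml2 is the fix LRN describes.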
From: tal - טל hadad - חדד <tal_hd@ho...> - 2011-05-02 17:47:57

I want to install gstreamer on MSYS (Win x32), but it depends on the libxml2 package. I tried this guide: http://gstreamer.freedesktop.org/wiki/BuildingGStreamerInMinGWMsys, but no matter what I do it fails. So I thought that I could ask you how to install libxml2 properly. The $PATH is set correctly; that much I know works well in MSYS :) This is the error I received while using the guide above, in the second step (the "make" command):

.....
Creating library file: .libs/libxml2.dll.a
.libs/xmlIO.o: In function `xmlGzfileOpen_real':
c:/Downloads/libxml2-2.6.30/xmlIO.c:1132: undefined reference to `gzopen64'
.libs/xmlIO.o: In function `xmlGzfileOpenW':
c:/Downloads/libxml2-2.6.30/xmlIO.c:1200: undefined reference to `gzopen64'
collect2: ld returned 1 exit status
make[2]: *** [libxml2.la] Error 1
make[2]: Leaving directory `/c/Downloads/libxml2-2.6.30'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/c/Downloads/libxml2-2.6.30'
make: *** [all] Error 2

Can you please help me? Does someone know how to install it properly?
From: K. Frank <kfrank29.c@gm...> - 2011-05-02 17:17:36

Hello Charles and Keith!

On Mon, May 2, 2011 at 11:30 AM, Charles Wilson wrote:
> On 5/2/2011 2:34 AM, Keith Marshall wrote:
>> On 01/05/11 14:36, K. Frank wrote:
>>> [snip]
>> My complaint is that, long after the last bit of precision has been
>> interpreted, gdtoa (which is at the heart of printf()'s floating point
>> output formatting) will continue to spew out extra decimal digits, based
>> solely on the residual remainder from the preceding digit conversion,
>> with an arbitrary number of extra zero bits appended. (Thus, gdtoa
>> makes the unjustified and technically invalid assumption that the known
>> bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply
>> by appending zero bits in place of the less significant unknowns).
>>
>>> I don't agree with this.
>>
>> Well, you are entitled to your own opinion; we may agree to disagree.
>>
>>> In most cases, it is not helpful to print out a long double to more
>>> than twenty decimal places, but sometimes it is. The point is that it
>>> is not the case that floating-point numbers represent all real
>>> numbers inexactly; rather, they represent only a subset of real
>>> numbers exactly.
>
> But the problem is, if I send you a floating point number that
> represents the specific real number which I have in mind, exactly, YOU
> don't know that.

True. But this does not apply to all use cases.

If I have generated a floating-point number, and I happen to know that it was generated in a way that it exactly represents a specific real number, I am fully allowed to make use of this information.

(If you send me a floating-point number, and don't provide me the guarantee that it is being used to represent a specific real number exactly (and this indeed is the most common, but not the only, use case), then your comments are correct.)

> All you have is a particular floating point number
> that represents the range [value-ulps/2, value+ulps/2). You have no idea
> that I actually INTENDED to communicate EXACTLY "value" to you.

Yes, in your use case you did not intend to communicate an exact value to me. Therefore I should not imagine the floating-point number you sent me to be exact. And I shouldn't use gdtoa to print it out to fifty decimal places.

> ...
>>> If I happen to be representing a real number exactly
>>> with a long double, I might wish to print it out with lots (more than
>>> twenty) decimal digits. Such a use case is admittedly rare, but not
>>> illegitimate.
>
> No, that's always illegitimate (i.e. misleading).

It's the "always" I disagree with. Your comments are correct for the use cases you describe, but you are incorrectly implying other use cases don't exist.

> ...
>> ...
>>> Let's say that I have a floating-point number with ten binary digits, so
>>> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
>>> I can use such a number to represent 1 + 2^-10 exactly.
>>
>> Well, yes, you can if we allow you an implied eleventh bit, as most
>> significant, normalised to 1; thus your mantissa bit pattern becomes:
>>
>> 10000000001B
>>
>>> I can print this number out exactly in decimal using ten digits after
>>> the decimal point: 1.0009765625. That's legitimate, and potentially a
>>> good thing.
>
> But 10000000001B does NOT mean "1 + 2^-10".

In some specialized use cases, it means precisely this. In my program (assuming it's written correctly, etc.) the value means precisely what my program deems it to mean.

> It means "with the limited
> precision I have, I can't represent the actual value of real number R
> more accurately with any other bit pattern than this one".

In the common use cases, this is a very good way to understand floating-point numbers.

But, to reiterate my point, there are use cases where floating-point numbers are used to exactly represent a special subset of the real numbers. These floating-point numbers mean something exact, and therefore mean something different than your phrase, "with the limited precision I have."

> ...
>>> Sure, this is not a common use case, but I would prefer that the software
>>> let me do this, and leave it up to me to know what I'm doing.
>>
>> I would prefer that software didn't try to claim the indefensible.
>> ...

I have used floating-point numbers and FPUs in situations where the correctness of the calculations depended upon the floating-point numbers representing specific real numbers exactly. I had to be very careful doing this to get it right, but I was careful and the calculations were correct. What more can I say? Such use cases exist.

I have found it convenient (although hardly essential) to sometimes print those numbers out in decimal (because decimal representations are more familiar to me), and it was therefore helpful that my formatting routine didn't prevent me from doing this exactly. This is exactly my argument for why the behavior of gdtoa that you object to is good in certain specialized instances.

This may be a use case that you have never encountered, but nevertheless, there it is.

>> ...
>>
>> 1000000000100000000000000000000000000B
>>
>> Since you have only 11 bits of guaranteed binary precision available,
>> you are making a sweeping assumption about those extra 26 bits; (they
>> must ALL be zero). If you know for certain that this is so, then okay,

That's the point.

There are specialized cases where I know with mathematically provable certainty that those extra bits are all zero. Not because they're stored in memory (they're not), but because my calculation was specifically structured so that they had to be.

>> but since you don't have those bits available, you have no technically
>> defensible basis, in the general case, for making such an assumption;
>> your argument is flawed, and IMO technically invalid.

On the contrary, even though the bits are not available (i.e. are not stored in memory), it is still possible in my specific case (not your general case) to know (not assume) that these bits are zero. My technically defensible basis for knowing this is that my calculation is structured so that these bits being zero is an invariant of the calculation.

Look, I've never argued or implied that this sort of use case is in any way common, and I stated explicitly at the beginning of the discussion that this sort of use case is atypical. Your comments are quite correct for the way floating-point numbers are used the vast majority of the time. I'm just pointing out that there are some unusual use cases, and that I think it's good that gdtoa supports them.

> ...
> Chuck

Best regards.

K. Frank
From: Keith Marshall <keithmarshall@us...> - 2011-05-02 16:00:42

On 02/05/11 16:30, Charles Wilson wrote:
>> So, we would like sqrt (x) and pow (x, 0.5) to agree. We would
>> like compile-time and runtime evaluations to agree. We would like
>> cross-compilers and native compilers to agree.
>
> This is a binary bit pattern issue, not a gdtoa base-10 conversion
> issue.

Absolutely agree. Others sidetracked the issue into the realms of representable precision, a DIFFERENT issue around which confusion appears to abound. This confusion is compounded by the gdtoa stopping anomaly. Just wanted to clarify that. Enough said.

Regards,
Keith.
From: Charles Wilson <cwilso11@us...> - 2011-05-02 15:31:00

On 5/2/2011 2:34 AM, Keith Marshall wrote:
> On 01/05/11 14:36, K. Frank wrote:
>> [snip]
> My complaint is that, long after the last bit of precision has been
> interpreted, gdtoa (which is at the heart of printf()'s floating point
> output formatting) will continue to spew out extra decimal digits, based
> solely on the residual remainder from the preceding digit conversion,
> with an arbitrary number of extra zero bits appended. (Thus, gdtoa
> makes the unjustified and technically invalid assumption that the known
> bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply
> by appending zero bits in place of the less significant unknowns).
>
>> I don't agree with this.
>
> Well, you are entitled to your own opinion; we may agree to disagree.
>
>> In most cases, it is not helpful to print out a long double to more
>> than twenty decimal places, but sometimes it is. The point is that it
>> is not the case that floating-point numbers represent all real
>> numbers inexactly; rather, they represent only a subset of real
>> numbers exactly.

But the problem is, if I send you a floating point number that represents the specific real number which I have in mind, exactly, YOU don't know that. All you have is a particular floating point number that represents the range [value-ulps/2, value+ulps/2). You have no idea that I actually INTENDED to communicate EXACTLY "value" to you.

Ditto for the results of a long computation. I get back as the output some f.p. rep; I don't *know* that the actual result of the computation is exactly the value of that rep. All I know is that the result is not representable with more accuracy by any OTHER f.p. rep with the same precision.

>> If I happen to be representing a real number exactly
>> with a long double, I might wish to print it out with lots (more than
>> twenty) decimal digits. Such a use case is admittedly rare, but not
>> illegitimate.

No, that's always illegitimate (i.e. misleading). Imagine I wrote a scientific paper concerning an experiment with 17 trials, and my individual measurements had a precision of 3 sig. digits (all of the same order of magnitude). I can't say that the mean result had 20 sig. digits simply because I can't represent the result of dividing by 17 exactly using only 3 sig. digits. It's not accurate to extend the precision of the sum by appending zeros, simply so that I get more digits of "apparent precision" after dividing by 17. My paper would be rejected, and rightly so.

> This may be acceptable, provided you understand that those additional
> digits are of questionable accuracy. When you attempt to claim an
> accuracy which simply isn't available, then I would consider that it is
> most definitely illegitimate.
>
>> Let's say that I have a floating-point number with ten binary digits, so
>> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
>> I can use such a number to represent 1 + 2^-10 exactly.
>
> Well, yes, you can if we allow you an implied eleventh bit, as most
> significant, normalised to 1; thus your mantissa bit pattern becomes:
>
> 10000000001B
>
>> I can print this number out exactly in decimal using ten digits after
>> the decimal point: 1.0009765625. That's legitimate, and potentially a
>> good thing.

But 10000000001B does NOT mean "1 + 2^-10". It means "with the limited precision I have, I can't represent the actual value of real number R more accurately with any other bit pattern than this one".

> Sorry, but I couldn't disagree more. See, here you are falling into the
> gdtoa trap. You have an effective bit precision of eleven, which gives
> you:
>
> 11 / log2(10) = 3.311 decimal digits (i.e. 3 full digits)
>
>> If I limit myself to three digits after the decimal point I get 1.001
>> (rounding up).
>
> You can't even claim that. Once again, you are confusing decimal places
> and significant digits. You may claim AT MOST 3 significant digits, and
> those are 1.00; (significance begins at the leftmost nonzero digit
> overall, not just after the radix point).

See, here's the problem: 1.0009765625 means "I can distinguish between the following three numbers:

a) 1.0009765624
b) 1.0009765625
c) 1.0009765626

and the real number R is closer to (b) than to (a) or (c)." But with 10 bits, you CAN'T distinguish between those three numbers: the same 10-bit pattern must be used to represent all three. In fact, the best you can do with 10 bits is distinguish between the following three reps:

(a) 0.999
(b) 1.00
(c) 1.01

(Note, because of the normalization shift between (a) and (b/c), the accuracy *appears* to change in magnitude by a factor of 10, but that's simply an artifact of the base-10 representation; in base two the normalization shift effect is only a factor of 2, not 10.) Once again, the real number R is closer to (b) than to (a) or (c), and that's the best you can do with 10 bits (3 significant decimal digits).

>> Sure, this is not a common use case, but I would prefer that the software
>> let me do this, and leave it up to me to know what I'm doing.
>
> I would prefer that software didn't try to claim the indefensible. Your
> 1.0009765625 example represents 11 significant decimal digits of
> precision. To represent that in binary, you need a minimum of:
>
> 11 * log2(10) = 36.54 bits
>
> (which we must round up to 37 bits). While a mantissa of 10000000001B
> MAY equate exactly to your example value of 1 + 2^-10, it is guaranteed
> to be exact to 11 decimal digits of significance only if its normalised
> 37-bit representation is:
>
> 1000000000100000000000000000000000000B
>
> Since you have only 11 bits of guaranteed binary precision available,
> you are making a sweeping assumption about those extra 26 bits; (they
> must ALL be zero). If you know for certain that this is so, then okay,
> but since you don't have those bits available, you have no technically
> defensible basis, in the general case, for making such an assumption;
> your argument is flawed, and IMO technically invalid.

Agree. But this whole discussion is rather beside the point, I think, which started with a real discrepancy in the actual bit pattern produced by sqrt(x) and pow(x, 0.5). e.g.

> So, we would like sqrt (x) and pow (x, 0.5) to agree. We would like
> compile-time and runtime evaluations to agree. We would like
> cross-compilers and native compilers to agree.

This is a binary bit pattern issue, not a gdtoa base-10 conversion issue.

Chuck
From: Keith Marshall <keithmarshall@us...> - 2011-05-02 06:34:34

On 01/05/11 14:36, K. Frank wrote:
> Hello Keith!
>
> On Sun, May 1, 2011 at 3:43 AM, Keith Marshall
> <keithmarshall@...> wrote:
>> On 30/04/11 23:14, Kai Tietz wrote:
>>> long double (80-bit):
>>> digits for mantissa: 64
>>> ...
>>
>> Alas, this sort of mathematical ineptitude seems to be all too common
>> amongst the programming fraternity. It isn't helped by a flaw in the
>> gdtoa implementation common on BSD and GNU/Linux, (and also adopted by
>> Danny into mingwrt); it fails to implement the Steele and White stop
>> condition correctly, continuing to spew out garbage beyond the last bit
>> of available precision, creating an illusion of better accuracy than is
>> really achievable.
>
> If I understand your point here, your complaint is that gdtoa will happily
> generate more than twenty (or nineteen, if you will) decimal digits for a
> gcc long double. (I won't speak to the technical correctness of gdtoa;
> for all I know, it has bugs.) I think your point is that those "extra"
> decimal digits represent false (and misleading) precision.
>
> (Sorry if I've misinterpreted your comment.)

My complaint is that, long after the last bit of precision has been interpreted, gdtoa (which is at the heart of printf()'s floating point output formatting) will continue to spew out extra decimal digits, based solely on the residual remainder from the preceding digit conversion, with an arbitrary number of extra zero bits appended. (Thus, gdtoa makes the unjustified and technically invalid assumption that the known bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply by appending zero bits in place of the less significant unknowns).

> I don't agree with this.

Well, you are entitled to your own opinion; we may agree to disagree.

> In most cases, it is not helpful to print out a long double to more
> than twenty decimal places, but sometimes it is. The point is that it
> is not the case that floating-point numbers represent all real
> numbers inexactly; rather, they represent only a subset of real
> numbers exactly. If I happen to be representing a real number exactly
> with a long double, I might wish to print it out with lots (more than
> twenty) decimal digits. Such a use case is admittedly rare, but not
> illegitimate.

This may be acceptable, provided you understand that those additional digits are of questionable accuracy. When you attempt to claim an accuracy which simply isn't available, then I would consider that it is most definitely illegitimate.

> Let's say that I have a floating-point number with ten binary digits, so
> it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3).
> I can use such a number to represent 1 + 2^-10 exactly.

Well, yes, you can if we allow you an implied eleventh bit, as most significant, normalised to 1; thus your mantissa bit pattern becomes:

10000000001B

> I can print this number out exactly in decimal using ten digits after
> the decimal point: 1.0009765625. That's legitimate, and potentially a
> good thing.

Sorry, but I couldn't disagree more. See, here you are falling into the gdtoa trap. You have an effective bit precision of eleven, which gives you:

11 / log2(10) = 3.311 decimal digits (i.e. 3 full digits)

> If I limit myself to three digits after the decimal point I get 1.001
> (rounding up).

You can't even claim that. Once again, you are confusing decimal places and significant digits. You may claim AT MOST 3 significant digits, and those are 1.00; (significance begins at the leftmost nonzero digit overall, not just after the radix point).

> Sure, this is not a common use case, but I would prefer that the software
> let me do this, and leave it up to me to know what I'm doing.

I would prefer that software didn't try to claim the indefensible. Your 1.0009765625 example represents 11 significant decimal digits of precision. To represent that in binary, you need a minimum of:

11 * log2(10) = 36.54 bits

(which we must round up to 37 bits). While a mantissa of 10000000001B MAY equate exactly to your example value of 1 + 2^-10, it is guaranteed to be exact to 11 decimal digits of significance only if its normalised 37-bit representation is:

1000000000100000000000000000000000000B

Since you have only 11 bits of guaranteed binary precision available, you are making a sweeping assumption about those extra 26 bits; (they must ALL be zero). If you know for certain that this is so, then okay, but since you don't have those bits available, you have no technically defensible basis, in the general case, for making such an assumption; your argument is flawed, and IMO technically invalid.

Regards,
Keith.
From: Keith Marshall <keithmarshall@us...> - 2011-05-02 03:59:40

On 02/05/11 02:11, Charles Wilson wrote:
> On 5/1/2011 6:29 PM, LRN wrote:
>> There's a libgnurx library deep in mingw's sourceforge "Files" section.
>> It should fit any C project. Not sure about C++ though. You might have
>> to rebuild it from the source with the new compiler, since binaries are
>> a bit old. I've attached the buildscript that seems to be working.

It's also classified as a "User Contributed" package. My attitude to those has always been that I'm happy to host them in the SF package collection, but they will be maintained only to the extent that the original contributor (or any other volunteer) is prepared to support them; they are not officially supported by MinGW.org, and may be susceptible to bit-rot over time.

>> Also, the gnurx package has never been updated to work with the
>> mingw-get installer system (but that would be a nice 'beginner'
>> project if somebody wanted to get involved...)

Sure, but still subject to its "User Contributed" classification (unless that "somebody getting involved" is willing to commit to continuing long-term support, as a project member).

> A mingw-pcre package is doable, but... I have reservations about
> expanding the universe of separate packages "we" support officially at
> mingw.org.

I do too, but I wouldn't object to hosting this as another "User Contributed" package (should anyone care to contribute it), again subject to the support restrictions that entails.

>> Maybe we need a mingwports.sf.net project, like Yaakov S's
>> cygports.sf.net...

That's another possibility, for A. N. Other to set up and maintain.

Regards,
Keith.
From: Ocean <ocean@co...> - 2011-05-02 01:30:40

> -----Original Message-----
> From: Charles Wilson [mailto:cwilso11@...]
> Sent: Sunday, May 01, 2011 9:12 PM
> To: MinGW Users List
> Subject: Re: [Mingw-users] How do I install PCRE?
>
> On 5/1/2011 6:29 PM, LRN wrote:
> > There's a libgnurx library deep in mingw's sourceforge "Files" section.
> > It should fit any C project. Not sure about C++ though. You might have
> > to rebuild it from the source with the new compiler, since binaries are
> > a bit old. I've attached the buildscript that seems to be working.
>
> It's based on the GNU regex.c/regex.h implementation, which is POSIX
> REs, not Perl Compatible REs. Also, the gnurx package has never
> been updated to work with the mingw-get installer system (but that would
> be a nice 'beginner' project if somebody wanted to get involved...)
>
> A mingw-pcre package is doable, but... I have reservations about
> expanding the universe of separate packages "we" support officially at
> mingw.org. Maybe we need a mingwports.sf.net project, like Yaakov S's
> cygports.sf.net...
>
> Chuck

Even if it's not officially supported, is there a guide anywhere that can walk me through the process of how I can get it running for myself?
From: Jim Bell <Jim@JCBell.com> - 2011-05-02 01:18:03

On 1:59 PM, TSalm wrote:
> On 01/05/2011 19:10, Jim Bell wrote:
>> On 1:59 PM, LRN wrote:
>>> On 01.05.2011 15:36, Earnie wrote:
>>>> TSalm wrote:
>>>>> Hi,
>>>>>
>>>>> Is there a way to catch an exception and display its stack trace?
>>>>>
>>>> Look for DrMinGW and other JIT debuggers.
>>>>
>>> Also, http://code.google.com/p/backtracemingw/
>> Cross-platform, C++:
>> <http://lists.boost.org/Archives/boost/2010/10/172303.php>. (But I
>> haven't tried it.)
> I've tried the "boost" one, but it seems it doesn't display any
> backtrace with MinGW. Too bad :(

The author seems like a very sharp guy who's pretty interested in portability. You might collaborate on a MinGW port.
From: Charles Wilson <cwilso11@us...> - 2011-05-02 01:12:10

On 5/1/2011 6:29 PM, LRN wrote:
> There's a libgnurx library deep in mingw's sourceforge "Files" section.
> It should fit any C project. Not sure about C++ though. You might have
> to rebuild it from the source with the new compiler, since binaries are
> a bit old. I've attached the buildscript that seems to be working.

It's based on the GNU regex.c/regex.h implementation, which is POSIX REs, not Perl Compatible REs. Also, the gnurx package has never been updated to work with the mingw-get installer system (but that would be a nice 'beginner' project if somebody wanted to get involved...)

A mingw-pcre package is doable, but... I have reservations about expanding the universe of separate packages "we" support officially at mingw.org. Maybe we need a mingwports.sf.net project, like Yaakov S's cygports.sf.net...

Chuck