From: Peter Rockett <p.rockett@sh...>  2011-05-02 19:42:09

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type"> </head> <body text="#000000" bgcolor="#ffffff"> Forgive the top-post, but this whole disagreement seems concerned with <i>machine numbers</i>, that small set of floats that can be represented exactly: the trailing zeroes beyond the least-significant mantissa bit are indeed zeroes. Machine numbers have an important role in numerical algebra since they are zero-error numbers and hence probe the intrinsic accuracy of a routine, rather than the routine and the error of the input quantity convolved together in some way. But as far as I am aware, machine numbers are only ever used for testing numerical routines -- since you cannot say whether a general float is a machine number or just the nearest approximation to something nearby on the real number line, I am puzzled by the points annotated at the bottom of this post. Or do these refer only to testing?<br> <br> P.<br> <br> <br> On 02/05/2011 18:17, K. Frank wrote: <blockquote cite="mid:BANLkTi=aMb+QqnLAG_FGPm0J8t6Zp3gcw@..." type="cite"> <pre wrap="">Hello Charles and Keith! On Mon, May 2, 2011 at 11:30 AM, Charles Wilson wrote: </pre> <blockquote type="cite"> <pre wrap="">On 5/2/2011 2:34 AM, Keith Marshall wrote: </pre> <blockquote type="cite"> <pre wrap="">On 01/05/11 14:36, K. Frank wrote: </pre> <blockquote type="cite"> <pre wrap="">[snip] </pre> </blockquote> <pre wrap="">My complaint is that, long after the last bit of precision has been interpreted, gdtoa (which is at the heart of printf()'s floating-point output formatting) will continue to spew out extra decimal digits, based solely on the residual remainder from the preceding digit conversion, with an arbitrary number of extra zero bits appended. 
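[Editor's note: the digit-spewing behaviour Keith describes is easy to demonstrate; the following Python sketch is not part of the original thread. It assumes only that Python floats are IEEE 754 doubles and that Python's float formatting, like printf's, will emit as many digits of the stored value as requested.]

```python
# 0.1 is not exactly representable in binary; the stored double is the
# nearest 53-bit value.  Asking for 55 digits keeps emitting decimal
# digits of that stored value far beyond its actual input precision.
print(f"{0.1:.55f}")
# 0.1000000000000000055511151231257827021181583404541015625
```

Every digit beyond roughly the 17th significant one describes the stored bit pattern rather than the value 0.1 that was typed in -- which is precisely the behaviour under dispute.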
(Thus, gdtoa makes the unjustified and technically invalid assumption that the known bit precision may be arbitrarily extended to ANY LENGTH AT ALL, simply by appending zero bits in place of the less significant unknowns). </pre> <blockquote type="cite"> <pre wrap="">I don't agree with this. </pre> </blockquote> <pre wrap=""> Well, you are entitled to your own opinion; we may agree to disagree. </pre> <blockquote type="cite"> <pre wrap="">In most cases, it is not helpful to print out a long double to more than twenty decimal places, but sometimes it is. The point is that it is not the case that floating-point numbers represent all real numbers inexactly; rather, they represent only a subset of real numbers exactly. </pre> </blockquote> </blockquote> <pre wrap=""> But the problem is, if I send you a floating-point number that represents the specific real number which I have in mind, exactly, YOU don't know that. </pre> </blockquote> <pre wrap=""> True. But this does not apply to all use cases. If I have generated a floating-point number, and I happen to know that it was generated in a way that it exactly represents a specific real number, I am fully allowed to make use of this information. (If you send me a floating-point number, and don't provide me the guarantee that it is being used to represent a specific real number exactly -- and this indeed is the most common, but not the only, use case -- then your comments are correct.) </pre> <blockquote type="cite"> <pre wrap="">All you have is a particular floating-point number that represents the range [value-ulps/2, value+ulps/2). You have no idea that I actually INTENDED to communicate EXACTLY "value" to you. </pre> </blockquote> <pre wrap=""> Yes, in your use case you did not intend to communicate an exact value to me. Therefore I should not imagine the floating-point number you sent me to be exact. And I shouldn't use gdtoa to print it out to fifty decimal places. </pre> <blockquote type="cite"> <pre wrap="">... 
</pre> <blockquote type="cite"> <blockquote type="cite"> <pre wrap="">If I happen to be representing a real number exactly with a long double, I might wish to print it out with many (more than twenty) decimal digits. Such a use case is admittedly rare, but not illegitimate. </pre> </blockquote> </blockquote> <pre wrap=""> No, that's always illegitimate (i.e. misleading). </pre> </blockquote> <pre wrap=""> It's the "always" I disagree with. Your comments are correct for the use cases you describe, but you are incorrectly implying that other use cases don't exist. </pre> <blockquote type="cite"> <pre wrap="">... </pre> <blockquote type="cite"> <pre wrap="">... </pre> <blockquote type="cite"> <pre wrap="">Let's say that I have a floating-point number with ten binary digits, so it gives about three decimal digits of precision (2^10 == 1024 ~= 10^3). I can use such a number to represent 1 + 2^-10 exactly. </pre> </blockquote> <pre wrap=""> Well, yes, you can if we allow you an implied eleventh bit, as most significant, normalised to 1; thus your mantissa bit pattern becomes: 10000000001B </pre> <blockquote type="cite"> <pre wrap="">I can print this number out exactly in decimal using ten digits after the decimal point: 1.0009765625. That's legitimate, and potentially a good thing. </pre> </blockquote> </blockquote> <pre wrap=""> But 10000000001B does NOT mean "1 + 2^-10". </pre> </blockquote> <pre wrap=""> In some specialized use cases, it means precisely this. In my program (assuming it's written correctly, etc.) the value means precisely what my program deems it to mean. </pre> <blockquote type="cite"> <pre wrap="">It means "with the limited precision I have, I can't represent the actual value of real number R more accurately with any other bit pattern than this one". </pre> </blockquote> <pre wrap=""> In the common use cases, this is a very good way to understand floating-point numbers. 
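[Editor's note: K. Frank's 1 + 2^-10 example can be checked directly; this Python sketch is not part of the original thread. It assumes Python floats are IEEE 754 doubles, which easily hold the ten fractional bits discussed.]

```python
from decimal import Decimal

# 1 + 2**-10 is a dyadic rational, so any binary floating-point format
# with at least ten fractional significand bits stores it exactly.
x = 1 + 2**-10

# Ten digits after the decimal point reproduce the value exactly,
# with no rounding residue.
print(f"{x:.10f}")   # 1.0009765625

# Decimal(x) expands the stored bit pattern exactly, confirming that
# nothing was lost between the real number and the float.
print(Decimal(x))    # 1.0009765625
```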
But, to reiterate my point, there are use cases where floating-point numbers are used to exactly represent a special subset of the real numbers. These floating-point numbers mean something exact, and therefore mean something different from your phrase, "with the limited precision I have." </pre> <blockquote type="cite"> <pre wrap="">... </pre> <blockquote type="cite"> <blockquote type="cite"> <pre wrap="">Sure, this is not a common use case, but I would prefer that the software let me do this, and leave it up to me to know what I'm doing. </pre> </blockquote> <pre wrap=""> I would prefer that software didn't try to claim the indefensible. ... </pre> </blockquote> </blockquote> <pre wrap=""> I have used floating-point numbers and FPUs in situations where the correctness of the calculations depended upon the floating-point numbers representing specific real numbers exactly. I had to be very careful doing this to get it right, but I was careful and the calculations were correct. What more can I say? Such use cases exist. I have found it convenient (although hardly essential) to sometimes print those numbers out in decimal (because decimal representations are more familiar to me), and it was therefore helpful that my formatting routine didn't prevent me from doing this exactly. This is exactly my argument for why the behavior of gdtoa that you object to is good in certain specialized instances. This may be a use case that you have never encountered, but nevertheless, there it is. </pre> <blockquote type="cite"> <blockquote type="cite"> <pre wrap="">... 1000000000100000000000000000000000000B Since you have only 11 bits of guaranteed binary precision available, you are making a sweeping assumption about those extra 26 bits (they must ALL be zero). If you know for certain that this is so, then okay, </pre> </blockquote> </blockquote> <pre wrap=""> That's the point. 
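[Editor's note: the kind of "specifically structured" calculation meant here can be sketched; this illustrative Python example is not part of the original thread. The structure is assumed, not taken from K. Frank's code: every value is a small multiple of 2^-10, so every sum is again a multiple of 2^-10 that fits comfortably in a double's 53-bit significand, and no operation rounds.]

```python
from fractions import Fraction

# Invariant: every value is k * 2**-10 with small integer k, so sums
# and differences remain exactly representable -- the low-order bits
# of the significand are provably zero throughout the calculation.
ulp = 2**-10
values = [3 * ulp, 7 * ulp, -2 * ulp]

total = sum(values)  # 8 * 2**-10 == 2**-7, computed with zero error

# Cross-check against exact rational arithmetic: the floating-point
# result equals the mathematically exact sum.
exact = sum(Fraction(v) for v in values)
print(total == exact)   # True
print(f"{total:.10f}")  # 0.0078125000
```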
There are specialized cases where I know with mathematically provable certainty that those extra bits are all zero. Not because they're stored in memory -- they're not -- but because my calculation was specifically structured so that they had to be. </pre> <blockquote type="cite"> <blockquote type="cite"> <pre wrap="">but since you don't have those bits available, you have no technically defensible basis, in the general case, for making such an assumption; your argument is flawed, and IMO technically invalid. </pre> </blockquote> </blockquote> <pre wrap=""> On the contrary, even though the bits are not available (i.e. are not stored in memory), it is still possible in my specific case (not your general case) to know (not assume) that these bits are zero. My technically defensible basis for knowing this is that my calculation is structured so that these bits being zero is an invariant of the calculation. </pre> </blockquote> <br> This puzzles me! Do you mean testing of numerical routines? I can't see how general calculations can make use of machine numbers...<br> <br> <blockquote cite="mid:BANLkTi=aMb+QqnLAG_FGPm0J8t6Zp3gcw@..." type="cite"> <pre wrap=""> Look, I've never argued or implied that this sort of use case is in any way common, and I stated explicitly at the beginning of the discussion that this sort of use case is atypical. Your comments are quite correct for the way floating-point numbers are used the vast majority of the time. I'm just pointing out that there are some unusual use cases, and that I think it's good that gdtoa supports them. </pre> <blockquote type="cite"> <pre wrap="">... Chuck </pre> </blockquote> <pre wrap=""> Best regards. K. Frank </pre> </blockquote> </body> </html> 