From: K. Frank <kfrank29.c@gm...>  2011-05-02 21:13:38

Hi Peter!

On Mon, May 2, 2011 at 3:42 PM, Peter Rockett <p.rockett@...> wrote:
> Forgive the top-post but this whole disagreement seems concerned with
> machine numbers,

I like your term "machine numbers." I hadn't heard it before. I believe
this is what I've been talking about.

> that small set of floats that can be represented exactly:
> the trailing zeroes beyond the least-significant mantissa bit are indeed
> zeroes

Yes, I'm talking about unusual use cases where the calculations work with
that special set of real numbers that can be represented exactly by
floating-point numbers. (Unless I misunderstand, these are your "machine
numbers.")

> Machine numbers have an important role in numerical algebra since
> they are zero-error numbers and hence probe the intrinsic accuracy of a
> routine rather than routine + error of input quantity convolved together
> in some way. But as far as I am aware, machine numbers are only ever used
> for testing numerical routines -- since you cannot say that a general
> float is a machine number or just the nearest approximation to something
> nearby on the real number line, I am puzzled by the points annotated at
> the bottom of this post. Or do these refer only to testing?

Testing of numerical routines -- that's a good point. This is a use case
I wasn't aware of before, but it certainly makes sense.

> P.
>
> On 02/05/2011 18:17, K. Frank wrote:
> Hello Charles and Keith!
> On Mon, May 2, 2011 at 11:30 AM, Charles Wilson wrote:
> On 5/2/2011 2:34 AM, Keith Marshall wrote:
> On 01/05/11 14:36, K. Frank wrote:
> [snip]
> ...
>>> but since you don't have those bits available, you have no technically
>>> defensible basis, in the general case, for making such an assumption;
>>> your argument is flawed, and IMO technically invalid.
>>
>> On the contrary, even though the bits are not available (i.e. are not
>> stored in memory), it is still possible in my specific case (not your
>> general case) to know (not assume) that these bits are zero. My
>> technically defensible basis for knowing this is that my calculation is
>> structured so that these bits being zero is an invariant of the
>> calculation.
>
> This puzzles me! Do you mean testing of numerical routines? Can't see
> how general calculations can make use of machine numbers...

I agree that general calculations don't make use of machine numbers. My
point is that there are certain specialized calculations that do. You
mentioned a use case new to me -- testing of numerical routines. Some
others are:

Interval arithmetic, in which a floating-point number is an exact
representation of a real number that is a strict upper (or lower) bound
on a quantity being studied.

As an optimization in arbitrary-precision arithmetic: There was a
symbolic algebra package (I believe that it was Mathematica, but it may
have been one of its predecessors) that used floating-point numbers to
exactly represent real numbers, and then cut over to a full-blown
arbitrary-precision package when the result of a calculation required it.
(As I recall, this optimization was only used in the first few versions
of the package. I believe that it was never bug-free, presumably because
it was too hard to detect the necessary cut-over point correctly while
still achieving useful gains in performance.)

The use case I dealt with directly was that of implementing a
random-number generator in the stack of an 80287 FPU. (I don't remember
for sure, but I believe it was a linear-congruential generator.) The
mathematics that gives you good-quality random numbers requires that the
calculations be carried out exactly, not just to some finite precision --
that is, that they be carried out solely within the set of machine
numbers.

This is all I've been saying -- that there are relatively rare, but
legitimate use cases where you use floating-point numbers as machine
numbers, that is, to represent specific real numbers exactly.
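To make the random-number-generator use case concrete, here is a small
sketch in Python of a linear-congruential generator run entirely in
double-precision floating point. The MINSTD parameters (a = 16807,
m = 2^31 - 1) are my own assumption for illustration -- the original
80287 routine isn't specified -- but they show why the calculation stays
exact: every intermediate value is a machine number.

```python
import math

# Hypothetical MINSTD parameters, chosen for illustration only.
A = 16807.0          # multiplier
M = 2147483647.0     # modulus, 2**31 - 1

def lcg_step(x):
    """One LCG step computed purely with doubles.

    A * x is at most 16807 * (2**31 - 2), roughly 3.6e13, which is well
    under 2**53, so the product is an exact machine number, and
    math.fmod of two exactly represented values returns the exact
    remainder.  Every value in the sequence is therefore an integer
    held exactly in a float -- no rounding error ever enters.
    """
    return math.fmod(A * x, M)

# Cross-check against the same recurrence in Python's exact integer
# arithmetic: the float sequence matches it term for term.
xf, xi = 1.0, 1
for _ in range(1000):
    xf = lcg_step(xf)
    xi = (16807 * xi) % 2147483647
    assert xf == float(xi)
```

If the multiplier or modulus were large enough that a*x overflowed the
53-bit mantissa, the product would round and the generator's mathematics
would silently break -- which is exactly why such code must stay inside
the machine numbers.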
It's hardly a big deal, but in these cases it's nice to be able to print
these numbers out exactly in decimal.

Best,
K. Frank
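P.S. A quick Python illustration of that last point (my own example, for
concreteness): every binary float has a finite exact decimal expansion,
so machine numbers really can be printed out exactly in decimal.

```python
from decimal import Decimal

# 0.5, 0.25 and 0.75 are machine numbers (sums of powers of two that
# fit in the mantissa), so arithmetic on them is exact:
assert 0.5 + 0.25 == 0.75

# Decimal(x) constructs the exact decimal expansion of the stored
# double.  A machine number prints back as exactly the value written:
assert Decimal(0.75) == Decimal("0.75")

# 0.1 is not a machine number: what is stored is the nearest double,
# whose exact decimal expansion is a long fraction close to, but not
# equal to, one tenth:
assert Decimal(0.1) != Decimal("0.1")
print(Decimal(0.1))   # the exact value the hardware actually holds
```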