From: michal kabata <michal@sf...>  2005-07-23 00:08:42

Hi,

thanks for all the answers, especially to Pascal Bourguignon and Matthias Buelow. Just to straighten things out a bit -- for me it's not a matter of the FPU and the way floats are calculated; I've been trying to fight my way around that for a few years. It's rather a matter of "I was expecting more" from Lisp, and seeing the same bug as in JavaScript is... disappointing, in a way. I guess I'll have to try using rationals somehow. And, just maybe, it would be a nice move to switch to doubles and arrange the readers the other way round. I guess it would require some survey of what users want more by default -- precision or speed.

As I wrote, I'll try to think of a way around it, and I hope I'll come up with something (like using rationals if the output would have more precision than the input justifies?), and I'll be happy to submit it for you to criticise or use.

I keep wondering why I never saw it before...

Best regards
Mike
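[Editor's note: Mike's rationals idea can be sketched concretely. The snippet below uses Python's `fractions.Fraction` purely as a stand-in for Lisp rationals -- it is an illustration added here, not code from the thread.]

```python
from fractions import Fraction

# Exact rational arithmetic: 11/10 - 9/10 is exactly 1/5, no rounding.
diff = Fraction(11, 10) - Fraction(9, 10)
print(diff)              # 1/5

# The same subtraction with IEEE-754 doubles misses 0.2.
print(1.1 - 0.9 == 0.2)  # False
```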
From: Marco Antoniotti <marcoxa@cs...>  2005-07-23 17:27:48

On Jul 22, 2005, at 8:06 PM, michal kabata wrote:

> Hi,
>
> thanks for all the answers, especially to Pascal Bourguignon and
> Matthias Buelow. Just to straighten things out a bit -- for me it's not
> the matter of the FPU and the way floats are calculated. I've been trying
> to fight my way around it for a few years. It's rather a matter of "I
> was expecting more" from Lisp, and seeing the same bug as in JavaScript
> is... disappointing in a way. I guess I'll have to try using rationals
> somehow. And, just maybe, it would be a nice move to switch to
> doubles, and arrange readers the other way round. I guess it would require
> some survey, what users want more by default -- precision or speed.

I think your expectations are too high. You cannot work around this as long as you eventually go down to whichever floating point treatment your CPU is offering you.

> As I wrote, I'll try to think of a way around it, and I hope I'll come
> up with something (like using rationals if the output has bigger precision
> than it should, based on the input?), and I'll be happy to submit it for
> you to criticise or use.

Rationals may pretty much do it for you, but then again, do not expect performance. Rational computations often get into bignum territory pretty fast. At that point you are in pure software territory.

Cheers
--
Marco Antoniotti                    http://bioinformatics.nyu.edu
NYU Courant Bioinformatics Group    tel. +1 - 212 - 998 3488
715 Broadway 10th FL                fax. +1 - 212 - 998 3484
New York, NY, 10003, U.S.A.
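[Editor's note: Marco's warning about bignum territory is easy to demonstrate. This sketch is ours, not Marco's: it iterates a simple nonlinear map with exact rationals and watches the denominator leave machine-word range almost immediately.]

```python
from fractions import Fraction

# Iterate x <- r*x*(1-x) exactly: the denominator roughly squares on
# every step, so after a handful of iterations the arithmetic is pure
# software bignum work, exactly as Marco says.
x = Fraction(1, 3)
r = Fraction(7, 2)
for _ in range(6):
    x = r * x * (1 - x)
print(x.denominator.bit_length())  # ~100 bits after only six steps
```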
From: michal kabata <michal@sf...>  2005-07-23 22:05:14

Marco Antoniotti wrote:
>
> I think your expectations are too high. You cannot work around this as
> long as you eventually go down to whichever floating point treatment
> your CPU is offering you.
>
Well, I think my expectations are realistic (rationals can do it for me, as you said yourself), but my point of view may be a bit different. I find the floating point design a faulty idea, so I usually work my way around it by avoiding floats. Designing something that must approximate values like 1/3 or 1/5 is a bit sick, in my opinion. The bug related to my problem only reinforces my opinion about floats.

Btw, if anyone would be interested in why the cursed 1.1 - 0.9 != 0.2, I've put two files on the web that may explain something: http://ai.noip.org/floats and http://ai.noip.org/floats1. The problem seems to be in the calculation of the exponent, but that's only a first impression I got. This is of course more general than only this one expression.

>
> Rationals may pretty much do it for you, but then again, do not expect
> performance. Rational computations often get into bignum territory
> pretty fast. At that point you are in pure software territory.
>
I sure hope so, and performance is not what I need, luckily. If I need performance I still tend to write in pure C. I write mostly AI-related stuff, and rather the kind that can run even a few hours longer without causing me any trouble. Therefore I don't need high performance, but I do need reliability, and such errors are something I can't allow.

Best regards,
Mike
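[Editor's note: the cursed expression behaves the same way in any IEEE-754 implementation; here it is in Python doubles, as a neutral illustration unrelated to the files Mike links.]

```python
# 1.1 and 0.9 denote the nearest binary fractions, not the decimal
# values themselves, so the difference lands a hair above 0.2.
a = 1.1 - 0.9
print(a)                     # 0.20000000000000007
print(a == 0.2)              # False
print(abs(a - 0.2) < 1e-15)  # True: the error is ~7e-17, not a Lisp bug
```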
From: Pascal Bourguignon <pjb@in...>  2005-07-23 23:35:25

michal kabata writes:

> Marco Antoniotti wrote:
> >
> > I think your expectations are too high. You cannot work around this as
> > long as you eventually go down to whichever floating point treatment
> > your CPU is offering you.
> >
> Well, I think my expectations are realistic (rationals can do it for me,
> as you said it yourself), but my point of view may be a bit different.
> I find floating point design a faulty idea, so I usually work my way
> around by avoiding floats. Designing something that must approximate
> values like 1/3 or 1/5 is a bit sick in my opinion.
> The bug related to my problem only reinforces my opinion about floats.
> Btw, if anyone would be interested why the cursed 1.1 - 0.9 != 0.2, I've
> put two files on the web, that may explain something:
> http://ai.noip.org/floats and http://ai.noip.org/floats1. The problem
> seems to be in the calculation of the exponent, but that's only a first
> impression I got. This is of course more general than only this one
> expression.

It's not so much the exponent that's the problem as the BASE of the mantissa and the exponent.

First, mathematically and scientifically, there is no problem in 1.1E0 - 0.9E0 giving 0.20000005E0. These are floating point numbers, and they're known not to behave like mathematical real numbers. Scientific numerical calculations with floating point numbers take these phenomena into account. You're taught in physics not to keep more than the number of significant digits, so if the input is given with 1 digit after the comma, we compute the formula and keep only one digit after the comma; thus we know that 1.1 - 0.9, computed as 1.1E0 - 0.9E0 = 0.20000005E0, gives 0.2.

Now, for financial computing and other non-scientific needs, two ways are implemented to achieve the expected results. The first is to automate the above consideration.
We compute with the 6 significant digits of a float, but at the end we round the result:

    (* (round (- 1.1 0.9) 0.1) 0.1) --> 0.2

This is what is done by pocket calculators, which usually compute with 13 significant digits but display only 11 significant digits.

The alternative, which was often implemented in early computers, is to use floating point numbers with a decimal base instead of a binary base. Then there is no approximation from base conversion. For scientific computing this makes no difference -- there still remain the normal computation errors of all floating points (that's why it has not been kept; binary numbers are more efficient in a computer), but for simple business computing it gives seemingly exact results.

> > Rationals may pretty much do it for you, but then again, do not expect
> > performance. Rational computations often get into bignum territory
> > pretty fast. At that point you are in pure software territory.
> >
> I sure hope so, and performance is not what I need, luckily. If I need
> performance I still tend to write in pure C.
> I write mostly AI-related stuff, and rather the kind that can run even
> a few hours longer without causing me any trouble. Therefore I don't need
> high performance, but I need reliability, and such errors are something I
> can't allow.

I bet for your problem the best will be to keep track of the precision of your numbers and to round the results. It's not too hard for simple operations. If you have long chains of computation, then you must analyze the error propagation, like for any other numerical algorithm.

The alternative could be to implement decimal floating point numbers, but here you're on your own: all contemporary floating point hardware is binary based. (But most CPUs still have a BCD add and subtract.)

--
__Pascal Bourguignon__                     http://www.informatimago.com/
Nobody can fix the economy. Nobody can be trusted with their finger on the button. Nobody's perfect. VOTE FOR NOBODY.
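[Editor's note: both of Pascal's remedies -- rounding back to the input's precision, and computing in a decimal base -- can be tried in one sitting. The sketch below is ours and uses Python rather than Lisp.]

```python
from decimal import Decimal

# 1. Compute in binary floats, then round back to one decimal digit.
print(round(1.1 - 0.9, 1) == 0.2)        # True

# 2. Compute in a decimal base, where 1.1 and 0.9 are exact.
print(Decimal("1.1") - Decimal("0.9"))   # 0.2
```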
From: Pascal Bourguignon <pjb@in...>  2005-07-23 23:06:20

michal kabata writes:

> I keep wondering why I never saw it before...

Perhaps because you're not a "Computer Scientist"? ;)

I should have mentioned:
"What Every Computer Scientist Should Know About Floating-Point Arithmetic"
http://docs.sun.com/source/806-3568/ncg_goldberg.html
too.

--
__Pascal Bourguignon__                     http://www.informatimago.com/
Nobody can fix the economy. Nobody can be trusted with their finger on the button. Nobody's perfect. VOTE FOR NOBODY.
From: michal kabata <michal@sf...>  2005-07-24 08:04:05

Pascal Bourguignon wrote:
> michal kabata writes:
>
>> I keep wondering why I never saw it before...
>
> Perhaps because you're not a "Computer Scientist"? ;)
>
Guess again. Anyway, I meant Lisp, and possibly the correct guess should be that I have too little experience with Lisp and too high an opinion of those who write it.

> I should have mentioned:
> "What Every Computer Scientist Should Know About Floating-Point Arithmetic"
> http://docs.sun.com/source/806-3568/ncg_goldberg.html
> too.
>
Oh please...

Anyway, I meant to end this discussion, and I hope this is the last try. Thanks for all the workaround ideas. I wrote the first email with the silly hope that you just might be interested in the fact that clisp is _making an error_ (like most other languages, OK -- but is that an excuse?). I guess OpenSource on one side and pride on the other. I can understand your way of thinking: someone who reports such a trivial bug must be an idiot whom you can insult all the way. I would be very happy to report more serious bugs in future, but it so happened that I encountered this one, and in my opinion, _in Lisp_, this is a bug. Remember all those fantastic words in most books about Lisp, that "Lisp is always doing the right thing with numbers"? Well, it does not in this case.

BTW:
- mathematically, 1.1 - 0.9 is not greater or less than 0.2, just equal, so don't say that such a computation is mathematically correct. And if something behaves not like a mathematical number, then how could it be mathematically correct?
- base is very much related to "calculation of the exponent"; that's why I wrote about the calculation, not the exponent
- if you are taught in physics not to do it, then why does clisp do it? For basic operations (-, +, *) I can't see the need to keep garbage after the digits you expect in the result

So again, for me this is EOT, since I'm beginning to get more insults than substantive answers, and I don't see anyone interested in fixing this.

Best regards,
Mike
From: Pascal Bourguignon <pjb@in...>  2005-07-24 09:13:33

michal kabata writes:

> Pascal Bourguignon wrote:
> > michal kabata writes:
> >
> >> I keep wondering why I never saw it before...
> >
> > Perhaps because you're not a "Computer Scientist"? ;)
This is a smiley --------------------------------->^^^
> Guess again. Anyway, I meant Lisp, and possibly the correct guess should be
> that I have too little experience with Lisp and too high an opinion of
> those who write it.
>
> > I should have mentioned:
> > "What Every Computer Scientist Should Know About Floating-Point Arithmetic"
> > http://docs.sun.com/source/806-3568/ncg_goldberg.html
> > too.
> >
> Oh please...
>
> Anyway, I meant to end this discussion and I hope this is the last try.
> Thanks for all the workaround ideas. I wrote the first email with the silly
> hope that you just might be interested in the fact that clisp is _making
> an error_ (like most other languages, OK -- but is that an excuse?). I guess
> OpenSource on one side and pride on the other. I can understand your way
> of thinking: someone who reports such a trivial bug must be an idiot whom
> you can insult all the way. I would be very happy to report more
> serious bugs in future, but it so happened that I encountered this one, and
> in my opinion, _in Lisp_, this is a bug. Remember all those fantastic
> words in most books about Lisp, that "Lisp is always doing the right
> thing with numbers"? Well, it does not in this case.
>
> BTW:
> - mathematically, 1.1 - 0.9 is not greater or less than 0.2, just equal, so
>   don't say that such a computation is mathematically correct. And if
>   something behaves not like a mathematical number, then how could it be
>   mathematically correct?

You don't understand what I mean because you don't take the notation into account. In maths, when you write 1.1 you may mean one of two things: either the element of Q=Z/Z*, or the element of R. Happily, there are isomorphisms between Z/Z* and a subset of R, to which 1.1 happily belongs, and therefore people often forget which one they refer to when they write 1.1.
Now, in IEEE-754 arithmetic, there is no 1.1 number. This is the reason why (- 1.1 0.9) doesn't give the result you want. This is due to the fact that the set of numbers in floating point data types is generated from base two, and you've noted a number in base ten.

(Keep in mind when reading the following sums that only the left hand side of ~= is exact mathematics. What's on the right hand side is an approximation in base ten. The last value of the right hand side is computed from the rational on the left hand side, not as the sum of the numbers above it, for those numbers are truncated by the lack of precision of double-float!)

[48]> (compute-bin-float "1.00011001100110011001101")
1.00011001100110011001101
= 1
+ 0 * 2^ -1 = 0         ~= 0.00000000000000000
+ 0 * 2^ -2 = 0         ~= 0.00000000000000000
+ 0 * 2^ -3 = 0         ~= 0.00000000000000000
+ 1 * 2^ -4 = 1/16      ~= 0.06250000000000000
+ 1 * 2^ -5 = 1/32      ~= 0.03125000000000000
+ 0 * 2^ -6 = 0         ~= 0.00000000000000000
+ 0 * 2^ -7 = 0         ~= 0.00000000000000000
+ 1 * 2^ -8 = 1/256     ~= 0.00390625000000000
+ 1 * 2^ -9 = 1/512     ~= 0.00195312500000000
+ 0 * 2^-10 = 0         ~= 0.00000000000000000
+ 0 * 2^-11 = 0         ~= 0.00000000000000000
+ 1 * 2^-12 = 1/4096    ~= 0.00024414062500000
+ 1 * 2^-13 = 1/8192    ~= 0.00012207031250000
+ 0 * 2^-14 = 0         ~= 0.00000000000000000
+ 0 * 2^-15 = 0         ~= 0.00000000000000000
+ 1 * 2^-16 = 1/65536   ~= 0.00001525878906250
+ 1 * 2^-17 = 1/131072  ~= 0.00000762939453125
+ 0 * 2^-18 = 0         ~= 0.00000000000000000
+ 0 * 2^-19 = 0         ~= 0.00000000000000000
+ 1 * 2^-20 = 1/1048576 ~= 0.00000095367431641
+ 1 * 2^-21 = 1/2097152 ~= 0.00000047683715820
+ 0 * 2^-22 = 0         ~= 0.00000000000000000
+ 1 * 2^-23 = 1/8388608 ~= 0.00000011920928955
= 9227469/8388608       ~= 1.10000002384185800
9227469/8388608
[49]> (compute-bin-float "1.00011001100110011001100")
1.00011001100110011001100
= 1
+ 0 * 2^ -1 = 0         ~= 0.00000000000000000
+ 0 * 2^ -2 = 0         ~= 0.00000000000000000
+ 0 * 2^ -3 = 0         ~= 0.00000000000000000
+ 1 * 2^ -4 = 1/16      ~= 0.06250000000000000
+ 1 * 2^ -5 = 1/32      ~= 0.03125000000000000
+ 0 * 2^ -6 = 0         ~= 0.00000000000000000
+ 0 * 2^ -7 = 0         ~= 0.00000000000000000
+ 1 * 2^ -8 = 1/256     ~= 0.00390625000000000
+ 1 * 2^ -9 = 1/512     ~= 0.00195312500000000
+ 0 * 2^-10 = 0         ~= 0.00000000000000000
+ 0 * 2^-11 = 0         ~= 0.00000000000000000
+ 1 * 2^-12 = 1/4096    ~= 0.00024414062500000
+ 1 * 2^-13 = 1/8192    ~= 0.00012207031250000
+ 0 * 2^-14 = 0         ~= 0.00000000000000000
+ 0 * 2^-15 = 0         ~= 0.00000000000000000
+ 1 * 2^-16 = 1/65536   ~= 0.00001525878906250
+ 1 * 2^-17 = 1/131072  ~= 0.00000762939453125
+ 0 * 2^-18 = 0         ~= 0.00000000000000000
+ 0 * 2^-19 = 0         ~= 0.00000000000000000
+ 1 * 2^-20 = 1/1048576 ~= 0.00000095367431641
+ 1 * 2^-21 = 1/2097152 ~= 0.00000047683715820
+ 0 * 2^-22 = 0         ~= 0.00000000000000000
+ 0 * 2^-23 = 0         ~= 0.00000000000000000
= 2306867/2097152       ~= 1.09999990463256840
2306867/2097152
[50]> (compute-bin-float "0.11100110011001100110011")
0.11100110011001100110011
= 0
+ 1 * 2^ -1 = 1/2       ~= 0.50000000000000000
+ 1 * 2^ -2 = 1/4       ~= 0.25000000000000000
+ 1 * 2^ -3 = 1/8       ~= 0.12500000000000000
+ 0 * 2^ -4 = 0         ~= 0.00000000000000000
+ 0 * 2^ -5 = 0         ~= 0.00000000000000000
+ 1 * 2^ -6 = 1/64      ~= 0.01562500000000000
+ 1 * 2^ -7 = 1/128     ~= 0.00781250000000000
+ 0 * 2^ -8 = 0         ~= 0.00000000000000000
+ 0 * 2^ -9 = 0         ~= 0.00000000000000000
+ 1 * 2^-10 = 1/1024    ~= 0.00097656250000000
+ 1 * 2^-11 = 1/2048    ~= 0.00048828125000000
+ 0 * 2^-12 = 0         ~= 0.00000000000000000
+ 0 * 2^-13 = 0         ~= 0.00000000000000000
+ 1 * 2^-14 = 1/16384   ~= 0.00006103515625000
+ 1 * 2^-15 = 1/32768   ~= 0.00003051757812500
+ 0 * 2^-16 = 0         ~= 0.00000000000000000
+ 0 * 2^-17 = 0         ~= 0.00000000000000000
+ 1 * 2^-18 = 1/262144  ~= 0.00000381469726562
+ 1 * 2^-19 = 1/524288  ~= 0.00000190734863281
+ 0 * 2^-20 = 0         ~= 0.00000000000000000
+ 0 * 2^-21 = 0         ~= 0.00000000000000000
+ 1 * 2^-22 = 1/4194304 ~= 0.00000023841857910
+ 1 * 2^-23 = 1/8388608 ~= 0.00000011920928955
= 7549747/8388608       ~= 0.89999997615814210
7549747/8388608
[51]> (compute-bin-float "0.11100110011001100110100")
0.11100110011001100110100
= 0
+ 1 * 2^ -1 = 1/2       ~= 0.50000000000000000
+ 1 * 2^ -2 = 1/4       ~= 0.25000000000000000
+ 1 * 2^ -3 = 1/8       ~= 0.12500000000000000
+ 0 * 2^ -4 = 0         ~= 0.00000000000000000
+ 0 * 2^ -5 = 0         ~= 0.00000000000000000
+ 1 * 2^ -6 = 1/64      ~= 0.01562500000000000
+ 1 * 2^ -7 = 1/128     ~= 0.00781250000000000
+ 0 * 2^ -8 = 0         ~= 0.00000000000000000
+ 0 * 2^ -9 = 0         ~= 0.00000000000000000
+ 1 * 2^-10 = 1/1024    ~= 0.00097656250000000
+ 1 * 2^-11 = 1/2048    ~= 0.00048828125000000
+ 0 * 2^-12 = 0         ~= 0.00000000000000000
+ 0 * 2^-13 = 0         ~= 0.00000000000000000
+ 1 * 2^-14 = 1/16384   ~= 0.00006103515625000
+ 1 * 2^-15 = 1/32768   ~= 0.00003051757812500
+ 0 * 2^-16 = 0         ~= 0.00000000000000000
+ 0 * 2^-17 = 0         ~= 0.00000000000000000
+ 1 * 2^-18 = 1/262144  ~= 0.00000381469726562
+ 1 * 2^-19 = 1/524288  ~= 0.00000190734863281
+ 0 * 2^-20 = 0         ~= 0.00000000000000000
+ 1 * 2^-21 = 1/2097152 ~= 0.00000047683715820
+ 0 * 2^-22 = 0         ~= 0.00000000000000000
+ 0 * 2^-23 = 0         ~= 0.00000000000000000
= 1887437/2097152       ~= 0.90000009536743160
1887437/2097152

So why don't you ask:

[52]> (- 1.10000002384185800L0 0.89999997615814210L0)
0.20000004768371590006L0
[53]> (- 1.09999990463256840L0 0.90000009536743160L0)
0.19999980926513680001L0

instead of (- 1.1 0.9)?

I hear you say that what you want is 1.1, but the problem is that there's no way of representing 1.1 with IEEE-754 floating point numbers! The closest you can come is one of these two numbers, which I represent here in base ten: 1.10000002384185800L0 or 1.09999990463256840L0. Since 1.10000002384185800L0 is closer, you should not be surprised that the conversion routine chooses this number to represent 1.1L0. And since 0.89999997615814210 is closer to 0.9L0, it's the one chosen. Therefore you get the exact result 0.20000004768371590006L0 (or 0.20000005 when working with single floats).

> - base is very much related to "calculation of the exponent"; that's why
>   I wrote about the calculation, not the exponent
> - if you are taught in physics not to do it, then why does clisp do it?

clisp does it because it's what is specified by Common Lisp.
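[Editor's note: the nearest-representable values derived by hand above can also be read off mechanically. This Python round-trip through 32-bit floats is our sketch; it recovers the same two numbers.]

```python
import struct

def f32(x):
    # Round x to the nearest IEEE-754 single-float, then back to a double.
    return struct.unpack("f", struct.pack("f", x))[0]

print(f32(1.1))             # 1.100000023841858
print(f32(0.9))             # 0.8999999761581421
print(f32(1.1) - f32(0.9))  # about 0.20000004768371582, not 0.2
```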
Common Lisp does it because the people who designed it observed that implementors all implemented the notation 1.1 to mean the floating point number whose value is closest to the mathematical real 1.1. Implementors all did this because their customers all asked for it -- people with serious budgets doing serious work on computer workstations that cost, at the time, around $200,000.00 or more. And the reason why computers don't compute as we do is that it's not efficient to do it as we do. If you need it, you can implement in a computer the arithmetic algorithms we use. But you'll run thousands or millions of times slower than an IEEE floating point unit. It may not matter for you, but for those serious people who did nuclear explosion simulations or who programmed missile autopilot CPUs, it was important to get the results fast.

> for basic operations (-, +, *) I can't see the need to keep garbage after
> the digits you expect in the result
>
> So again, for me this is EOT. Since I'm beginning to get more insults
> than substantive answers and I don't see anyone interested in fixing this.

Feel less insulted and try to learn more.

(defmacro wsiosbp (&body body)
  (let ((vpack (gensym)))
    `(let ((,vpack *package*))
       (with-standard-io-syntax
         (let ((*package* ,vpack))
           ,@body)))))

(defmacro gen-ieee-encoding (name type exponent-bits mantissa-bits)
  ;; Thanks to ivan4th (~ivan_iv@...) for correcting an off-by-1
  (wsiosbp
   `(progn
      (defun ,(intern (format nil "~A-TO-IEEE-754" name)) (float)
        (multiple-value-bind (mantissa exponent sign)
            (integer-decode-float float)
          (dpb (if (minusp sign) 1 0)
               (byte 1 ,(1- (+ exponent-bits mantissa-bits)))
               (dpb (+ ,(+ (- (expt 2 (1- exponent-bits)) 2) mantissa-bits)
                       exponent)
                    (byte ,exponent-bits ,(1- mantissa-bits))
                    (ldb (byte ,(1- mantissa-bits) 0) mantissa)))))
      (defun ,(intern (format nil "IEEE-754-TO-~A" name)) (ieee)
        (let ((aval (scale-float
                     (coerce (dpb 1 (byte 1 ,(1- mantissa-bits))
                                  (ldb (byte ,(1- mantissa-bits) 0) ieee))
                             ,type)
                     (- (ldb (byte ,exponent-bits ,(1- mantissa-bits)) ieee)
                        ,(1- (expt 2 (1- exponent-bits)))
                        ,(1- mantissa-bits)))))
          (if (zerop (ldb (byte 1 ,(1- (+ exponent-bits mantissa-bits))) ieee))
              aval
              (- aval)))))))

(gen-ieee-encoding float-32 'single-float  8 24)
(gen-ieee-encoding float-64 'double-float 11 53)

(defun test-ieee-read-double ()
  (with-open-file (in "value.ieee-754-double"
                      :direction :input :element-type '(unsigned-byte 8))
    (loop while (< (file-position in) (file-length in))
          do (loop repeat 8
                   for i = 1 then (* i 256)
                   for v = (read-byte in) then (+ v (* i (read-byte in)))
                   finally (progn (let ((*print-base* 16)) (princ v))
                                  (princ " ")
                                  (princ (ieee-754-to-float-64 v))
                                  (terpri))))))

(defun test-ieee-read-single ()
  (with-open-file (in "value.ieee-754-single"
                      :direction :input :element-type '(unsigned-byte 8))
    (loop while (< (file-position in) (file-length in))
          do (loop repeat 4
                   for i = 1 then (* i 256)
                   for v = (read-byte in) then (+ v (* i (read-byte in)))
                   finally (progn (let ((*print-base* 16)) (princ v))
                                  (princ " ")
                                  (princ (ieee-754-to-float-32 v))
                                  (terpri))))))

(defun test-single-to-ieee (&rest args)
  (dolist (arg args)
    (format t "~16,8R ~A~%"
            (float-32-to-ieee-754 (coerce arg 'single-float)) arg)))

(defun test-double-to-ieee (&rest args)
  (dolist (arg args)
    (format t "~16,16R ~A~%"
            (float-64-to-ieee-754 (coerce arg 'double-float)) arg)))

(defun convert-to-binary (number significant-digits)
  (multiple-value-bind (int dec) (truncate number)
    (format nil "~2R.~{~:[0~;1~]~}" int
            (loop with digits = '()
                  repeat significant-digits
                  do (multiple-value-bind (digit rest) (truncate (* 2 dec))
                       (push (oddp digit) digits)
                       (setf dec rest))
                  finally (return (nreverse digits))))))

(defun compute-bin-float (value)
  "1.00011001100110011001101"
  (flet ((rat-to-bin (n)
           (/ (coerce (numerator n) 'double-float)
              (coerce (denominator n) 'double-float))))
    (let ((dot (position (character ".") value)))
      (if dot
          (loop with integer-part = (parse-integer value :radix 2 :end dot
                                                   :junk-allowed nil)
                with decimal-part = 0
                initially (format t "~A ~%= ~A~%" value integer-part)
                for exponent from -1 by -1
                for digit across (subseq value (1+ dot))
                for weight = (/ (digit-char-p digit) (expt 2 (- exponent)))
                do (format t "+ ~1A * 2^~3D = ~30A ~~= ~24,17F~%"
                           digit exponent weight (rat-to-bin weight))
                   (incf decimal-part weight)
                finally (format t "= ~42A ~~= ~24,17F"
                                (+ integer-part decimal-part)
                                (rat-to-bin (+ integer-part decimal-part)))
                        (return (+ integer-part decimal-part)))
          (let ((integer-part (parse-integer value :radix 2
                                             :junk-allowed nil)))
            (format t "~A is a binary integer worth ~A" value integer-part)
            integer-part)))))

(convert-to-binary 9/10)
(compute-bin-float "1.00011001100110011001101")

--
__Pascal Bourguignon__                     http://www.informatimago.com/
You never feed me. Perhaps I'll sleep on your face. That will sure show you.
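[Editor's note: the gen-ieee-encoding machinery above has a much shorter, far less instructive counterpart in languages with raw byte access. This Python sketch is ours (function names hypothetical); it does the same single-float bit packing via the struct module.]

```python
import struct

def float32_to_ieee754(x):
    # Reinterpret the 32-bit single-float encoding of x as an integer.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def ieee754_to_float32(bits):
    # Inverse: reinterpret an integer bit pattern as a single-float.
    return struct.unpack("<f", struct.pack("<I", bits))[0]

bits = float32_to_ieee754(1.1)
print(hex(bits))                 # 0x3f8ccccd
print(ieee754_to_float32(bits))  # 1.100000023841858
```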
From: Devon Sean McCullough <LispHacker@Jovi.Net>  2005-07-24 16:53:41

    From: michal kabata <michal@...>
    Date: Sun, 24 Jul 2005 10:02:00 +0200
    ... clisp is _making an error_ ...

For the record, it is no error. Lisp offers access to the bare floating point hardware and CLISP keeps that promise.

Peace
--Devon

PS: Rationals are there; the choice is yours.

PPS: Yes, I repeat up front what PJB said in the middle of a long message.

PPPS: I am curious -- does anyone have a numeric "measurement" datatype which preserves error information, e.g. a value with accuracy, increasing and decreasing precision as needed? I've been expecting it in calculators since the HP-35, but these days I calculate rarely.

PPPPS: More elaborately, one might represent a measurement as a Gaussian whose shape would likely become quite nightmarish after only a few computations, but we have the cycles! Perhaps such a proposal is best documented on the first of April?
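[Editor's note: Devon's wished-for "measurement" datatype is easy to prototype naively. The toy class below is entirely ours, handles only subtraction, and carries an absolute error bound alongside the value.]

```python
class Measure:
    """A value with an absolute error bound (toy sketch)."""
    def __init__(self, value, err):
        self.value, self.err = value, err

    def __sub__(self, other):
        # Under subtraction, absolute error bounds add.
        return Measure(self.value - other.value, self.err + other.err)

    def __repr__(self):
        return f"{self.value} +/- {self.err}"

print(Measure(1.1, 0.05) - Measure(0.9, 0.05))  # 0.20000000000000007 +/- 0.1
```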
From: Pascal Bourguignon <pjb@in...>  2005-07-25 01:18:17

Sorry, these examples are wrong. Pascal Bourguignon writes:

> 1.1     --> (11,-1)
> 123.456 --> (123456,-3)
>
> (+ 123.456 1.1) --> (124556,-3)   = 124.556
> (* 123.456 1.1) --> (13580160,-5) = 135.80160
> (/ 10.0 0.3)    --> (333,-1)      = 33.3
>
> (/ 1 3)      --> 1/3          ; rational
> (/ 1. 3.)    --> (33,-2)      = 0.33    ; decimal+precision
> (/ 1.0 3.00) --> (33333,-5)   = 0.33333 ; decimal+precision

What would you expect for (+ 123.456 1.1)? 6 significant digits + 2 significant digits --> 2 significant digits, no?

    (+ 123.456 1.1) --> 12. * 10^1 ?

If 1.1 means that we don't know the value beyond its being between 1.05 and 1.15:

    (+ 123.456 1.1) --> anything between 124.5055 and 124.6065 --> 124.55 ?

Well, not exactly: 124.55 would denote anything between 124.545 and 124.555, but the real result is anything between 124.5055 and 124.6065. Therefore you'll need to give the error interval too, so you cannot just read 1.1:

    #0.05d1.1 == [1.1-0.05, 1.1+0.05]
    (- #0.05d1.1 #0.05d0.9) --> #0.1d0.2   (anything between 0.1 and 0.3)

(Note that 0.200005 is well within [0.1, 0.3].)

The whole point is that it really depends on your application needs, and you must decide what rules you need to implement.

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in email?
__Pascal Bourguignon__                     http://www.informatimago.com/
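[Editor's note: Pascal's #0.05d1.1 notation amounts to interval arithmetic. This minimal Python rendering is our sketch (it ignores outward rounding of the endpoints) and reproduces his [0.1, 0.3] result.]

```python
def interval_sub(a, b):
    # [alo, ahi] - [blo, bhi] = [alo - bhi, ahi - blo]
    (alo, ahi), (blo, bhi) = a, b
    return (alo - bhi, ahi - blo)

# #0.05d1.1 is [1.05, 1.15]; #0.05d0.9 is [0.85, 0.95].
lo, hi = interval_sub((1.05, 1.15), (0.85, 0.95))
print(lo, hi)                  # roughly 0.1 and 0.3
print(lo <= 0.20000005 <= hi)  # True: the single-float result sits inside
```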