From: Michel T. <ta...@lp...> - 2016-02-05 13:27:07
On 05/02/2016 06:34, Gunter Königsmann wrote:
> So here's this crazy thing that always bothered me: in most engineering
> problems the input data is accurate to a few percent, i.e. has two or
> three significant digits, well described by a 16-bit half-precision
> floating-point format with a 3-significant-digit mantissa.
>
> The main reason we use 32-bit 6-digit single precision or 64-bit
> 15-digit double precision numbers is that we have fallen into
> complacency, relying on the excess precision preserving enough accuracy
> from the input to the result. As most people on this list know, this
> often pans out, but mostly by dumb luck and black magic: many common
> algorithms' error-propagation properties are poorly known and weakly
> understood. Recently someone here published a beautiful example of a
> simple polynomial that calculates consistent but incorrect results in
> both single and double precision, which is a great trick to show to
> those who consider cross-checking of single and double precision
> results to be the ultimate sophistication in error analysis.
>
> I really believe that if you care about accuracy you need to either
> calculate the error propagation or use interval arithmetic. If you
> don't care, you might as well live with 1/3 = 0.33333.

I can only concur with what you are saying. In fact, catastrophic loss
of precision can occur in the simplest and most down-to-earth examples.
Rump's example is excessively contrived, so one may think this is the
sort of accident that doesn't happen to us.
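For reference, Rump's example can be sketched in Python (a rough
illustration, not part of the original exchange; the function name
`rump` and the conversion helper `q` are mine). Evaluated in IEEE-754
doubles the expression comes out wildly wrong, while exact rational
arithmetic with `fractions.Fraction` gives the true value:

```python
from fractions import Fraction

def rump(a, b, q):
    # f(a,b) = 333.75 b^6 + a^2 (11 a^2 b^2 - b^6 - 121 b^4 - 2)
    #          + 5.5 b^8 + a/(2b)
    # q(p, s) converts the exact rational p/s into the working arithmetic.
    return (q(1335, 4) * b**6
            + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + q(11, 2) * b**8
            + a / (2 * b))

# IEEE-754 doubles: catastrophic cancellation between terms of size ~1e37
# leaves a wildly wrong answer (around -1.18e21 on typical platforms).
f_float = rump(77617.0, 33096.0, lambda p, s: p / s)

# Exact rational arithmetic gives the true value, about -0.8273960599.
f_exact = rump(Fraction(77617), Fraction(33096), Fraction)

print(f_float)
print(float(f_exact))
```

The instructive part is that the float answer is not merely inaccurate
in the last digits: it has the wrong magnitude and (depending on the
platform) possibly the wrong sign, with no warning whatsoever.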
But let us consider the following very simple example:

(%i1) p:prod(x-i/11,i,1,10)$
(%i2) q:float(expand(p))$
(%i3) float(subst(x=1/3,p));
(%o3) - 5.779969616435742e-8
(%i4) subst(x=1.0/3.0,q);
(%o4) - 5.779969615892568e-8
(%i5) allroots(q);
(%o5) [x = 0.09090909090909007, x = 0.1818181818181877,
       x = 0.2727272727262559, x = 0.3636363636426359,
       x = 0.4545454545306152, x = 0.5454545454688162,
       x = 0.6363636363647851, x = 0.7272727272593302,
       x = 0.8181818181917098, x = 0.9090909090885744]
(%i6) makelist(float(i/11),i,1,10);
(%o6) [0.09090909090909091, 0.1818181818181818, 0.2727272727272727,
       0.3636363636363636, 0.4545454545454545, 0.5454545454545454,
       0.6363636363636364, 0.7272727272727273, 0.8181818181818182,
       0.9090909090909091]

So we see that with a moderately sized product of ten terms, and no very
large or very small arguments anywhere in the computation, evaluating
the polynomial at 1/3 in floats loses about 7 digits of precision.
Fortunately the numerical computation of the roots is not as unstable.

In your problem, where you have a number of capacitors, inductors and
resistors, the impedance Z(omega) is a rational function of omega and of
the component values. What you see on the oscilloscope is the evaluation
of this rational function as a function of the frequency omega, to a
precision of the order of 10%. But you can expect the relationship
between this curve and the values of the coefficients to be highly
unstable. This makes it a challenging problem to tune, e.g., the
capacitors to obtain a prescribed function Z(omega), e.g. to build a
filter, in the presence of other elements (resistors, inductors, etc.)
whose exact values are also not known precisely. Computing with exact
rationals at least avoids the sort of loss of precision shown above.

-- 
Michel Talon
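[Editorial addendum] For readers without Maxima at hand, the session
above can be reproduced with a rough Python sketch (the helper names
`coeffs` and `horner` are mine): build the exact coefficients of
p(x) = prod(x - i/11) with `fractions.Fraction`, then compare exact
evaluation at 1/3 against Horner evaluation on float coefficients:

```python
from fractions import Fraction

# Exact coefficients of p(x) = prod_{i=1..10} (x - i/11),
# lowest degree first, built by repeated multiplication by (x - i/11).
coeffs = [Fraction(1)]
for i in range(1, 11):
    r = Fraction(i, 11)
    new = [Fraction(0)] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        new[k + 1] += c      # contribution of x * (c x^k)
        new[k] -= r * c      # contribution of -r * (c x^k)
    coeffs = new

def horner(cs, x):
    """Evaluate sum_k cs[k] x^k (lowest degree first) by Horner's rule."""
    acc = cs[-1] * 0         # zero of the right numeric type
    for c in reversed(cs):
        acc = acc * x + c
    return acc

exact = horner(coeffs, Fraction(1, 3))       # exact rational p(1/3)
approx = horner([float(c) for c in coeffs],  # float coefficients, as in q
                1.0 / 3.0)

print(float(exact))   # agrees with Maxima's %o3
print(approx)         # the trailing digits differ: precision is lost
```

The exact value is -88524800/33^10, about -5.78e-8; the float-coefficient
evaluation agrees only in its leading digits, even though every
intermediate quantity in the computation is of moderate size.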