From: Richard F. <fa...@be...> - 2014-09-21 14:36:03
|
On 9/21/2014 12:59 AM, Jan Hendrik Mueller wrote:
> Maybe I overlooked that: What was the "old" way maxima computed e and pi?
> Is there a possibility to look at these codes for non developers?
> bw
> Jan

You can download the current source code and look at the file float.lisp. Or you can presumably download the source code for just float.lisp: https://github.com/andrejv/maxima/tree/master/src and click on float.lisp.

The code in float.lisp was originally written by me (RJF), circa 1974, but has been massively rewritten and extended by others, especially Ray Toy. But others too. (In spite of the appearance of the copyright by Bill Schelter (1984, 1987), that's just what he did to all the source files when he got them from the Dept of Energy library. I think he made no significant changes to float.lisp.) |
From: Henry B. <hb...@pi...> - 2014-09-21 15:48:17
|
How come Maxima doesn't use "spigot algorithms" for pi, e, ln(2), etc. ? https://en.wikipedia.org/wiki/Spigot_algorithm |
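For concreteness, a spigot algorithm emits the digits of a constant one at a time, left to right, using only integer arithmetic and without ever holding a full-precision value. Here is a minimal Python sketch of Gibbons' unbounded spigot for pi; this is purely illustrative and has nothing to do with Maxima's actual code:

```python
def pi_spigot(num_digits):
    """Gibbons' unbounded spigot: stream decimal digits of pi left to right."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < num_digits:
        if 4 * q + r - t < n * t:
            digits.append(n)  # n is the next confirmed digit
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # not enough information yet: fold in another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
    return digits
```

Note that the state (q, r, t) grows as digits are produced, which is part of why spigots lose to the fast converging series at high precision.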
From: Stavros M. (Σ. Μ. <mac...@al...> - 2014-09-21 16:59:00
|
Why should it? Memory is plentiful these days, and spigot algorithms are slower than the alternative.

On Sun, Sep 21, 2014 at 11:47 AM, Henry Baker <hb...@pi...> wrote:
> How come Maxima doesn't use "spigot algorithms" for pi, e, ln(2), etc. ?
>
> https://en.wikipedia.org/wiki/Spigot_algorithm |
From: Richard F. <fa...@be...> - 2014-09-21 16:10:23
|
Thanks for the history. This is the kind of thing I would like to see in the file, not hidden in the source-code-control comments or entirely missing...

The original methods were the ones I wrote in 1974. An efficient computation is usually not very important, since a given run will probably call the program rarely. Once a high-precision value is computed, it is cached, and all lower-precision requests are done by rounding down.

On the other hand, people sometimes compare systems by metrics like "How long does it take to compute the 100,000-digit value of pi?"

In Mathematica 9, Timing[N[Pi,10000]] takes 0.0312 seconds (the second time takes 0. seconds), while in Maxima 5.31.2 on GCL 2.6.8, fpprec:10000 takes 0.03 seconds (the second time takes 0.0 seconds), and then bfloat(%pi)$ takes 0.22 seconds (the second time takes 0.0 seconds). So obviously Mathematica 9 is 7 times better than Maxima :)

I don't know what the underlying bignum integer arithmetic is on the two systems, whether GCL or Mathematica is using GMP arithmetic, and if so, whether it is GMP arithmetic tuned to the Pentium 4 I am using (tuned asm code runs several times faster than generic C), and how it compares to MPFR's code, which includes the following:

 -- Function: int mpfr_const_log2 (mpfr_t rop, mpfr_rnd_t rnd)
 -- Function: int mpfr_const_pi (mpfr_t rop, mpfr_rnd_t rnd)
 -- Function: int mpfr_const_euler (mpfr_t rop, mpfr_rnd_t rnd)
 -- Function: int mpfr_const_catalan (mpfr_t rop, mpfr_rnd_t rnd)

The code for computing pi in MPFR can be found here: https://github.com/dschatzberg/mpfr/blob/master/src/const_pi.c but there is no useful comment there describing the method. And the time would be heavily dependent, I think, on the GMP version anyway.

In general I defer to people who devote large parts of their waking hours to optimizing these procedures, and so loading MPFR into Maxima as the back end for all of bigfloats would be an interesting prospect. I have, from time to time, loaded MPFR into Lisp for experiments, like: can you multiply polynomials fast by chunking the coefficients into very long integers? Except that only some Lisps allow loading foreign libraries, so not all Maximas (esp. GCL versions) can do it.

An alternative to loading MPFR is to encourage Volker to do a really nice job. Thanks, Volker!

There is a lot of interest in some circles in "experimental mathematics" around computing lots of digits of pi; see http://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula

I do not know if the splitting formula is the very fastest. I found this reference: http://www.craig-wood.com/nick/articles/pi-chudnovsky/ which is not the most erudite discussion, but he claims his new code is faster than "BBP". However, most of the time is spent in computing the sqrt of an integer.

Oh, while I'm talking about foreign-language libraries, there's a bunch of people using Python developing sympy. It might be plausible to link the sympy library to Maxima, if there are features in it that are of interest. I think that much of the sympy development to date has been to (mostly naively) duplicate superficial features of Maxima ... superficial because they may depend on procedures that are less effective and run slower. Eventually there may be situations where the Python procedure goes further and/or faster. I wouldn't expect Python generally to be faster than compiled Lisp, but particular procedures might have better algorithms, and who knows if Python will eventually become "fast".

There is, for example, nothing to prevent someone including more features in the sympy programs for "solve" or "integrate". Although my initial impression is that sympy authors are fully capable of making bad design choices that they would not make if they studied the literature or the "competition".

RJF

On 9/21/2014 8:19 AM, Volker van Nek wrote:
> Hendrik, some Details on e and pi.
>
> The computation of e and pi is done by the functions fpe1 resp. fppi1,
> which are, as Richard Fateman already posted, in src/float.lisp.
>
> Before 2007 e was computed by its Taylor series and pi by the formula
> pi = 6*asin(1/2) and the asin series.
>
> In 2007 I replaced the asin formula by the Chudnovsky formula, and for
> the computation of e I implemented a mechanism that combines a lot of
> Taylor summands into one. This mechanism uses the fact that division is
> expensive and multiplication of large integers is much cheaper.
> At that time I didn't know binary splitting. Binary splitting is also a
> mechanism to combine Taylor summands and also uses large integer
> multiplication, but in contrast to my approach it is a recursive divide
> and conquer algorithm and provides better timings for high precisions.
>
> Very soon I will replace the current versions by the binary splitting
> variants (which are attached to my first mail in this thread).
>
> Volker van Nek
>
> Am 21.09.2014 09:59, schrieb Jan Hendrik Mueller:
>> Maybe I overlooked that: What was the "old" way maxima computed e and pi?
>> Is there a possibility to look at these codes for non developers?
>> bw
>> Jan
>>
>>> Am 20.09.2014 um 22:53 schrieb Raymond Toy <toy...@gm...>:
>>>
>>>> On Sat, Sep 20, 2014 at 2:58 AM, Volker van Nek <vol...@gm...> wrote:
>>>> Am 19.09.2014 20:35, schrieb Raymond Toy:
>>>>>>>>>> "Volker" == Volker van Nek <vol...@gm...> writes:
>>>>> Volker> [ I also implemented exp(x), log(x), sin(x) and friends for bigfloat x.
>>>>> Volker> In contrast to fpe1, .. these implementations are still some kind of raw
>>>>> Volker> material and need more error analysis. And of course it's a lot more
>>>>> Volker> difficult to merge them into Maxima. My idea is to put them into
>>>>> Volker> /share/contrib for now. ]
>>>>>
>>>>> Is there something wrong with the current implementations of these
>>>>> functions?
>>>>>
>>>>> Ray
>>>>>
>>>> Ray, as I wrote in my last sentence I do not feel any pressure to
>>>> replace the current functions in src.
>>>>
>>>> After coding the binary splitting versions for the constants I was
>>>> curious to check out the other algorithms described in this paper.
>>>>
>>>> I thought it is worth sharing the implementation of these algorithms.
>>>> Perhaps someone can use them or is simply interested in studying them.
>>>>
>>>> The binary splitting variants are very fast, and in some far future
>>>> these implementations might be worth considering. At the moment I don't
>>>> want to risk breaking anything in src for the sake of speed advantages.
>>> Ok. I haven't looked at your code yet, but if they're significantly
>>> faster and not (much) less accurate, we should consider adding these.
>>> I'm sure we can come up with a nice set of tests to cover the code to
>>> show that it's good.
>>>> Volker
>>>
>>> --
>>> Ray
|
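The binary splitting scheme Volker describes for e can be sketched in a few lines. This is an illustrative Python model, not the Lisp code in float.lisp: the partial sums of sum(1/k!) are combined pairwise as exact integer fractions, so that almost all the work is large-integer multiplication rather than division:

```python
def e_bsplit(a, b):
    """Binary splitting for S(a,b) = sum_{k=a+1}^{b} 1/((a+1)*(a+2)*...*k).
    Returns integers (p, q) with S(a,b) = p/q and q = (a+1)*(a+2)*...*b."""
    if b - a == 1:
        return 1, b                       # base case: S(a,a+1) = 1/(a+1)
    m = (a + b) // 2
    p1, q1 = e_bsplit(a, m)
    p2, q2 = e_bsplit(m, b)
    # S(a,b) = S(a,m) + S(m,b)/q1, combined with two big multiplications
    return p1 * q2 + p2, q1 * q2

def e_digits(d):
    """Return floor(e * 10**d): e to d decimal places, as an integer."""
    # choose enough terms: stop once n! exceeds 10**(d+5)
    n, f = 1, 1
    while f < 10 ** (d + 5):
        n += 1
        f *= n
    p, q = e_bsplit(0, n)                 # p/q = sum_{k=1}^{n} 1/k!
    return (10 ** d * (p + q)) // q       # e is approximately 1 + p/q
```

The recursion keeps the operands balanced in size, which is what lets subquadratic bignum multiplication (where the underlying Lisp has it) pay off.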
From: Volker v. N. <vol...@gm...> - 2014-09-21 17:00:05
|
To make the long story short: GCL and ECL currently use GMP version 3. And it seems to me that the binary splitting algorithms, combined with Chudnovsky for PI and Brent/McMillan for GAMMA, are something like the state of the art at the moment if you want fast programs for high precisions.

The combination of GCL/ECL and my proposed implementations shows remarkable timings. Your below-mentioned 10000 digits of PI are now done in about 5 milliseconds (on my slow 32-bit notebook). On a 64-bit machine with a 3.0 GHz CPU I computed 1 million digits in 1.1 seconds. The timings on https://gmplib.org/pi-with-gmp.html for GMP 4.x aren't better. :-}

Am 21.09.2014 18:10, schrieb Richard Fateman:
> in Mathematica 9, Timing[N[Pi,10000]] takes 0.0312 seconds.
> (the second time takes 0. seconds).
> while in maxima 5.31.2, gcl 2.6.8
> fpprec:10000 takes 0.03 seconds
> then
> bfloat(%pi)$ takes 0.22 seconds
|
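The Chudnovsky-plus-binary-splitting combination Volker mentions can be sketched as follows. This is a Python illustration following the structure described on the craig-wood page cited earlier (pi = 426880*sqrt(10005)*Q/T with P, Q, T built by binary splitting); it is not Volker's attached Lisp code:

```python
from math import isqrt

C = 640320
C3_OVER_24 = C ** 3 // 24

def bs(a, b):
    """Binary splitting over terms [a, b) of the Chudnovsky series."""
    if b - a == 1:
        if a == 0:
            Pab = Qab = 1
        else:
            Pab = (6 * a - 5) * (2 * a - 1) * (6 * a - 1)
            Qab = a * a * a * C3_OVER_24
        Tab = Pab * (13591409 + 545140134 * a)
        if a & 1:                          # alternating signs
            Tab = -Tab
        return Pab, Qab, Tab
    m = (a + b) // 2
    Pam, Qam, Tam = bs(a, m)
    Pmb, Qmb, Tmb = bs(m, b)
    return Pam * Pmb, Qam * Qmb, Tam * Qmb + Pam * Tmb

def pi_digits(digits):
    """Return floor(pi * 10**digits) as an integer."""
    terms = digits // 14 + 2               # each term yields ~14.18 digits
    P, Q, T = bs(0, terms)
    sqrt_10005 = isqrt(10005 * 10 ** (2 * digits))  # the integer sqrt step
    return (Q * 426880 * sqrt_10005) // T
```

Note how the integer square root sits outside the splitting recursion, which is why it dominates the runtime at high precision, exactly the point made above.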
From: Volker v. N. <vol...@gm...> - 2014-09-21 17:28:14
|
Am 21.09.2014 18:10, schrieb Richard Fateman:
>
> I do not know if the splitting formula is the very fastest... I found
> this reference
> http://www.craig-wood.com/nick/articles/pi-chudnovsky/
>
> which is not the most erudite discussion, but he claims his new code is
> faster than "BBP".
> However most of the time is spent in computing the sqrt of an integer.
>
Short remark: That was the page where I found the link to the paper of Haible and Papanikolaou.

Yes, the sqrt part of the Chudnovsky formula is very time consuming. For this problem I found an optimization. (I spent a lot of time on this problem and experimented with some alternatives. A binary-split Taylor series for the sqrt showed very good results, but I ended up with an optimized Heron.) |
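For reference, a plain integer Heron (Newton) iteration for floor(sqrt(n)) looks like the sketch below; Volker's optimized variant presumably differs in its starting guess and in doubling the working precision per step, so treat this only as the baseline being optimized:

```python
def heron_isqrt(n):
    """floor(sqrt(n)) for a nonnegative integer n, by Heron/Newton iteration."""
    if n < 2:
        return n
    x = 1 << ((n.bit_length() + 1) // 2)  # power-of-two guess >= sqrt(n)
    while True:
        y = (x + n // x) // 2             # Heron step, integer-only
        if y >= x:                        # iterates decrease until they
            return x                      # settle on floor(sqrt(n))
        x = y
```

Starting above the root guarantees the iterates decrease monotonically, so the first non-decreasing step signals convergence.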
From: Henry B. <hb...@pi...> - 2014-09-21 17:37:03
|
Can't you run certain spigot algorithms in parallel with different starting points?

At 09:58 AM 9/21/2014, Σταῦρος Μακράκης wrote:
>Why should it? Memory is plentiful these days, and spigot algorithms are slower than the alternative.
>
>On Sun, Sep 21, 2014 at 11:47 AM, Henry Baker <hb...@pi...> wrote:
>How come Maxima doesn't use "spigot algorithms" for pi, e, ln(2), etc. ?
>
>https://en.wikipedia.org/wiki/Spigot_algorithm |
From: Stavros M. (Σ. Μ. <mac...@al...> - 2014-09-21 18:10:03
|
"Spigot" algorithms are left-to-right. There are "digit N" algorithms, but they are even slower.

On Sun, Sep 21, 2014 at 1:36 PM, Henry Baker <hb...@pi...> wrote:
> Can't you run certain spigot algorithms in parallel with different
> starting points?
>
> At 09:58 AM 9/21/2014, Σταῦρος Μακράκης wrote:
> >Why should it? Memory is plentiful these days, and spigot algorithms are
> slower than the alternative.
> >
> >On Sun, Sep 21, 2014 at 11:47 AM, Henry Baker <hb...@pi...> wrote:
> >How come Maxima doesn't use "spigot algorithms" for pi, e, ln(2), etc. ?
> >
> >https://en.wikipedia.org/wiki/Spigot_algorithm |
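The best-known "digit N" algorithm is the BBP formula, which yields the N-th hexadecimal digit of pi without computing the preceding ones, and those independent starting points are indeed parallelizable. A floating-point Python sketch, reliable only for small N because of double-precision limits:

```python
def bbp_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n = 0 gives 2),
    via the Bailey-Borwein-Plouffe formula; valid for small n only."""
    def S(j):
        # fractional part of sum_{k>=0} 16**(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            # modular exponentiation keeps every term's integer part out
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:                         # tail terms shrink by 16x each
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0
    frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return int(frac * 16)
```

Each digit costs O(n log n)-ish work on its own, so generating all digits this way is far slower than one Chudnovsky run, which is Stavros' point.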
From: Raymond T. <toy...@gm...> - 2014-09-22 04:10:18
|
On Sun, Sep 21, 2014 at 9:10 AM, Richard Fateman <fa...@be...> wrote:
> Thanks for the history. This is the kind of thing I would like to see
> in the file, not hidden in the source-code-control comments or entirely
> missing...

I somewhat agree and somewhat disagree. When a new algorithm is being implemented, I do find it useful to have both versions available; that is very useful for testing and comparing the two. But after a while, when the new one is fully tested, keeping all the old versions around just bloats the file and makes it harder to read. So in that respect, I want the old version to be removed.

This is where good check-in comments can help a lot in finding when the old version was deleted. Well-named tags and branches are also helpful. Or maybe even requiring developers to produce pull requests, as done on some projects and available on GitHub and elsewhere, would be beneficial. (But I'm not currently advocating this latter approach.)

--
Ray |
From: Barton W. <wi...@un...> - 2014-09-22 13:09:41
|
For a speed advantage, binary splitting algorithms generally require multiplication that is faster than O(n * m), I think. As far as I know, Maxima doesn't have fast integer multiplication.

An exception is the factorial function: the speed advantage of a splitting method there comes from organizing the multiplications in a way that reduces the multiplication times (and leverages the fact that multiplication by two is super fast). Maxima does have this method; I think Richard Fateman contributed it some years ago. Applied to fixed-length integers, I think the splitting method for factorials doesn't have all that much advantage over a simple-minded approach?

Big-O estimates that assume fixed-length integers aren't sufficient to compare algorithms that use non-fixed-length integers. Maybe Maxima needs fast integer multiplication to make binary splitting methods viable?

--Barton |
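For readers unfamiliar with what "fast integer multiplication" means here: the simplest sub-quadratic method is Karatsuba's, which replaces four half-size products with three. A Python sketch for illustration only (CPython's own bignums already apply Karatsuba internally, and the 64-bit threshold below is an arbitrary choice, not anyone's tuned value):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers:
    three half-size products instead of four, roughly O(n**1.585)."""
    if x < (1 << 64) or y < (1 << 64):
        return x * y                      # base case: small enough to multiply directly
    half = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> half, x & ((1 << half) - 1)   # split x = xh*2**half + xl
    yh, yl = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(xh, yh)
    lo = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hi - lo  # the Karatsuba trick
    return (hi << (2 * half)) + (mid << half) + lo
```

Binary splitting arranges its operands to be roughly equal-sized precisely so that recursions like this (or GMP's Toom/FFT multipliers) get balanced inputs.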
From: Raymond T. <toy...@gm...> - 2014-09-23 05:48:45
|
On Mon, Sep 22, 2014 at 5:55 AM, Barton Willis <wi...@un...> wrote:
> For a speed advantage, binary splitting algorithms generally require
> multiplication that is faster than O(n * m), I think. As far as I know,
> Maxima doesn't have fast integer multiplication.

AFAICT, Maxima just does the bignum multiplication and lets the underlying Lisp do any optimization. clisp and gcl have quite fast multipliers based on fast algorithms (clisp) or GMP (gcl). sbcl and cmucl can use Karatsuba multiplication for sufficiently large bignums. I don't know exactly what sbcl uses, but cmucl switches to Karatsuba when the numbers are about 512 bits or larger.

--
Ray |
From: Raymond T. <toy...@gm...> - 2014-09-23 16:28:33
|
>>>>> "Volker" == Volker van Nek <vol...@gm...> writes:
>>
>> I would look at what bigfloatp does in the case where fpprec is lower
>> than the actual precision of the bigfloat. It adds the value of *m
>> into the exponent.

Volker> Yeah! bigfloatp does it. Now I understand the mechanism. Fantastic!

I think it would be beneficial if fpround returned both the rounded value and *m (as multiple values) instead of what it does now. It makes it clearer what is supposed to happen, and one less global special variable is a good thing.

Ray |
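A hypothetical model of Ray's suggestion, in Python for clarity: a rounding routine that returns the rounded mantissa together with the exponent adjustment as a pair of values, instead of communicating the adjustment through a global like *m. The name and behavior here are illustrative only, not the actual fpround in float.lisp:

```python
def fpround(mantissa, prec):
    """Round a positive integer mantissa to prec bits (round half to even).
    Returns (rounded, exp_adjust): the caller adds exp_adjust to the
    exponent, rather than reading it back from a global variable."""
    excess = mantissa.bit_length() - prec
    if excess <= 0:
        return mantissa, 0                 # already fits: no adjustment
    half = 1 << (excess - 1)
    low = mantissa & ((1 << excess) - 1)   # bits being dropped
    r = mantissa >> excess
    if low > half or (low == half and (r & 1)):
        r += 1                             # round up (ties go to even)
        if r.bit_length() > prec:          # carry rippled into a new top bit
            r >>= 1
            excess += 1
    return r, excess
```

Returning the pair makes the caller's bookkeeping explicit, which is exactly the readability point about avoiding the special variable.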