From: Bruno Haible <bruno@cl...>  2004-08-18 18:07:46

Sam asked:
> > CLN has a number of better algorithms than CLISP. For example,
> > multiplication of equally-sized integers is O(N^1.6) in CLISP (through
> > Karatsuba algorithm), but O(N^1.3) in CLN (through FFT). Also division
> > is much faster in CLN.
>
> any plans to copy this code back to CLISP?

I don't plan to do this, as

- Most of the researchers who need very fast bignums do their work in
  CA systems like Magma or Pari, or write C/C++ programs. _Very_ few
  of them use Lisp.

- The benchmarks show that usual Common Lisp users are already
  satisfied even with O(N^2) multiplication.

OTOH, if Richard Fateman or Tim Daly were to convincingly argue that
this is a bottleneck for a CA system, then one should look into it.

Bruno
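For reference, the two complexity classes compared above can be made concrete. The following is a minimal, hypothetical Python sketch of Karatsuba multiplication (not CLISP's actual implementation, which lives in C): the schoolbook O(N^2) method needs four half-size products, while Karatsuba gets away with three.

```python
def karatsuba(x, y):
    """Multiply non-negative integers via Karatsuba's divide-and-conquer:
    3 half-size products instead of 4, giving O(N^log2(3)) =~ O(N^1.585)
    limb operations instead of the schoolbook O(N^2)."""
    if x < 2**64 or y < 2**64:           # small operands: schoolbook is fine
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)  # split x = xh*2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)  # split y = yh*2^n + yl
    a = karatsuba(xh, yh)                    # high product
    b = karatsuba(xl, yl)                    # low product
    c = karatsuba(xh + xl, yh + yl) - a - b  # both cross terms, one product
    return (a << (2 * n)) + (c << n) + b
```

(The O(N^1.3) figure quoted for CLN comes from FFT-based multiplication, a different and considerably more involved algorithm.)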
From: Bruno Haible <bruno@cl...>  2004-08-18 20:05:48

Sam wrote:
> > - Most of the researchers who need very fast bignums do their work
> > in CA systems like Magma or Pari, or write C/C++ programs. _Very_
> > few of them use Lisp.
>
> so those few Maxima/CLISP users should be pushed to switch.

If Maxima/CLISP is slow, it's certainly _not_ because of clisp's
bignums. It's because Maxima uses a non-algorithmic "try this, try
that" evaluation model that spends most of its time looking for
simplification opportunities, rather than doing real computation. I
used (or tried to use) Macsyma for my diploma thesis, and it was about
1000 times slower than plain Common Lisp on rational number
computations.

> > - The benchmarks show that usual Common Lisp users are already
> > satisfied even with O(N^2) multiplication.
>
> I don't understand this.

When you look at http://www.cliki.net/Performance Benchmarks you see
that, apparently, many CL implementations are used despite their
miserable bignum performance.

> > OTOH, if Richard Fateman or Tim Daly were to convincingly argue that
> > this is a bottleneck for a CA system, then one should look into it.
>
> I doubt that either of them would bother to make a case.
> They will just use a different tool.

Or, more likely, they will not trust the speed of the bignums in the
underlying implementation, and write their own bignum package on top
of it. Do you have details about how Maxima and Axiom really do it?
1) bignums
2) rational numbers
3) floating-point numbers?

> You said that on some rare hardware GMP is faster than CLN/CLISP.
> Would it make sense to link CLISP against GMP?

It requires a lot of prior work. Namely, CLISP uses big-endian storage
order for the limbs of a bignum, whereas GMP uses little-endian
storage order. I did the required work for CLN, but I don't think that
fast bignums on ia64 and rs6000 for CLISP are worth the work.

Bruno
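The storage-order point above is the crux of any GMP bridge: both CLISP and GMP represent a bignum as an array of machine-word "limbs", but with the most significant limb first (CLISP) versus last (GMP). A hypothetical Python model of the two layouts (the function names and the 32-bit limb size are illustrative only):

```python
def to_limbs_be(n, bits=32):
    """Split a non-negative integer into big-endian limbs
    (most significant limb first), as CLISP stores bignums."""
    mask = (1 << bits) - 1
    limbs = []
    while n:
        limbs.append(n & mask)
        n >>= bits
    return list(reversed(limbs)) or [0]

def from_limbs_le(limbs, bits=32):
    """Reassemble an integer from little-endian limbs
    (least significant limb first), as GMP's mpn layer expects."""
    return sum(l << (i * bits) for i, l in enumerate(limbs))

# Bridging the two layouts amounts to reversing the limb array:
n = 0x0123456789ABCDEF
assert from_limbs_le(list(reversed(to_limbs_be(n)))) == n
```

In C, either every call into GMP would pay such an O(N) reversal, or the internal representation would have to be flipped once and for all — presumably the kind of prior work meant here.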
From: Sam Steingold <sds@gn...>  2004-08-18 20:40:27

> * Bruno Haible <oehab@...> [2004-08-18 22:02:46 +0200]:
>
>> > - The benchmarks show that usual Common Lisp users are already
>> > satisfied even with O(N^2) multiplication.
>>
>> I don't understand this.
>
> When you look at http://www.cliki.net/Performance Benchmarks
> you see that, apparently, many CL implementations are used despite
> their miserable bignum performance.

bignums are perceived as a "fail-safe" mechanism for fixnum overflow,
not as a "tool in itself", so they are not optimized as they should be
(cf. the requirement that ARRAY-TOTAL-SIZE-LIMIT is a fixnum).

>> > OTOH, if Richard Fateman or Tim Daly were to convincingly argue that
>> > this is a bottleneck for a CA system, then one should look into it.
>>
>> I doubt that either of them would bother to make a case.
>> They will just use a different tool.
>
> Or, more likely, they will not trust the speed of the bignums in the
> underlying implementation, and write their own bignum package on top
> of it.

why would anyone implement anything himself if an adequate
off-the-shelf tool is already available?! besides, I doubt that
something written "on top of clisp" can be as fast as clisp internals.

> Do you have details about how Maxima and Axiom really do it?
> 1) bignums
> 2) rational numbers
> 3) floating-point numbers?

I have always assumed they relied on the underlying implementation.

>> You said that on some rare hardware GMP is faster than CLN/CLISP.
>> Would it make sense to link CLISP against GMP?
>
> It requires a lot of prior work. Namely, CLISP uses big-endian storage
> order for the limbs of a bignum, whereas GMP uses little-endian
> storage order. I did the required work for CLN, but I don't think that
> fast bignums on ia64 and rs6000 for CLISP are worth the work.

well, I just barely managed to get us an x86_64 instead of an ia64.
actually, it is conceivable that the bums will still buy an ia64. so
ia64 is not as irrelevant as it seems...

-- 
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
My inferiority complex is the only thing I can be proud of.
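The "fail-safe" remark above refers to fixnum arithmetic silently promoting to a bignum on overflow. A hypothetical sketch of that promotion check (the 24-bit fixnum width is purely illustrative, not CLISP's actual tag layout):

```python
FIXNUM_BITS = 24                              # illustrative width only
MOST_POSITIVE_FIXNUM = (1 << FIXNUM_BITS) - 1
MOST_NEGATIVE_FIXNUM = -(1 << FIXNUM_BITS)

def add_with_promotion(a, b):
    """Add two integers and report whether the result still fits in a
    fixnum or had to be 'promoted' to a bignum representation."""
    r = a + b
    if MOST_NEGATIVE_FIXNUM <= r <= MOST_POSITIVE_FIXNUM:
        return r, "fixnum"
    return r, "bignum"    # the fail-safe path: allocate a bignum
```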
From: Bruno Haible <bruno@cl...>  2004-08-18 20:52:01

Sam wrote:
> > Do you have details about how Maxima and Axiom really do it?
> > 1) bignums
> > 2) rational numbers
> > 3) floating-point numbers?
>
> I have always assumed they relied on the underlying implementation.

Axiom's rational numbers and Maxima's floating-point numbers are not
based on the corresponding facility of the underlying Lisp. What about
the bignums?

> well, I just barely managed to get us an x86_64

You can write an ari*.d file for a new CPU in a day (after getting
familiar with the CPU's assembly language). You can also use some
tricks from gmp while doing that.

Bruno
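The ari*.d files mentioned above supply the per-CPU limb primitives on which everything else is built. A hypothetical Python model of one such primitive — multi-limb addition with carry propagation, the kind of inner loop an ari*.d file hand-codes in assembly:

```python
LIMB_BITS = 32                   # illustrative machine word size
LIMB_MASK = (1 << LIMB_BITS) - 1

def add_limbs(a, b):
    """Add two equal-length little-endian limb arrays limb by limb,
    propagating the carry; returns (result_limbs, final_carry)."""
    assert len(a) == len(b)
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & LIMB_MASK)   # low word of the 33-bit sum
        carry = s >> LIMB_BITS      # 0 or 1, into the next limb
    return out, carry
```

In assembly this loop collapses to one add-with-carry instruction per limb, which is why a hand-written ari*.d (or GMP's mpn routines) can beat the portable C fallback.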
From: Bruno Haible <bruno@cl...>  2004-08-30 14:10:33

Sam wrote:
> You said that on some rare hardware GMP is faster than CLN/CLISP.
> Would it make sense to link CLISP against GMP?

Here are benchmarks of CLN, built on a PowerPC machine
a) with GMP,
b) without GMP (i.e. with the same low-level routines as in clisp,
   namely for PowerPC the portable C version).

N = 100
Function        CLN-1.1.8   CLN-1.1.8
                no gmp      gmp-4.1.2
multiplication  0.0000069   0.0000066
division        0.0000047   0.0000047
isqrt           0.0000024   0.0000024
gcd             0.0000163   0.0000163

N = 1000
Function        CLN-1.1.8   CLN-1.1.8
                no gmp      gmp-4.1.2
multiplication  0.000197    0.000192
division        0.000179    0.000179
isqrt           0.000021    0.000021
gcd             0.000603    0.000600

N = 10000
Function        CLN-1.1.8   CLN-1.1.8
                no gmp      gmp-4.1.2
multiplication  0.0082      0.0081
division        0.0156      0.0155
isqrt           0.0009      0.0010
gcd             0.0449      0.0445

N = 100000
Function        CLN-1.1.8   CLN-1.1.8
                no gmp      gmp-4.1.2
multiplication  0.087       0.087
division        0.27        0.27
isqrt           0.132       0.133
gcd             3.28        3.26

You see that the timing differences are minimal.

Bruno
From: Sam Steingold <sds@gn...>  2004-08-30 15:16:42

> * Bruno Haible <oehab@...> [2004-08-30 16:07:08 +0200]:
>
> Sam wrote:
>> You said that on some rare hardware GMP is faster than CLN/CLISP.
>> Would it make sense to link CLISP against GMP?
>
> You see that the timing differences are minimal.

indeed. nevertheless it appears that GMP is now actively maintained,
so it might make sense to outsource numerics to GMP: this way we will
benefit from any advance in the state of the art for free! linking
against libgmp.so will also reduce image size.

-- 
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
Any programming language is at its best before it is implemented and used.
From: Bruno Haible <bruno@cl...>  2004-08-30 15:46:50

Sam wrote:
> nevertheless it appears that GMP is now actively maintained,

The last GMP release is 20 months old. Does that count as "actively
maintained"??

Bruno
From: Sam Steingold <sds@gn...>  2004-08-30 16:11:14

> * Bruno Haible <oehab@...> [2004-08-30 17:43:27 +0200]:
>
> Sam wrote:
>> nevertheless it appears that GMP is now actively maintained,
>
> The last GMP release is 20 months old.

how long ago was the latest change in the CLISP code that reimplements
the GMP functionality?

> Does that count as "actively maintained"??

I don't know and I don't care. You decide yourself. I prefer to
outsource as much as possible.

-- 
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
Linux - find out what you've been missing while you've been rebooting Windows.
From: Sam Steingold <sds@gn...>  2004-08-18 19:06:13

> * Bruno Haible <oehab@...> [2004-08-18 20:04:58 +0200]:
>
> - Most of the researchers who need very fast bignums do their work
> in CA systems like Magma or Pari, or write C/C++ programs. _Very_
> few of them use Lisp.

so those few Maxima/CLISP users should be pushed to switch.

> - The benchmarks show that usual Common Lisp users are already
> satisfied even with O(N^2) multiplication.

I don't understand this.

> OTOH, if Richard Fateman or Tim Daly were to convincingly argue that
> this is a bottleneck for a CA system, then one should look into it.

I doubt that either of them would bother to make a case. They will
just use a different tool.

You said that on some rare hardware GMP is faster than CLN/CLISP.
Would it make sense to link CLISP against GMP?

-- 
Sam Steingold (http://www.podval.org/~sds) running w2k
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.honestreporting.com>
There is an exception to every rule, including this one.