From: Nicolas N. <Nic...@iw...> - 2004-05-07 08:43:17
si...@EE... writes:
> Hi Nicolas;
>
> Wow. You're doing some impressive work. I like your ideas.
Thank you very much.
> But ... there is one thing you may not be aware of. IIRC, someone
> correct me if I'm wrong, one of the main reasons why we did matlisp the
> way it is was to avoid writing such routines in lisp altogether. When I
> first contacted Ray, he had already worked out the generic foreign
> wrapper suitable for fortran code. I then wrote a script to generate the
> wrapper code automagically from the lapack files. So the idea was not to
> write basic matrix operations at all.
I think that Matlisp is an important proof of concept, namely that CL
can really take full advantage of foreign libraries. Some time ago, I spoke
with Folkmar Bornemann (a numerical analyst at the Technical University in
Munich) about Femlisp, and he advocated the use of Matlab/Femlab instead.
One of his reasons was that Matlab has access to the high-performance ATLAS
library. At that time, I did not know that Matlisp could use ATLAS, but
now I feel very comfortable that I can get this speed also from within CL
if I really should need it.
I see the BLAS development in Femlisp as another proof of concept. My
goals were:
1. It should be sufficient for Femlisp's needs, thus allowing Femlisp to
   run with more or less every ANSI CL implementation. This is possible
   because Femlisp uses iterative solvers, which need only a few basic
   operations on full matrices.
2. The technique should be extensible to sparse matrices. In fact, this
   will be the most important step, which I plan to tackle in the next
   months. As far as I know, there is no standard Fortran library for
   sparse matrices either. Furthermore, Femlisp has very special needs
   (dynamic modification of the matrix structure, etc.).
3. It should be as concise as possible in source code, and it should
   nevertheless be as fast as you can get with CL. Especially at this
   point, the dynamic features of CL fit wonderfully into the picture.
Point 3 made it necessary to deviate from Matlisp in several ways.
E.g., I used a different class representation with more general class names,
and I removed optional parameters, which incur a performance penalty on
method calls. However, the changes are not large and it should be easy
to transfer code between Matlisp and "CL-Matlisp". The most noticeable
change is that I did not use the [] read macro, but implemented a simpler
one dispatching on #m. Another one is that I generally use MREF/VREF
instead of MATRIX-REF.
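For illustration, the accessor change looks roughly like this (a sketch
only; MAKE-REAL-MATRIX is a placeholder constructor, not necessarily the
actual Femlisp name):

 (let ((a (make-real-matrix 2 2)))   ; placeholder constructor
   ;; Matlisp style:
   ;;   (setf (matrix-ref a 0 1) 2.0)
   ;; CL-Matlisp style: MREF for matrices, VREF for vectors:
   (setf (mref a 0 1) 2.0)
   (mref a 0 1))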
> Having said that, matlisp has evolved along the way, with contributions
> from select individuals like yourself. I think it may be time to
> consider more drastic functionality of the kind you're suggesting. I,
> unfortunately, personally would not be able to participate in the
> development but I think there are people out there who might be.
>
> My two cents, Tunc
In any case, I think that we should be aware of each other's development.
Maybe Matlisp can try to reduce its method-call overhead a little bit
along the lines shown in Femlisp. For example, you could check matrix
compatibility without a :BEFORE method, or you could remove optional
parameters.
+--
|Example: I have generic functions GEMM-NN!, GEMM-NT!, GEMM-TN!, GEMM-TT!
|and a dispatch function
|
| (defun gemm! (alpha x y beta z &optional (job :nn))
| "Dispatches on the optional job argument (member :nn :tn :nt :tt) and
| calls the corresponding generic function, e.g. GEMM-NN!."
| (ecase job
| (:nn (gemm-nn! alpha x y beta z))
| (:nt (gemm-nt! alpha x y beta z))
| (:tn (gemm-tn! alpha x y beta z))
| (:tt (gemm-tt! alpha x y beta z))))
|
|to keep the Matlisp interface.
+--
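The :BEFORE-method suggestion could be sketched similarly. The class and
accessor names below (STANDARD-MATRIX, NROWS, NCOLS) are placeholders,
not necessarily Matlisp's actual ones: instead of a :BEFORE method doing
the dimension check (which adds a step to every effective-method
computation), the check is inlined at the top of the primary method.

 (defmethod gemm-nn! (alpha (x standard-matrix) (y standard-matrix)
                      beta (z standard-matrix))
   ;; Dimension check inlined, avoiding a separate :BEFORE method:
   (unless (and (= (ncols x) (nrows y))
                (= (nrows x) (nrows z))
                (= (ncols y) (ncols z)))
     (error "GEMM-NN!: incompatible matrix dimensions."))
   ;; ... actual Z <- alpha*X*Y + beta*Z computation ...
   z)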
Maybe it is also possible with the help of CL vendors and developers to
reduce the FFI call overhead.
For the more distant future, I dream of a seamless transition between
Matlisp and CL-Matlisp, e.g. along the following lines:
1. Use the CL-implemented version, if there is any, and if the matrices
are small.
2. Alternatively, use an FFI call if the external library is available or
   if the matrices are large.
3. Otherwise, signal an error.
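Such a dispatcher might look roughly as follows (a sketch under
assumptions: the threshold value, the *LAPACK-AVAILABLE-P* flag, and the
package prefixes are all hypothetical):

 (defparameter *ffi-threshold* 100
   "Below this dimension, the pure-CL version is assumed fast enough.")

 (defun m*-dispatch (x y)
   (cond ((< (nrows x) *ffi-threshold*)
          (cl-matlisp:m* x y))      ; 1. small matrices: pure-CL version
         (*lapack-available-p*
          (matlisp:m* x y))         ; 2. large matrices: FFI call to BLAS
         (t
          (error "No suitable backend for this matrix operation."))))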
However, this development is too early now, at least for me. While
implementing the BLAS operations for sparse matrices, there will probably
be several important changes, and for the moment I need code which is as
lightweight as possible.
Yours, Nicolas.