From: Ernst v. W. <ev...@in...> - 2004-01-26 22:27:43
|
Hello,

Some years ago I downloaded Matlisp-1.0b, which ran fine on ACLv501. With ACLv6, I always get the following error:

; Loading d:\Common Lisp\matlisp1.0b\ACL6\matlisp-1.0b\config.lisp
; Loading d:\Common Lisp\matlisp1.0b\ACL6\matlisp-1.0b\system.dcl
Error: No translation rule for #p"matlisp:;code;packages.fasl"
[condition type: SIMPLE-ERROR]

ACLv501 cannot find a translation rule for #p"matlisp:;code;packages.fasl" either, so it may be best to ask someone with more Matlisp knowledge than I have. I have looked and found Matlisp-1.0b to be the newest version, so I don't think a new download will do the trick.

Who knows why I get this problem and what I should do about it?

Kind regards,
Ernst van Waning |
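[A "No translation rule" error usually means the "matlisp" logical-pathname host was never defined in the running image. A hedged sketch of defining the translations by hand before loading the system — the physical directory below is an assumption taken from the load messages, not a known-good path; adjust it to the actual install location:

```lisp
;; Hypothetical workaround, not part of the Matlisp distribution:
;; define translations for the "matlisp" logical host so that
;; #p"matlisp:;code;packages.fasl" maps onto a real file.
;; The physical root directory here is an assumption.
(setf (logical-pathname-translations "matlisp")
      '(("**;*.*.*"
         "d:/Common Lisp/matlisp1.0b/ACL6/matlisp-1.0b/**/*.*")))

;; Check the mapping before retrying the load:
(translate-logical-pathname #p"matlisp:code;packages.fasl")
```
]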
From: Nicolas N. <Nic...@iw...> - 2003-11-25 09:12:46
|
Raymond Toy <to...@rt...> writes:

> >>>>> "Nicolas" == Nicolas Neuss <Nic...@iw...> writes:
>
> Nicolas> OK. But if GC is done by the same thread, my simple mind
> Nicolas> would think that switching it off means setting one
> Nicolas> global variable to NIL.
>
> Yes, I think that's true. I don't use a multithreaded system, though,
> so I don't know.

Even switching off GC should probably not be necessary if everything is working fine. I guess that GC is triggered when objects want to get heap-allocated, but for these low-level calls no consing should occur. (Admittedly, this will probably make foreign-function interfaces of CL implementations tricky. But it would give us seamless cooperation with the Fortran and C world.)

> Nicolas> bet that I cannot bear too much overhead here. Could be that I will
> Nicolas> have to handle the very small blocks (1x1--3x3) even without any function
> Nicolas> call.
>
> I think even normal BLAS overhead would hurt quite a bit if your
> blocks are this small. Putting 5 args, say, onto the call stack
> probably costs as much as the computations in such a small block.

Yes, you are right here. I don't yet have a perfect solution. But the problem is not that much different for C/C++ and so on, and with the power of Lisp I hope to do at least as well as those. Up to now I have accepted a lot of performance degradation in several places. But I want to announce Femlisp to the scientific computing community next year and therefore cannot do this any longer.

Nicolas. |
From: Raymond T. <to...@rt...> - 2003-11-24 19:02:54
|
>>>>> "Nicolas" == Nicolas Neuss <Nic...@iw...> writes:

Nicolas> OK. But if GC is done by the same thread, my simple mind
Nicolas> would think that switching it off means setting one
Nicolas> global variable to NIL.

Yes, I think that's true. I don't use a multithreaded system, though, so I don't know.

>> A factor of 2 will be very difficult to achieve, since a Lisp function
>> call basically loads up a bunch of pointers and calls the function. We
>> need to compute addresses, do the without-gc/unwind-protect stuff, load
>> up the registers for a foreign call and then call it.

Nicolas> Yes. Here I assume (in the direction of what Duane posted) that the
Nicolas> Lisp compiler also works with addresses and has them readily available.

Yes, we have addresses, but we need to figure out from the Lisp object address where the actual data is. I would think that in a threaded system, locking out GC is even more important, since other threads can start GC even if the current thread wouldn't. But I'll look to see what we can do.

Nicolas> bet that I cannot bear too much overhead here. Could be that I will
Nicolas> have to handle the very small blocks (1x1--3x3) even without any function
Nicolas> call.

I think even normal BLAS overhead would hurt quite a bit if your blocks are this small. Putting 5 args, say, onto the call stack probably costs as much as the computations in such a small block.

Ray |
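[The concern here is that a moving GC may relocate a Lisp vector while the foreign routine holds its raw address. On implementations that support object pinning, the usual pattern looks roughly like the sketch below. This is SBCL-specific, and the Fortran entry-point name `ddot_` plus the by-reference argument convention are assumptions about the linked BLAS, not something this thread confirms:

```lisp
;; SBCL sketch: pin the vectors so GC cannot move them during the call.
;; :copy makes SBCL pass each integer through an on-stack temporary by
;; pointer, matching Fortran's by-reference convention (assumed here).
(sb-alien:define-alien-routine ("ddot_" %ddot) double-float
  (n sb-alien:int :copy)
  (dx (* double-float))
  (incx sb-alien:int :copy)
  (dy (* double-float))
  (incy sb-alien:int :copy))

(defun blas-ddot (x y)
  (declare (type (simple-array double-float (*)) x y))
  (sb-sys:with-pinned-objects (x y)
    (%ddot (length x)
           (sb-alien:sap-alien (sb-sys:vector-sap x) (* double-float)) 1
           (sb-alien:sap-alien (sb-sys:vector-sap y) (* double-float)) 1)))
```

The pinning form is exactly the "without-gc" cost being discussed in this thread: cheap, but not free on very small blocks.]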
From: Nicolas N. <Nic...@iw...> - 2003-11-24 18:20:39
|
si...@EE... writes:

> Here are the numbers on my ACL6.1/WinXP system:

These are the culprits:

> DDOT-short: 89.36 MFLOPS
> BLAS-DDOT-short: 34.01 MFLOPS
> DAXPY-short: 75.32 MFLOPS
> BLAS-DAXPY-short: 31.33 MFLOPS

It means that calling the BLAS code is 2-3 times slower than using the Lisp functions. MFLOPS = 10^6 FLOP/second. (Maybe I should have written it as MFLOP/s?) Interestingly, this means that ACL has essentially the same problem as CMUCL.

> and for reference here were the original figures you posted:
>
> DDOT-long: 271.15 MFLOPS
> DDOT-short: 679.58 MFLOPS
> DAXPY-long: 143.55 MFLOPS
> DAXPY-short: 488.06 MFLOPS
>
> BLAS-DDOT-long: 267.10 MFLOPS
> BLAS-DDOT-short: 63.31 MFLOPS
> BLAS-DAXPY-long: 149.13 MFLOPS
> BLAS-DAXPY-short: 61.01 MFLOPS

Thinking about a new computer? :-) But I admit that my personal laptop is not much faster than yours...

Nicolas. |
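[For reference, the figures traded in this thread count two floating-point operations (one multiply, one add) per vector element of a DDOT or DAXPY. A minimal sketch of that rate computation — the function name is illustrative, not from mflop-new.lisp:

```lisp
;; DDOT and DAXPY both perform 2*N flops per call over N elements,
;; so the rate is (2 * N * repetitions) / (elapsed-seconds * 10^6).
(defun mflops (n reps seconds)
  "Millions of floating-point operations per second."
  (/ (* 2 n reps) seconds 1d6))

;; E.g. a 256-element DDOT repeated 100000 times in 0.8 s
;; comes out to roughly 64 MFLOPS.
```
]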
From: Nicolas N. <Nic...@iw...> - 2003-11-24 18:12:05
|
Raymond Toy <to...@rt...> writes:

> It's not multithreading, per se. It's because we can't have GC
> suddenly move the vectors before doing the foreign call, otherwise the
> foreign function will be reading and writing to some random place in
> memory.

OK. But if GC is done by the same thread, my simple mind would think that switching it off means setting one global variable to NIL.

> A factor of 2 will be very difficult to achieve, since a Lisp function
> call basically loads up a bunch of pointers and calls the function. We
> need to compute addresses, do the without-gc/unwind-protect stuff, load
> up the registers for a foreign call and then call it.

Yes. Here I assume (in the direction of what Duane posted) that the Lisp compiler also works with addresses and has them readily available.

> Nicolas> and choosing Matlisp data for the blocks would be a possibility. But the
> Nicolas> blocks can be small, therefore I cannot make compromises when operating on
> Nicolas> those blocks.
>
> I assume you've profiled it so that the small blocks really are the
> bottleneck?

I'm still more or less in the design phase. I now have a compact row-ordered scheme (which is as fast as the C version) and want to make it more general without destroying too much performance. It is a very safe bet that I cannot bear too much overhead here. Could be that I will have to handle the very small blocks (1x1--3x3) even without any function call.

Nicolas. |
From: <si...@EE...> - 2003-11-24 17:51:08
|
Here are the numbers on my ACL6.1/WinXP system:

DDOT-long: 26.38 MFLOPS
DDOT-short: 89.36 MFLOPS
DAXPY-long: 20.31 MFLOPS
DAXPY-short: 75.32 MFLOPS

BLAS-DDOT-long: 74.48 MFLOPS
BLAS-DDOT-short: 34.01 MFLOPS
BLAS-DAXPY-long: 36.24 MFLOPS
BLAS-DAXPY-short: 31.33 MFLOPS

and for reference here are the original figures you posted:

DDOT-long: 271.15 MFLOPS
DDOT-short: 679.58 MFLOPS
DAXPY-long: 143.55 MFLOPS
DAXPY-short: 488.06 MFLOPS

BLAS-DDOT-long: 267.10 MFLOPS
BLAS-DDOT-short: 63.31 MFLOPS
BLAS-DAXPY-long: 149.13 MFLOPS
BLAS-DAXPY-short: 61.01 MFLOPS

I still don't understand the problem, since BLAS seems to be doing better than native Lisp in both my test and your test.

Tunc

----- Original Message -----
From: Raymond Toy <to...@rt...>
Date: Monday, November 24, 2003 8:35 am
Subject: Re: [Matlisp-users] Calling Fortran routines on short arrays

> >>>>> "Nicolas" == Nicolas Neuss <Nicolas.Neuss@iwr.uni-heidelberg.de> writes:
>
> Nicolas> Raymond Toy <to...@rt...> writes:
> >> I'll try to look into this. There's probably some improvement to be
> >> had, but I doubt we can improve it enough for you. I think the
> >> overhead comes from computing the necessary addresses, and also having
> >> to turn off GC during the computation. IIRC, this involves an
> >> unwind-protect which does add quite a bit of code.
>
> Nicolas> Yes, you are right. I see this now. If switching off multithreading is
> Nicolas> expensive, there is a problem here. I don't know enough of these things to
> Nicolas> help you here.
>
> It's not multithreading, per se. It's because we can't have GC
> suddenly move the vectors before doing the foreign call, otherwise the
> foreign function will be reading and writing to some random place in
> memory.
>
> >> Note that I also noticed long ago that a simple vector add in Lisp was
> >> at least as fast as calling BLAS.
>
> Nicolas> Probably this was before I started using Matlisp.
>
> Yeah, probably before matlisp became matlisp.
>
> Nicolas> I will have to do this at least for a small part of the routines, if the
> Nicolas> foreign call cannot be achieved with really little overhead (say two times
> Nicolas> a Lisp function call). I want to implement flexible sparse block matrices,
>
> A factor of 2 will be very difficult to achieve, since a Lisp function
> call basically loads up a bunch of pointers and calls the function.
> We need to compute addresses, do the without-gc/unwind-protect stuff,
> load up the registers for a foreign call and then call it.
>
> Nicolas> and choosing Matlisp data for the blocks would be a possibility. But the
> Nicolas> blocks can be small, therefore I cannot make compromises when operating on
> Nicolas> those blocks.
>
> I assume you've profiled it so that the small blocks really are the
> bottleneck?
>
> Nicolas> P.S.: BTW, how does ACL perform in this respect? Just today I read Duane
>
> Don't know since I don't have a version of ACL that can run matlisp.
>
> Ray |
From: Raymond T. <to...@rt...> - 2003-11-24 16:35:49
|
>>>>> "Nicolas" == Nicolas Neuss <Nic...@iw...> writes:

Nicolas> Raymond Toy <to...@rt...> writes:
>> I'll try to look into this. There's probably some improvement to be
>> had, but I doubt we can improve it enough for you. I think the
>> overhead comes from computing the necessary addresses, and also having
>> to turn off GC during the computation. IIRC, this involves an
>> unwind-protect which does add quite a bit of code.

Nicolas> Yes, you are right. I see this now. If switching off multithreading is
Nicolas> expensive, there is a problem here. I don't know enough of these things to
Nicolas> help you here.

It's not multithreading, per se. It's because we can't have GC suddenly move the vectors before doing the foreign call; otherwise the foreign function will be reading and writing to some random place in memory.

>> Note that I also noticed long ago that a simple vector add in Lisp was
>> at least as fast as calling BLAS.

Nicolas> Probably this was before I started using Matlisp.

Yeah, probably before matlisp became matlisp.

Nicolas> I will have to do this at least for a small part of the routines, if the
Nicolas> foreign call cannot be achieved with really little overhead (say two times
Nicolas> a Lisp function call). I want to implement flexible sparse block matrices,

A factor of 2 will be very difficult to achieve, since a Lisp function call basically loads up a bunch of pointers and calls the function. We need to compute addresses, do the without-gc/unwind-protect stuff, load up the registers for a foreign call and then call it.

Nicolas> and choosing Matlisp data for the blocks would be a possibility. But the
Nicolas> blocks can be small, therefore I cannot make compromises when operating on
Nicolas> those blocks.

I assume you've profiled it so that the small blocks really are the bottleneck?

Nicolas> P.S.: BTW, how does ACL perform in this respect? Just today I read Duane

Don't know, since I don't have a version of ACL that can run matlisp.

Ray |
From: Nicolas N. <Nic...@iw...> - 2003-11-24 16:13:03
|
Raymond Toy <to...@rt...> writes:

> >>>>> "Nicolas" == Nicolas Neuss <Nic...@iw...> writes:
>
> Nicolas> From the numbers it is obvious that the call is even much more expensive
> Nicolas> than a daxpy for 256 double-floats. How come?
>
> >> as the daxpy for the case +N-short+=256, while calling Lisp functions is
> >> much faster. Is it possible to cut down these costs?
>
> I'll try to look into this. There's probably some improvement to be
> had, but I doubt we can improve it enough for you. I think the
> overhead comes from computing the necessary addresses, and also having
> to turn off GC during the computation. IIRC, this involves an
> unwind-protect which does add quite a bit of code.

Yes, you are right. I see this now. If switching off multithreading is expensive, there is a problem here. I don't know enough of these things to help you here.

> Note that I also noticed long ago that a simple vector add in Lisp was
> at least as fast as calling BLAS.

Probably this was before I started using Matlisp.

> However, having everything go through FFI to BLAS at least allows us to
> take advantage of any special libraries that might be available.
>
> I, however, am not opposed to implementing the BLAS in Lisp. Other
> LAPACK routines will still use the original BLAS, and Lisp code can
> get the faster versions. Will need thinking, design, and
> experimentation.

I will have to do this at least for a small part of the routines, if the foreign call cannot be achieved with really little overhead (say two times a Lisp function call). I want to implement flexible sparse block matrices, and choosing Matlisp data for the blocks would be a possibility. But the blocks can be small, therefore I cannot make compromises when operating on those blocks.

Thanks, Nicolas.

P.S.: BTW, how does ACL perform in this respect? Just today I read Duane writing about interoperability of ACL with C and C++. If the overhead we are suffering from is necessary in general, this might be quite a problem for some applications. |
From: Nicolas N. <Nic...@iw...> - 2003-11-24 16:11:49
|
-- Dr. Nicolas Neuss IWR, INF 368, D-69120 Heidelberg Email: Nic...@IW... WWW: <http://www.iwr.uni-heidelberg.de/~Nicolas.Neuss> |
From: Raymond T. <to...@rt...> - 2003-11-24 15:43:19
|
>>>>> "Nicolas" == Nicolas Neuss <Nic...@iw...> writes:

Nicolas> From the numbers it is obvious that the call is even much more expensive
Nicolas> than a daxpy for 256 double-floats. How come?

>> as the daxpy for the case +N-short+=256, while calling Lisp functions is
>> much faster. Is it possible to cut down these costs?
>>
>> Thanks, Nicolas.

I'll try to look into this. There's probably some improvement to be had, but I doubt we can improve it enough for you. I think the overhead comes from computing the necessary addresses, and also having to turn off GC during the computation. IIRC, this involves an unwind-protect which does add quite a bit of code.

Note that I also noticed long ago that a simple vector add in Lisp was at least as fast as calling BLAS.

However, having everything go through FFI to BLAS at least allows us to take advantage of any special libraries that might be available.

I, however, am not opposed to implementing the BLAS in Lisp. Other LAPACK routines will still use the original BLAS, and Lisp code can get the faster versions. Will need thinking, design, and experimentation.

Ray |
From: Nicolas N. <Nic...@iw...> - 2003-11-22 16:58:18
|
Hello,

Rereading my mail I see that I expressed myself badly again. Corrections:

> Hello,
>
> I am trying to find out if it is possible to call Fortran BLAS routines

Of course it is possible. But is it possible without such a tremendous performance loss?

> also on short vectors. I am running into the following problem:
>
> I have put a test program on
>
> http://cox.iwr.uni-heidelberg.de/~neuss/misc/mflop-new.lisp
>
> When I test the Lisp ddot/daxpy code I get:
>
> DDOT-long: 271.15 MFLOPS
> DDOT-short: 679.58 MFLOPS
> DAXPY-long: 143.55 MFLOPS
> DAXPY-short: 488.06 MFLOPS
>
> But when I call the Matlisp routines (not via CLOS!), I get
>
> BLAS-DDOT-long: 267.10 MFLOPS
> BLAS-DDOT-short: 63.31 MFLOPS
> BLAS-DAXPY-long: 149.13 MFLOPS
> BLAS-DAXPY-short: 61.01 MFLOPS
>
> The reason is probably that the external function call is almost as costly

From the numbers it is obvious that the call is even much more expensive than a daxpy for 256 double-floats. How come?

> as the daxpy for the case +N-short+=256, while calling Lisp functions is
> much faster. Is it possible to cut down these costs?
>
> Thanks, Nicolas. |
From: Nicolas N. <Nic...@iw...> - 2003-11-22 15:08:02
|
Hello,

I am trying to find out if it is possible to call Fortran BLAS routines also on short vectors. I am running into the following problem:

I have put a test program on

http://cox.iwr.uni-heidelberg.de/~neuss/misc/mflop-new.lisp

When I test the Lisp ddot/daxpy code I get:

DDOT-long: 271.15 MFLOPS
DDOT-short: 679.58 MFLOPS
DAXPY-long: 143.55 MFLOPS
DAXPY-short: 488.06 MFLOPS

But when I call the Matlisp routines (not via CLOS!), I get

BLAS-DDOT-long: 267.10 MFLOPS
BLAS-DDOT-short: 63.31 MFLOPS
BLAS-DAXPY-long: 149.13 MFLOPS
BLAS-DAXPY-short: 61.01 MFLOPS

The reason is probably that the external function call is almost as costly as the daxpy for the case +N-short+=256, while calling Lisp functions is much faster. Is it possible to cut down these costs?

Thanks, Nicolas. |
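[The Lisp DDOT that outruns the foreign call on short vectors is, in essence, a fully type-declared loop. A hedged sketch of what such a routine typically looks like — this is illustrative, not the actual code in mflop-new.lisp:

```lisp
;; With these declarations CMUCL/SBCL can compile the loop down to
;; unboxed double-float arithmetic with no foreign-call overhead.
(defun lisp-ddot (x y)
  (declare (type (simple-array double-float (*)) x y)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0d0))
    (declare (type double-float sum))
    (dotimes (i (length x) sum)
      (incf sum (* (aref x i) (aref y i))))))
```

On a 256-element vector the whole loop is comparable in cost to the address computation and GC bookkeeping of a single foreign call, which is the asymmetry this thread is about.]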
From: Robbie S. <rd...@me...> - 2003-11-13 22:55:08
|
This is just the ffi interface for SBCL. It should go in the src directory of matlisp. --Robbie |
From: Robbie S. <rd...@me...> - 2003-11-13 22:53:28
|
Hi everyone,

I patched up matlisp to work with SBCL. It should now run on x86 Linux and Mac OS X using SBCL.

Included is a patch file against the CVS sources. Since I changed configure.in, the configure script will have to be regenerated after the patch is applied. Also, I couldn't get diff to include the new files I had to create, so I've sent ffi-sbcl.lisp in a separate email. It needs to go in the src directory.

(Aside: is "cvs diff -c -N > ../patchfile" the best way to make a patch?)

Apologies for posting this to the users group; I am having trouble subscribing to matlisp-devel....

--Robbie |
From: <ma...@ll...> - 2003-10-28 18:16:00
|
> Sorry! No one had said anything, but perhaps I didn't give enough
> time.

I may not have been following the thread carefully enough. I didn't pick up on the proposed :M change.

> We could do what Kent Pitman suggested on c.l.l a while back and
> change the matlisp package to be net.sourceforge.matlisp or
> net.sf.matlisp. But I think matlisp is sufficiently unique that we
> don't have to do that.

It seems to me that :MATLISP is sufficient.

> Would it be acceptable to you if you just added
>
> (rename-package "MATLISP" "MATLISP" "M")

This works fine for me.

tnx
mike |
From: <ma...@ll...> - 2003-10-28 16:08:23
|
Ray,

> Ok, :matrix and :m are no longer nicknames for the matlisp package.

Ouch! The removal of :M breaks 99% of my code. If this is necessary for the rest of the users, I'll make changes to my code (the script to update my code should be pretty simple). Else, I'd just as soon :M stay around.

tnx
mike

**************************************************
Dr Michael A. Koerber Micro$oft Free Zone
MIT/Lincoln Laboratory |
From: Raymond T. <to...@rt...> - 2003-10-28 16:06:55
|
>>>>> "Michael" == mak <ma...@ll...> writes:

Michael> Ray,
>> Ok, :matrix and :m are no longer nicknames for the matlisp package.

Michael> Ouch! The removal of :M breaks 99% of my code. If this is necessary for
Michael> the rest of the users, I'll make changes to my code (the script to update
Michael> my code should be pretty simple). Else, I'd just as soon :M stay around.

Sorry! No one had said anything, but perhaps I didn't give enough time.

I'm not sure what to do. I think it's bad that matlisp defines such a short package nickname, along with the rather generic :matrix nickname.

We could do what Kent Pitman suggested on c.l.l a while back and change the matlisp package to be net.sourceforge.matlisp or net.sf.matlisp. But I think matlisp is sufficiently unique that we don't have to do that.

Would it be acceptable to you if you just added

(rename-package "MATLISP" "MATLISP" "M")

in some appropriate startup file of yours? This will re-establish the M nickname.

Ray |
From: Raymond T. <to...@rt...> - 2003-10-28 13:57:39
|
>>>>> "Michael" == mak <ma...@ll...> writes:

>> This is acceptable to me. However, I think we need to think about
>> adding a script or something for the user to run to do the necessary
>> magic to get him into matlisp. We may also want to create a
>> matlisp.core, which I don't think currently happens. But I tend not
>> to save cores and instead load up the individual fasls. But matlisp
>> is big so loading these is a bit slow....

Michael> Saving the Matlisp core (as well as the sample Matlisp startup script)
Michael> is, IMHO, required at least as a user-selectable option. If changes
Michael> are to be made to the make file, having a "make install" would be
Michael> nice.

FWIW, I have this working in my personal tree. "make" will just make the matlisp.core. I think it should port to ACL, but I don't have ACL to test it, so someone else will have to do that.

I don't yet have install done, but that should be straightforward to do. I'll probably check this in some time soon.

Ray |
From: Raymond T. <to...@rt...> - 2003-10-28 13:57:16
|
>>>>> "Raymond" == Raymond Toy <to...@rt...> writes:

>>>>> "rif" == rif <ri...@mi...> writes:

rif> 1. I support Nicolas' suggestion that it would be preferable if REAL
rif> were not exported from matlisp. (Actually, I also wish that
rif> :matrix wasn't a nickname for matlisp, because I need to also have
rif> my own matrix stuff (for sparse matrices, for example), but that's
rif> another story.)

Raymond> Yeah, :matrix is probably not a good nickname. I'll change this. I
Raymond> hope it doesn't break peoples' code.

Ok, :matrix and :m are no longer nicknames for the matlisp package.

Ray |
From: Raymond T. <to...@rt...> - 2003-10-17 14:48:55
|
>>>>> "rif" == rif <ri...@mi...> writes:

rif> 1. I support Nicolas' suggestion that it would be preferable if REAL
rif> were not exported from matlisp. (Actually, I also wish that
rif> :matrix wasn't a nickname for matlisp, because I need to also have
rif> my own matrix stuff (for sparse matrices, for example), but that's
rif> another story.)

Yeah, :matrix is probably not a good nickname. I'll change this. I hope it doesn't break peoples' code.

rif> 2. Is there a reason that man (and therefore help) iterates over all
rif> symbols in a package, rather than just external ones? I find this
rif> confusing, because I'll type (help matlisp) and see a function in
rif> the list, but that function's not exported.

Good question. Don't know. I think it should be external ones. You can find internal ones via apropos, so no loss there.

Ray |
From: rif <ri...@MI...> - 2003-10-17 14:38:18
|
1. I support Nicolas' suggestion that it would be preferable if REAL were not exported from matlisp. (Actually, I also wish that :matrix wasn't a nickname for matlisp, because I need to also have my own matrix stuff (for sparse matrices, for example), but that's another story.)

2. Is there a reason that man (and therefore help) iterates over all symbols in a package, rather than just external ones? I find this confusing, because I'll type (help matlisp) and see a function in the list, but that function's not exported.

Maybe I should be sending this to matlisp-devel instead. I'm not sure.

Cheers,

rif |
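[On rif's second point: Common Lisp distinguishes the two kinds of iteration directly, so restricting `man`/`help` to the exported API is a small change. A sketch — the function name is illustrative, not Matlisp's:

```lisp
;; DO-SYMBOLS walks every symbol accessible in a package, internal or
;; not; DO-EXTERNAL-SYMBOLS walks only the exported ones, which is
;; what a HELP listing presumably wants.
(defun external-symbols (package)
  "Return the package's exported symbols, sorted by name."
  (let ((result '()))
    (do-external-symbols (s package)
      (push s result))
    (sort result #'string< :key #'symbol-name)))
```
]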
From: <ma...@ll...> - 2003-10-15 19:30:27
|
> I think M+ and friends are fine because they're methods that take 2
> args. If matlisp:+ were used instead, it would break peoples'
> expectation of being able to give multiple arguments.
>
> Adding such a macro to matlisp would be fine with me, but then we'd
> get more complaints about this in addition to REAL. :-)

I agree that M+ etc. are fine, although I frequently need a multiple-argument and multiple-type version, e.g.,

(+ MATLISP-REAL-MATRIX ARRAY-COMPLEX-2 SCALAR ARRAY-SINGLE-FLOAT-2)

This might be a future enhancement for MATLISP. I currently define H+, H-, H*, H/, etc. as Hadamard variants of each operation and keep adding methods as I need them for my work.

mike |
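[The n-ary, mixed-type arithmetic Michael describes can be layered over two-argument methods with a fold. A hedged sketch — H+ follows his naming convention, while BINARY-H+ and the array method are illustrative, not Matlisp API:

```lisp
;; Binary generic function carrying the type-dispatched methods.
(defgeneric binary-h+ (a b)
  (:documentation "Elementwise (Hadamard) sum of two operands."))

(defmethod binary-h+ ((a number) (b number))
  (+ a b))

(defmethod binary-h+ ((a array) (b array))
  ;; Assumes conforming dimensions; the result takes A's element type.
  (let ((result (make-array (array-dimensions a)
                            :element-type (array-element-type a))))
    (dotimes (i (array-total-size a) result)
      (setf (row-major-aref result i)
            (+ (row-major-aref a i) (row-major-aref b i))))))

(defun h+ (&rest args)
  "N-ary Hadamard sum, folding left over the binary methods."
  (reduce #'binary-h+ args))
```

New operand combinations (scalar + matrix, complex array + real array, ...) then only require adding a BINARY-H+ method; the n-ary H+ never changes.]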
From: Raymond T. <to...@rt...> - 2003-10-15 18:49:33
|
>>>>> "Marco" == Marco Antoniotti <ma...@cs...> writes:

Marco> I'd go even further than that. I think that M+, M* etc etc have no
Marco> business in being exported/defined the way they are. MATLISP:+,
Marco> MATLISP:* etc etc are what you want.

I think M+ and friends are fine because they're methods that take 2 args. If matlisp:+ were used instead, it would break peoples' expectation of being able to give multiple arguments.

Adding such a macro to matlisp would be fine with me, but then we'd get more complaints about this in addition to REAL. :-)

Ray |
From: Marco A. <ma...@cs...> - 2003-10-15 18:01:57
|
On Wednesday, Oct 15, 2003, at 13:39 America/New_York, Marco Antoniotti wrote:

> On Wednesday, Oct 15, 2003, at 02:56 America/New_York, Nicolas Neuss wrote:
>
>> Marco Antoniotti <ma...@cs...> writes:
>>
>>>> Users, hear!
>>>> I vote for MREALPART.
>>>
>>> I vote for shadowing the symbols and use CL:REAL and CL:REALPART when
>>> needed.
>>>
>>> IMHO that is TRTTD.
>>
>> I do use none of CL:REAL, CL:REALPART or MATLISP:REAL. The problem is that
>> my (and your) packages cannot use both COMMON-LISP and MATLISP without
>> taking care of this conflict.
>
> The conflict arises only when you :USE both CL and MATLISP. That is,
> AFAIU exactly what is supposed to happen.
>
> Again, IMHO you need to redesign your package carefully in order to
> achieve the desired overloading effect.
>
> I'd go even further than that. I think that M+, M* etc etc have no
> business in being exported/defined the way they are. MATLISP:+,
> MATLISP:* etc etc are what you want.
>
> If you need a package with the characteristics you want you do
>
> (defpackage "FOO" (:use "MATLISP" "CL")
>   (:shadow "REALPART")
>   (:export "REALPART"))
>
> (in-package "FOO")
>
> (defmethod REAL ((x cl:complex)) (cl:realpart x))
>
> (defmethod REAL ((x matlisp:matrix)) (matlisp:realpart x))

That should have been

(defmethod REALPART ((x cl:complex)) (cl:realpart x))

of course.

Marco

--
Marco Antoniotti NYU Courant Bioinformatics Group
tel. +1 - 212 - 998 3488 715 Broadway 10th FL
fax. +1 - 212 - 998 3484 New York, NY, 10003, U.S.A. |
From: Marco A. <ma...@cs...> - 2003-10-15 17:39:54
|
On Wednesday, Oct 15, 2003, at 02:56 America/New_York, Nicolas Neuss wrote:

> Marco Antoniotti <ma...@cs...> writes:
>
>>> Users, hear!
>>> I vote for MREALPART.
>>
>> I vote for shadowing the symbols and use CL:REAL and CL:REALPART when
>> needed.
>>
>> IMHO that is TRTTD.
>>
>> Cheers
>
> I do use none of CL:REAL, CL:REALPART or MATLISP:REAL. The problem is that
> my (and your) packages cannot use both COMMON-LISP and MATLISP without
> taking care of this conflict.

The conflict arises only when you :USE both CL and MATLISP. That is, AFAIU, exactly what is supposed to happen.

Again, IMHO you need to redesign your package carefully in order to achieve the desired overloading effect.

I'd go even further than that. I think that M+, M* etc etc have no business in being exported/defined the way they are. MATLISP:+, MATLISP:* etc etc are what you want.

If you need a package with the characteristics you want you do

(defpackage "FOO" (:use "MATLISP" "CL")
  (:shadow "REALPART")
  (:export "REALPART"))

(in-package "FOO")

(defmethod REAL ((x cl:complex)) (cl:realpart x))

(defmethod REAL ((x matlisp:matrix)) (matlisp:realpart x))

Cheers

Marco

--
Marco Antoniotti NYU Courant Bioinformatics Group
tel. +1 - 212 - 998 3488 715 Broadway 10th FL
fax. +1 - 212 - 998 3484 New York, NY, 10003, U.S.A. |