From: Jefferson Provost <jp@cs...>  2002-02-02 01:26:37

Tunc,

Thanks for the help and the code. I'll look at it, see what I can do, and
let you all know. I may post with some questions on optimization, since I
haven't really tried to optimize my code much before.

Re: "covariance normalization": I don't know if there's an official name for
it, but basically it's an operation on a square matrix A to produce a new
matrix B, where B(x,y) = A(x,y)/sqrt(A(x,x)*A(y,y)). If A was a covariance
matrix, B will be the corresponding correlation matrix, with 1's on the
diagonal and all the other values in the range [-1,1]. (I.e. B(x,y) is the
correlation between variable x and variable y.) I have implemented it using
MATLISP, but it's not optimized at all... I see how to optimize it now, I
think, so hopefully it should speed up a lot.

Thanks!

Jeff

On 2/1/02 4:55 PM, "Tunc Simsek" <simsek@...> wrote:

> Hi Ray, Jefferson;
>
> I've played with Allegro 6.0 as follows (I asked it to explain
> the compilation of m.*!):
>
> (defmethod m.*! ((a real-matrix) (b real-matrix))
>   (let* ((nxm (number-of-elements b)))
>     (declare (type fixnum nxm)
>              (optimize (speed 3) (safety 0)))
>     (dotimes (k nxm b)
>       (declare (type fixnum k))
>       (let ((aval (matrix-ref a k))
>             (bval (matrix-ref b k)))
>         (declare (type real-matrix-element-type aval bval)
>                  (:explain :calls :types :boxing))
>         (setf (matrix-ref b k) (* aval bval))))))
>
> and here is what I get:
>
> ;;; Compiling file C:\usr\gigascale\shift\src\matlisp\src\mtimes.lisp
> ; Examining a (possibly unboxed) call to *_2OP with arguments:
> ;   symeval AVAL type (DOUBLE-FLOAT * *)
> ;   symeval BVAL type (DOUBLE-FLOAT * *)
> ;  which returns a value of type (DOUBLE-FLOAT * *)
> ; Generating a DOUBLE-FLOAT box
> ; Examining a call to FUNCALL with arguments:
> ;   call to CDR type KNOWN-FUNCTION
> ;   symeval G16043 type (DOUBLE-FLOAT * *)
> ;   symeval G16044 type #<STANDARD-CLASS REAL-MATRIX>
> ;   symeval G16045 type (INTEGER -536870912 536870911)
> ;  which returns a value of type T
> ; Examining a call to CDR with arguments:
> ;   constant ((EVAL-WHEN-LOADED) SETF-METHOD-LOCATIVE (QUOTE MATRIX-REF) ...) type TRUE-LIST
> ;  which returns a value of type KNOWN-FUNCTION
> ;;; Writing fasl file C:\usr\gigascale\shift\src\matlisp\bin\mtimes.fasl
>
> so my conclusion is that the MATRIX-REF used to access the variables is
> waaaaay more expensive than AREF. Insomuch that it has the overhead of
> a generic function dispatch and also a boxing of the results.
> This will explain the performance figures given by Jefferson, which
> I was able to duplicate (roughly on the same order) on my Windows 2000
> Pentium-something machine.
>
> Now I quickly tried the following change and got a performance increase
> of exactly 10-fold (i.e. from 1.7 seconds per call to .13 seconds),
> and the compiler is still saying that there is boxing. I don't know
> the Allegro flags very well, but it seems that the matlisp sources
> can be modified so that this performance problem disappears
> and there is no need to do m.*! in Fortran. (Disclaimer: I DIDN'T
> TEST IF THE FOLLOWING MODIFICATIONS GIVE CORRECT COMPUTATIONAL
> RESULTS OR ARE CONSISTENT WITH MATLISP CODING STANDARDS.)
>
> (defmethod m.*! ((a real-matrix) (b real-matrix))
>   (let* ((nxm (number-of-elements b))
>          (aa (store a))
>          (bb (store b)))
>     (declare (type fixnum nxm)
>              (type (real-matrix-store-type (*)) aa bb)
>              (optimize (speed 3) (safety 0)))
>     (dotimes (k nxm b)
>       (declare (type fixnum k))
>       (let ((aval (aref aa k))
>             (bval (aref bb k)))
>         (declare (type real-matrix-element-type aval bval)
>                  (:explain :calls :types :boxing))
>         (setf (aref bb k) (* aval bval))))))
>
> Good luck, and keep this list posted on your results.
> I think a tested version of these changes should be checked in
> to the repository.
>
>  Tunc
>
> ps. I don't know your terminology on covariance normalization,
> but it seems to me that it should be possible using matlisp functions.
>
> Jefferson Provost wrote:
>>
>> On 2/1/02 11:34 AM, "Raymond Toy" <toy@...> wrote:
>>
>>>>>>>> "Jefferson" == Jefferson Provost <jp@...> writes:
>>>
>>> You are right. A peek at the generated code on Solaris seems to show
>>> that we are calling out to generic functions way too often and could
>>> be vastly optimized.
>>>
>>> I know with CMUCL, a Lisp version of m+ was almost as fast (< 5%
>>> slower?) than Fortran, so m.* should be as fast, even in Lisp.
>>>
>>> I'll look into it soon.
>>
>> Thanks, though I'd be happy to just write the thing in Fortran myself, but
>> I'm totally unfamiliar with the foreign function interfaces for Allegro and
>> CMUCL.
>>
>> Actually, I have another routine that I use a lot which I was hoping might
>> be in MATLISP, but which I couldn't find. I've had to write it myself (in
>> Lisp) and it's slower than I'd like it to be. It's basically the
>> "covariance normalization", i.e. the normalization that turns a covariance
>> matrix into a correlation-coefficients matrix.
>>
>> J.
>>
>> _______________________________________________
>> Matlisp-users mailing list
>> Matlisp-users@...
>> https://lists.sourceforge.net/lists/listinfo/matlisp-users
>
> _______________________________________________
> Matlisp-users mailing list
> Matlisp-users@...
> https://lists.sourceforge.net/lists/listinfo/matlisp-users
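The "covariance normalization" Jefferson describes can be sketched in a few
lines. This is shown in Python rather than Lisp, purely as an illustration of
the formula B(x,y) = A(x,y)/sqrt(A(x,x)*A(y,y)); the function name is our own,
not anything in MATLISP:

```python
import math

def cov_to_corr(a):
    """Normalize a square covariance matrix (list of lists) into the
    corresponding correlation matrix:
    b[i][j] = a[i][j] / sqrt(a[i][i] * a[j][j])."""
    n = len(a)
    d = [math.sqrt(a[i][i]) for i in range(n)]  # standard deviations
    return [[a[i][j] / (d[i] * d[j]) for j in range(n)] for i in range(n)]

# Example: variances 4 and 9, covariance 3
cov = [[4.0, 3.0], [3.0, 9.0]]
corr = cov_to_corr(cov)
# Diagonal entries are 1.0; off-diagonal is 3 / (2 * 3) = 0.5
```

As Jefferson notes, the result has 1's on the diagonal and all other entries
in [-1,1], provided the diagonal of the input is strictly positive.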
From: Tunc Simsek <simsek@ee...>  2002-02-01 22:56:01

Hi Ray, Jefferson;

I've played with Allegro 6.0 as follows (I asked it to explain
the compilation of m.*!):

(defmethod m.*! ((a real-matrix) (b real-matrix))
  (let* ((nxm (number-of-elements b)))
    (declare (type fixnum nxm)
             (optimize (speed 3) (safety 0)))
    (dotimes (k nxm b)
      (declare (type fixnum k))
      (let ((aval (matrix-ref a k))
            (bval (matrix-ref b k)))
        (declare (type real-matrix-element-type aval bval)
                 (:explain :calls :types :boxing))
        (setf (matrix-ref b k) (* aval bval))))))

and here is what I get:

;;; Compiling file C:\usr\gigascale\shift\src\matlisp\src\mtimes.lisp
; Examining a (possibly unboxed) call to *_2OP with arguments:
;   symeval AVAL type (DOUBLE-FLOAT * *)
;   symeval BVAL type (DOUBLE-FLOAT * *)
;  which returns a value of type (DOUBLE-FLOAT * *)
; Generating a DOUBLE-FLOAT box
; Examining a call to FUNCALL with arguments:
;   call to CDR type KNOWN-FUNCTION
;   symeval G16043 type (DOUBLE-FLOAT * *)
;   symeval G16044 type #<STANDARD-CLASS REAL-MATRIX>
;   symeval G16045 type (INTEGER -536870912 536870911)
;  which returns a value of type T
; Examining a call to CDR with arguments:
;   constant ((EVAL-WHEN-LOADED) SETF-METHOD-LOCATIVE (QUOTE MATRIX-REF) ...) type TRUE-LIST
;  which returns a value of type KNOWN-FUNCTION
;;; Writing fasl file C:\usr\gigascale\shift\src\matlisp\bin\mtimes.fasl

so my conclusion is that the MATRIX-REF used to access the variables is
waaaaay more expensive than AREF. Insomuch that it has the overhead of
a generic function dispatch and also a boxing of the results.
This will explain the performance figures given by Jefferson, which
I was able to duplicate (roughly on the same order) on my Windows 2000
Pentium-something machine.

Now I quickly tried the following change and got a performance increase
of exactly 10-fold (i.e. from 1.7 seconds per call to .13 seconds),
and the compiler is still saying that there is boxing. I don't know
the Allegro flags very well, but it seems that the matlisp sources
can be modified so that this performance problem disappears
and there is no need to do m.*! in Fortran. (Disclaimer: I DIDN'T
TEST IF THE FOLLOWING MODIFICATIONS GIVE CORRECT COMPUTATIONAL
RESULTS OR ARE CONSISTENT WITH MATLISP CODING STANDARDS.)

(defmethod m.*! ((a real-matrix) (b real-matrix))
  (let* ((nxm (number-of-elements b))
         (aa (store a))
         (bb (store b)))
    (declare (type fixnum nxm)
             (type (real-matrix-store-type (*)) aa bb)
             (optimize (speed 3) (safety 0)))
    (dotimes (k nxm b)
      (declare (type fixnum k))
      (let ((aval (aref aa k))
            (bval (aref bb k)))
        (declare (type real-matrix-element-type aval bval)
                 (:explain :calls :types :boxing))
        (setf (aref bb k) (* aval bval))))))

Good luck, and keep this list posted on your results.
I think a tested version of these changes should be checked in
to the repository.

 Tunc

ps. I don't know your terminology on covariance normalization,
but it seems to me that it should be possible using matlisp functions.

Jefferson Provost wrote:
>
> On 2/1/02 11:34 AM, "Raymond Toy" <toy@...> wrote:
>
>>>>>>> "Jefferson" == Jefferson Provost <jp@...> writes:
>>
>> You are right. A peek at the generated code on Solaris seems to show
>> that we are calling out to generic functions way too often and could
>> be vastly optimized.
>>
>> I know with CMUCL, a Lisp version of m+ was almost as fast (< 5%
>> slower?) than Fortran, so m.* should be as fast, even in Lisp.
>>
>> I'll look into it soon.
>
> Thanks, though I'd be happy to just write the thing in Fortran myself, but
> I'm totally unfamiliar with the foreign function interfaces for Allegro and
> CMUCL.
>
> Actually, I have another routine that I use a lot which I was hoping might
> be in MATLISP, but which I couldn't find. I've had to write it myself (in
> Lisp) and it's slower than I'd like it to be. It's basically the
> "covariance normalization", i.e. the normalization that turns a covariance
> matrix into a correlation-coefficients matrix.
>
> J.
>
> _______________________________________________
> Matlisp-users mailing list
> Matlisp-users@...
> https://lists.sourceforge.net/lists/listinfo/matlisp-users
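Tunc's change follows a general pattern: fetch the underlying storage once,
outside the loop, then index it directly, instead of paying a dispatch (and
boxing) cost on every element. A rough Python analogue of the same idea (the
Matrix class below is hypothetical, for illustration only, not MATLISP's API):

```python
class Matrix:
    """Toy matrix wrapping a flat store, mimicking Matlisp's layout."""
    def __init__(self, data):
        self.store = list(data)   # flat element storage

    def matrix_ref(self, k):
        # Per-element accessor: incurs a method-call cost on every
        # element, analogous to the generic MATRIX-REF dispatch.
        return self.store[k]

def mtimes_slow(a, b):
    # Goes through the accessor for every element, like the original m.*!.
    for k in range(len(b.store)):
        b.store[k] = a.matrix_ref(k) * b.matrix_ref(k)

def mtimes_fast(a, b):
    # Hoist the stores out of the loop once, like Tunc's
    # (store a) / (store b) change, then index directly (AREF).
    aa, bb = a.store, b.store
    for k in range(len(bb)):
        bb[k] = aa[k] * bb[k]
```

Both versions compute the same in-place elementwise product; the point of the
second is only that the per-element indirection is gone from the inner loop.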
From: Jefferson Provost <jp@cs...>  2002-02-01 18:29:24

On 2/1/02 11:34 AM, "Raymond Toy" <toy@...> wrote:

>>>>>> "Jefferson" == Jefferson Provost <jp@...> writes:
>
> You are right. A peek at the generated code on Solaris seems to show
> that we are calling out to generic functions way too often and could
> be vastly optimized.
>
> I know with CMUCL, a Lisp version of m+ was almost as fast (< 5%
> slower?) than Fortran, so m.* should be as fast, even in Lisp.
>
> I'll look into it soon.

Thanks, though I'd be happy to just write the thing in Fortran myself, but
I'm totally unfamiliar with the foreign function interfaces for Allegro and
CMUCL.

Actually, I have another routine that I use a lot which I was hoping might
be in MATLISP, but which I couldn't find. I've had to write it myself (in
Lisp) and it's slower than I'd like it to be. It's basically the
"covariance normalization", i.e. the normalization that turns a covariance
matrix into a correlation-coefficients matrix.

J.
From: Raymond Toy <toy@rt...>  2002-02-01 17:34:18

>>>>> "Jefferson" == Jefferson Provost <jp@...> writes: Jefferson> On 2/1/02 7:28 AM, "Raymond Toy" <toy@...> wrote: Jefferson> D = 1x500 real matrix >> >> You do realize that a*b is O(500^3), a+b is O(500^2) and c*d is also >> O(500^2). So it's not surprising that a*b takes 157 times longer than >> a+b and 88 times longer than c*d. Jefferson> Yes, I realize that. But I wasn't doing a*b, I was doing a.*b, which is Jefferson> O(500x500), same as a+b, and c*d. Sorry, I missed the dot! You are right. A peek at the generated code on Solaris seems to show that we are calling out to generic functions way too often and could be vastly optimized. I know with CMUCL, a Lisp version of m+ was almost as fast (< 5% slower?) than Fortran, so m.* should be as fast, even in Lisp. I'll look in to it soon. Ray 
From: Jefferson Provost <jp@cs...>  2002-02-01 17:25:07

On 2/1/02 7:28 AM, "Raymond Toy" <toy@...> wrote:

> Jefferson> D = 1x500 real matrix
>
> You do realize that a*b is O(500^3), a+b is O(500^2) and c*d is also
> O(500^2). So it's not surprising that a*b takes 157 times longer than
> a+b and 88 times longer than c*d.

Yes, I realize that. But I wasn't doing a*b, I was doing a.*b, which is
O(500x500), same as a+b, and c*d.

Looking in mtimes.lisp, m* is implemented by calling GEMM, which is
implemented in Fortran, but m.* and m.*! are written in Lisp. It doesn't
surprise me that the routines written in Lisp are much slower. I was just
surprised to find that function written in Lisp. I have no problem writing
it myself, but I wanted to make sure I wasn't missing something first.

J.
From: Raymond Toy <toy@rt...>  2002-02-01 13:28:25

>>>>> "Jefferson" == Jefferson Provost <jp@...> writes: Jefferson> On 1/31/02 5:40 PM, "Tunc Simsek" <simsek@...> wrote: >> what do you mean by native. both allegro and cmucl produce native code. >> looking at mtimes.lisp; it looks like all the declarations are in place >> for m.*! so the only reason that it may be slow is if the optimization >> flags were not set during compilation. I personally never checked that >> they are. It is worth checking this. Jefferson> Sorry, I misspoke: by native I meant to say implemented in fortran. Jefferson> Here are the average times of a few operations. Jefferson> operation  msecs (mean of 100 runs) Jefferson> + Jefferson> (m.*! a b)  1351.6 Jefferson> (m.+! a b)  8.6 Jefferson> (m* c d)  15.2 Jefferson> A,B = 500x500 realmatrices Jefferson> C = 500x1 real matrix Jefferson> D = 1x500 real matrix You do realize that a*b is O(500^3), a+b is O(500^2) and c*d is also O(500^2). So it's not surprising that a*b takes 157 times longer than a+b and 88 times longer than c*d. To speed things up, you may want to get a copy of ATLAS and build matlisp with that if you haven't already. Ray 
From: Jefferson Provost <jp@cs...>  2002-02-01 00:29:35

On 1/31/02 5:40 PM, "Tunc Simsek" <simsek@...> wrote:

> what do you mean by native. both allegro and cmucl produce native code.
> looking at mtimes.lisp; it looks like all the declarations are in place
> for m.*! so the only reason that it may be slow is if the optimization
> flags were not set during compilation. I personally never checked that
> they are. It is worth checking this.

Sorry, I misspoke: by native I meant to say implemented in Fortran. Here are
the average times of a few operations.

operation   | msecs (mean of 100 runs)
------------+-------------------------
(m.*! a b)  | 1351.6
(m.+! a b)  |    8.6
(m* c d)    |   15.2

A,B = 500x500 real matrices
C = 500x1 real matrix
D = 1x500 real matrix

(Machine: 1.2GHz Athlon w/ 512M RAM running Allegro 6.0 on Debian Linux)

Jeff