From: <mak@ll...> - 2002-11-29 17:26:53

All,

1. Below are proposed changes to add the pseudoinverse, PINV, to matlisp.

2. I've tested this on some random real and complex matrices and compared
   the results to OCTAVE, including rank-deficient matrices (they match).

3. My intention is to replace PINV in the next few weeks with direct
   calls to DGELSD.F/ZGELSD.F.  I intend _no_ change to the calling or
   return arguments.

4. I'd specifically like feedback on the inclusion of the LEAST-SQUARES
   option, MTHD.  I almost removed this block of code from PINV several
   times as being too trivial and adding otherwise unnecessary code.
   It's in the code below only because I posted a similar routine last
   week.

5. Ray, Tunc: If there are no objections, or agreed-to changes I need to
   make, by the end of the week of 12/2/02, could you enter the changes
   below into CVS?

tnx

mike

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; CHANGES TO SYSTEM.DCL
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff -c ../matlisp/system.dcl system.dcl
*** ../matlisp/system.dcl	Tue Oct  8 19:30:39 2002
--- system.dcl	Fri Nov 29 11:39:53 2002
***************
*** 218,224 ****
  		 "mdivide"
  		 "msqrt"
  		 #-:mswindows "fft"
! 		 "geqr"))
    (:module "special-functions"
  	   :source-pathname "matlisp:src"
  	   :binary-pathname ""
--- 218,225 ----
  		 "mdivide"
  		 "msqrt"
  		 #-:mswindows "fft"
! 		 "geqr"
! 		 "pinv"))
    (:module "special-functions"
  	   :source-pathname "matlisp:src"
  	   :binary-pathname ""

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; CHANGES TO PACKAGES.LISP
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff -c ../matlisp/packages.lisp packages.lisp
*** ../matlisp/packages.lisp	Tue Oct  8 19:30:39 2002
--- packages.lisp	Fri Nov 29 11:36:07 2002
***************
*** 285,290 ****
--- 285,291 ----
    "NUMBER-OF-ROWS"
    "ONES"
    "PRINT-ELEMENT"
+   "PINV"
    "QR"
    "QR!"
    "GEQR!"
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; NEW FILE FOR matlisp/src/pinv.lisp
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(in-package "MATLISP")
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun pinv (a &optional (b nil) (tol nil) (mthd :svd))
  "
 SYNTAX
 ======
 (PINV A [B TOL MTHD])

 INPUT
 -----
 A      A Matlisp matrix, M x N.
 B      Optional.  Default NIL.  If provided, return the solution to
        A X = B.
 TOL    Optional.  Default NIL.  Sets a limit on the range of singular
        values used in the computation of the pseudoinverse.
 MTHD   Optional.  Default :SVD.  Indicates the type of computation to
        use for the pseudoinverse.
          :LS   Force the least-squares interpretation
          :SVD  Force the SVD-based computation (default)

 OUTPUT (VALUES PINV RANK SIGMA)
 -------------------------------
 PINV   Pseudoinverse or solution to A X = B, a Matlisp matrix.
 RANK   The effective rank of A, r = rank(A).  NIL if MTHD is :LS.
 SIGMA  The list of length min(M,N) of the singular values of A.
        NIL if MTHD is :LS.

 DESCRIPTION
 ===========
 Given the equation A X = B, solve for the value of A^+ which will
 solve X = A^+ B.  A is assumed M x N.  X is N x L and B is M x L.

 The solution is computed via a singular value decomposition (SVD).
 Let A = U S V^H and define

    S^+ = diag(1/s1, 1/s2, ... 1/sr, 0 ... 0)  where r is rank(A).

 Then the pinv, A^+ = V S^+ U^H.

 The computation of r = rank(A) is controlled by TOL.  If TOL is NIL,
 or TOL < 0, or TOL > 1, TOL is replaced with DOUBLE-FLOAT-EPSILON.
 All singular values <= TOL*S(1) are considered to be equal to zero.
 (Note: singular values are in descending order, S(1) >= S(2) ... S(r) > 0.)

 If A is M x M, and full rank, (M:M/ A B) can be used.

 Use MTHD :SVD when rank(A) = r < N.  For r <= M < N, this is the
 underconstrained case, for which A^+ is the minimum-norm solution.

 The MTHD :LS can be used when M > N and rank(A) = N.  This is the
 canonical least-squares problem.  In this case A^+ = inv(A^H A) A^H.
 NOTE: NO RANK CHECKS OR SUCH ARE MADE HERE.
 IF A^H A IS RANK DEFICIENT THIS METHOD MAY FAIL.  USER BEWARE."
  ;; Reset the TOL value as required.  This odd screening
  ;; is for future compatibility with the ?GELSD.F routines.
  (if (or (null tol) (>= tol 1) (< tol 0))
      (setf tol double-float-epsilon))
  (cond
    ;; Force the least-squares interpretation.
    ((eq mthd :ls)
     (let* ((ah (ctranspose a))
            (pinv (m/ (m* ah a) ah)))
       (values (if b (m* pinv b) pinv) nil nil)))
    ;; This is the major routine here, using the SVD method.
    ((eq mthd :svd)
     (multiple-value-bind (up sp vp-transpose info)
         (svd a :s)                     ; use economy SVD
       (if (null info)
           (error "Computation of SVD failed."))
       ;; This form sets all singular values above the rank
       ;; of A to zero.
       (let ((pinv nil)
             (sp-values (m::store (diag sp)))
             (rank
              ;; This form computes the reciprocal of the singular
              ;; values up to and including the rank, r, and returns
              ;; the RANK.
              (do ((limit (* tol (matrix-ref sp 0 0)))
                   (max-rank (min (number-of-rows sp) (number-of-cols sp)))
                   (n 0 (incf n)))
                  ;; Terminate when all singular values are used,
                  ;; or fall below the limit.
                  ((or (= n max-rank)
                       (<= (matrix-ref sp n n) limit))
                   n)
                (setf (matrix-ref sp n n) (/ (matrix-ref sp n n))))))
         ;; Set the values of the reciprocal singular values to zero
         ;; above the RANK of A.
         (do ((n rank (incf n))
              (max-rank (min (number-of-rows sp) (number-of-cols sp))))
             ((= n max-rank))
           (setf (matrix-ref sp n n) 0d0))
         ;; Compute the PINV or PINV * B.
         (setf pinv (m* (ctranspose vp-transpose)
                        (m* sp (ctranspose up))))
         (values (if b (m* pinv b) pinv) rank sp-values))))
    (t (error "Invalid Method (~A) passed to PINV." mthd))))
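The rank/tolerance logic at the heart of the SVD-based PINV above — invert the singular values down to the first one at or below TOL*S(1), and zero the rest — is language-independent. Here is a small illustrative sketch in Python; the function name and the default tolerance (double-float epsilon) are my own choices, not Matlisp code:

```python
def pinv_singular_values(sigma, tol=2.220446049250313e-16):
    """Given singular values sigma (in descending order), return
    (reciprocals, rank).  Mirrors the two DO loops in the Lisp PINV:
    values above the cutoff tol * sigma[0] are inverted; the rest are
    treated as zero, and the count of inverted values is the rank."""
    if not sigma:
        return [], 0
    limit = tol * sigma[0]
    out, rank = [], 0
    for s in sigma:
        if s > limit:
            out.append(1.0 / s)   # above the cutoff: contributes to A^+
            rank += 1
        else:
            out.append(0.0)       # numerically zero: dropped from A^+
    return out, rank

# A rank-deficient example: the third singular value is numerically zero,
# so the effective rank is 2 and its reciprocal is replaced by 0.
recips, rank = pinv_singular_values([4.0, 2.0, 1e-18])
print(rank, recips)
```

This is what keeps the pseudoinverse stable for rank-deficient matrices: a tiny singular value would otherwise turn into an enormous reciprocal and swamp the result.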
From: Michael A. Koerber <mak@ll...> - 2002-11-26 13:37:35

Tunc,

> Hi Mike.  This sounds good.  Will the function have to
> work for A,B complex (or A real, B complex and so forth).

Yes...all of those cases will be covered.

mike

**************************************************
Dr Michael A. Koerber        Micro$oft Free Zone
MIT/Lincoln Laboratory
From: Tunc Simsek <simsek@ee...> - 2002-11-26 02:14:47

Hi Mike.  This sounds good.  Will the function have to
work for A,B complex (or A real, B complex and so forth).

Thanks,
Tunc

mak%koerber2@... wrote:
>
> Tunc,
>
> 1. Over the weekend I spent some more time on the PINV question.  In
> response to your question about DGELSD, the more I thought about it
> the more I liked it.  Since the routine accepts multiple RHS, I
> suppose it can be provided an identity matrix.  The result should then
> be the PINV matrix.  If I write something like this
>
>     (DEFUN PINV (A &OPTIONAL (B NIL) (...))
>       ...
>       (LET ((B (IF (NULL B) IDENTITY-FOR-A B)))
>         ...
>         (CALLING-GELSD ...) ...))
>
> then the user can say (PINV A) to get the pseudoinverse,
> or they can say (PINV A B) to solve the multiple-RHS problem.
>
> 2. The use of ?GELSD should avoid (my) concerns over differences in
> D1MACH values in a given CL implementation and the associated FORTRAN.
> (BTW, I found that the machine values are controlled by DLAMCH.F for
> LAPACK, not D1MACH.F as I thought.)
>
> 3. Due to current time constraints, I intend to:
>
>    a. change the calling convention of the PINV I posted last week to
>       provide this capability.  The computations remain essentially
>       unchanged.
>
>    b. over the next few weeks, develop the ?GELSD.F wrappers and
>       replace the internals of PINV to use these.
>
> any thoughts or redirections concerning this?
>
> mike
>
> -------------------------------------------------------
> This sf.net email is sponsored by: ThinkGeek
> Welcome to geek heaven.
> http://thinkgeek.com/sf
> _______________________________________________
> Matlisp-users mailing list
> Matlisp-users@...
> https://lists.sourceforge.net/lists/listinfo/matlisp-users
From: mak%<koerber2@ll...> - 2002-11-24 13:50:29

Tunc,

1. Over the weekend I spent some more time on the PINV question.  In
response to your question about DGELSD, the more I thought about it
the more I liked it.  Since the routine accepts multiple RHS, I
suppose it can be provided an identity matrix.  The result should then
be the PINV matrix.  If I write something like this

    (DEFUN PINV (A &OPTIONAL (B NIL) (...))
      ...
      (LET ((B (IF (NULL B) IDENTITY-FOR-A B)))
        ...
        (CALLING-GELSD ...) ...))

then the user can say (PINV A) to get the pseudoinverse, or they can
say (PINV A B) to solve the multiple-RHS problem.

2. The use of ?GELSD should avoid (my) concerns over differences in
D1MACH values in a given CL implementation and the associated FORTRAN.
(BTW, I found that the machine values are controlled by DLAMCH.F for
LAPACK, not D1MACH.F as I thought.)

3. Due to current time constraints, I intend to:

   a. change the calling convention of the PINV I posted last week to
      provide this capability.  The computations remain essentially
      unchanged.

   b. over the next few weeks, develop the ?GELSD.F wrappers and
      replace the internals of PINV to use these.

any thoughts or redirections concerning this?

mike
From: Raymond Toy <toy@rt...> - 2002-11-22 22:53:50

>>>>> "Michael" == Michael A Koerber <mak@...> writes:

Michael> I haven't studied this enough yet, but it appears that D1MACH.LISP is
Michael> used via F2CL to provide MACH information to the FORTRAN routines in,
Michael> e.g., TOMS.  D1MACH draws its information from the CL implementation
Michael> limits.  I don't know that this is the same as that produced by the
Michael> underlying FORTRAN compiler's treatment of D1MACH.F.  Unless it
Michael> is the same, the numerical accuracy of the routines that rely
Michael> on it could be impacted.  This is just a concern on my part, not

Yes, this is true in general, but if LAPACK needs stuff from d1mach,
it's going to get it from the Fortran d1mach, not the Lisp d1mach.

There is a problem, however: if there is such a thing for LAPACK, it
probably hasn't been tweaked at all, so whatever the defaults are get
used.

Ray
From: Michael A. Koerber <mak@ll...> - 2002-11-22 21:58:11

Tunc,

> Can you say something about the difference between
> your approach (using SVD) and the LAPACK routines called
> DGELS and DGELSD.  Numerical stability?

DGELS will handle only full-rank A.  DGELSD will solve multiple
right-hand sides of Ax = [b1 | b2 | ... ] (each bi a column vector).
An entity called PINV is not returned.  If PINV is not required, then
DGELSD might be a better choice.

> Your question on MACH pinpoints a fundamental issue in Matlisp.
> Almost all (if not all) Matlisp routines use the underlying
> fortran implementation to do the work.  Unless anything has
> changed, the MACH is the one that fortran would use by default.
> If this is not sufficient, or you need explicit access to this
> number, we need to do some extra work (most likely in the
> config file for matlisp).

I haven't studied this enough yet, but it appears that D1MACH.LISP is
used via F2CL to provide MACH information to the FORTRAN routines in,
e.g., TOMS.  D1MACH draws its information from the CL implementation
limits.  I don't know that this is the same as that produced by the
underlying FORTRAN compiler's treatment of D1MACH.F.  Unless it is the
same, the numerical accuracy of the routines that rely on it could be
impacted.  This is just a concern on my part, not something that I have
even looked at yet.  Perhaps you or Ray can comment.

mike
From: Tunc Simsek <simsek@ee...> - 2002-11-22 18:28:10

Hi Michael,

Thanks so much for PINV.  It solves a basic problem, and I think it
will make a useful addition to Matlisp.  Just one more thing to think
about.  Can you say something about the difference between your
approach (using SVD) and the LAPACK routines called DGELS and DGELSD.
Numerical stability?

Your question on MACH pinpoints a fundamental issue in Matlisp.
Almost all (if not all) Matlisp routines use the underlying fortran
implementation to do the work.  Unless anything has changed, the MACH
is the one that fortran would use by default.  If this is not
sufficient, or you need explicit access to this number, we need to do
some extra work (most likely in the config file for matlisp).

Best,
Tunc

"Michael A. Koerber" wrote:
>
> Nicolas,
>
> > I don't know about the other members of this mailing list, but what
> > concerns me: I would like to see discussions about new add-ons on the
> > matlisp-user list.  The traffic is not large here and it would be nice
>
> My initial post, to matlisp-users@..., is in compliance
> with your request.  The only thing more I could have done was to post
> my thought process...not something that most want to see :).
>
> So...discussion is now open...do Matlisp users want a PINV command added?
>
> **************************************************
> Dr Michael A. Koerber        Micro$oft Free Zone
> MIT/Lincoln Laboratory
From: Michael A. Koerber <mak@ll...> - 2002-11-22 16:45:47

Nicolas,

> I don't know about the other members of this mailing list, but what
> concerns me: I would like to see discussions about new add-ons on the
> matlisp-user list.  The traffic is not large here and it would be nice

My initial post, to matlisp-users@..., is in compliance with your
request.  The only thing more I could have done was to post my thought
process...not something that most want to see :).

So...discussion is now open...do Matlisp users want a PINV command added?

**************************************************
Dr Michael A. Koerber        Micro$oft Free Zone
MIT/Lincoln Laboratory
From: Raymond Toy <toy@rt...> - 2002-11-22 15:10:18

>>>>> "Nicolas" == Nicolas Neuss <Nicolas.Neuss@...> writes:

Nicolas> I don't know about the other members of this mailing list, but what
Nicolas> concerns me: I would like to see discussions about new add-ons on the
Nicolas> matlisp-user list.  The traffic is not large here and it would be nice
Nicolas> for users to learn how matlisp is extended.

If you want to know how to extend matlisp, please ask and ye shall
receive.

What do you want to discuss about add-ons?  How to do them?  Whether
they should be added?  Useful additions?

Ray
From: Nicolas Neuss <Nicolas.Neuss@iw...> - 2002-11-22 14:37:23

"Michael A. Koerber" <mak@...> writes:

> Tunc,
>
> > When M<N, should the matrix have rank=M or does it
> > not matter?

Tunc, I don't know about the other members of this mailing list, but
what concerns me: I would like to see discussions about new add-ons on
the matlisp-user list.  The traffic is not large here and it would be
nice for users to learn how matlisp is extended.

Nicolas.
From: Michael A. Koerber <mak@ll...> - 2002-11-22 13:36:39

Tunc,

> When M<N, should the matrix have rank=M or does it
> not matter?

1. Good question.  The answer is: PINV as written will fail or produce
   a numerically unstable result, due to taking the reciprocal of a
   very small singular value.

2. The solution is straightforward: let all singular values < MACHTOL
   equal zero.  ==> rank(A) = r <= min(M,N).  And use just those r
   elements to form PINV.

3. I have an all-day meeting today, but will make these changes in the
   next couple of days.  ...BTW, what is the name of the "*MACH" in
   the matlisp core....

more later

mike
From: <mak@ll...> - 2002-11-21 14:26:54

Jeff, below is your PINV code with some minor adjustments.  The need
for the extra CTRANSPOSE was probably due to MATLISP returning V^H,
not V as Octave does.  Also note that CTRANSPOSE of S is not needed,
since the singular values are real.

Ray, I added some comments to the PINV code, written by Jeff, and I'd
like to suggest that it be added to MATLISP.

tnx

mike

**************************************************
Dr Michael A. Koerber        Micro$oft Free Zone
MIT/Lincoln Laboratory

(defun pinv (a &optional (flag :svd))
  "
 SYNTAX
 ======
 (PINV A [FLAG])

 INPUT
 -----
 A      A Matlisp matrix, M x N.
 FLAG   A key indicating the type of computation to use for the
        pseudoinverse.  Default :SVD.
          :LS   Force least-squares interpretation
          :SVD  Force SVD-based computation (default)

 OUTPUT
 ------
 PINV   Pseudoinverse, a Matlisp matrix.

 DESCRIPTION
 ===========
 Given the equation Ax = b, solve for the value of A^+ which will
 solve x = A^+ b.  A is assumed M x N.

 If A is M x M, and full rank, use (M:M/ A B).

 Use FLAG :LS when M > N and rank(A) = N.  This is the canonical
 least-squares problem.  In this case A^+ = inv(A^H A) A^H.

 Use FLAG :SVD when rank(A) = r < N.  For M < N, this is the
 underconstrained case, for which A^+ is the minimum-norm solution.

 To compute pinv, A^+, in general, let A have an SVD A = U S V^H and
 define

    S^+ = diag(1/s1, 1/s2, ... 1/sr, 0 ... 0)  where r is rank(A).

 Then the pinv, A^+ = V S^+ U^H."
  (cond ((eq flag :ls)
         ;; Force the least-squares interpretation.
         (let ((ah (ctranspose a)))
           (m* (m/ (m* ah a)) ah)))
        ((eq flag :svd)
         ;; Use the SVD method.
         (multiple-value-bind (up sp vp-transpose info)
             (svd a :s)                 ; use economy SVD
           (if (null info)
               (error "Computation of SVD failed."))
           ;; Form the reciprocal of the singular values.
           (dotimes (n (min (number-of-rows sp) (number-of-cols sp)))
             (setf (matrix-ref sp n n) (/ (matrix-ref sp n n))))
           ;; Compute the pinv.
           (m* (ctranspose vp-transpose)
               (m* sp (ctranspose up)))))
        (t (error "Invalid Flag (~A) passed to PINV." flag))))
From: Jefferson Provost <jp@cs...> - 2002-11-19 07:35:19

Is there a pseudoinverse function in matlisp?  It looks like there
isn't, but maybe I missed it.  How hard would it be to write?

J.
From: Raymond Toy <toy@rt...> - 2002-11-06 14:12:30

>>>>> "rif" == rif <rif@...> writes:

rif> Basically, I'm reading many many 28-dimensional vectors from a file,
rif> doing a bunch of operations on each 28-dimensional vector to turn it
rif> into a ~4,500-dimensional vector, then keeping a running sum
rif> (eventually an average when I'm done with the file and I know the
rif> count) of these high-dimensional vectors.  My hope is to reuse only
rif> two high-dimensional locations, one for the "current" vector and one
rif> for the sum, to avoid excessive consing.  If I want to reuse a matlisp
rif> matrix, this means many calls to mref.  If the store were exposed,
rif> then I could implement it directly as calls to aref.

In this case, I would keep the high-dimensional vector as a Lisp
vector.  When I'm done, I'd convert it to a Matlisp vector.

But feel free to use store.  It's very, very unlikely to change.

Ray
From: rif <rif@MIT.EDU> - 2002-11-05 23:40:40

Basically, I'm reading many many 28-dimensional vectors from a file,
doing a bunch of operations on each 28-dimensional vector to turn it
into a ~4,500-dimensional vector, then keeping a running sum
(eventually an average when I'm done with the file and I know the
count) of these high-dimensional vectors.  My hope is to reuse only
two high-dimensional locations, one for the "current" vector and one
for the sum, to avoid excessive consing.  If I want to reuse a matlisp
matrix, this means many calls to mref.  If the store were exposed,
then I could implement it directly as calls to aref.

(If you're interested in the details, I'm basically nonlinearly
mapping the original vector to a higher-dimensional space where the
dimensions are the products of up to three dimensions of the original
vector.  It's quite possible that the cost of this function dwarfs the
cost of the consing, making my whole discussion premature
optimization.)

Cheers,

rif
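As an aside, rif's "~4,500" figure is consistent with the construction he describes: the number of products of up to three coordinates (with repetition) from a 28-dimensional vector is a sum of multiset coefficients, C(d+k-1, k) for degree k. A quick check in Python — this is my reading of the mapping, not code from the thread:

```python
from math import comb

def monomial_count(dims, max_degree):
    """Number of distinct products of up to max_degree coordinates
    (with repetition allowed) of a dims-dimensional vector: the sum of
    multiset coefficients C(dims + k - 1, k) for k = 1 .. max_degree."""
    return sum(comb(dims + k - 1, k) for k in range(1, max_degree + 1))

# 28 choose-with-repetition up to degree 3: 28 + 406 + 4060 = 4494,
# i.e. the "~4,500-dimensional vector" in the message.
print(monomial_count(28, 3))
```

So the feature map takes each 28-vector to 4494 products, matching the rough count in the thread.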
From: Raymond Toy <toy@rt...> - 2002-11-05 23:03:27

>>>>> "rif" == rif <rif@...> writes:

[slow matrix-ref example snipped]

rif> This implies to me that if I need to do a LOT of manipulating the
rif> entries of vectors, I'm better off moving them to an array, doing the

Can you describe what kinds of manipulations you want to do?

rif> manipulation there, then moving them back into matrix form to do
rif> matrix operations.  Is this essentially correct, or am I missing
rif> something?  I guess the alternate approach is to extend matlisp myself

Yes, this is correct.  matrix-ref hides the implementation details.
If we ever do packed storage, matrix-ref would hide that fact too.
The intent, however, is that, if you can, you use the LAPACK routines
to do the manipulations, as you would in Matlab.

rif> to provide "unsafe" accessors that assume more -- I'm willing to tell
rif> it I'm using a real matrix to avoid the generic function dispatch, and
rif> I'm willing to just run an aref directly on the store component
rif> (although the store isn't currently exported by matlisp).

We can make store exported without any problems. :)  Store is very
unlikely to change, since it's used everywhere to get at the
underlying array.

If you're operating on all of the elements of a vector, you may want
to look at matrix-map.  It will run your selected function over each
element of the vector, and uses store and aref to do it, so it's about
as efficient as doing it by hand.

Ray
From: rif <rif@MIT.EDU> - 2002-11-05 22:47:59

My informal experiments seem to indicate that matrix-ref is much
slower than aref.  In particular (not that this is surprising, since
matrix-ref seems to be doing much more):

* (setf r (make-real-matrix 1 100))

#<REAL-MATRIX 1 x 100
   0.0000     0.0000     0.0000     0.0000  ...   0.0000 >

* (time (dotimes (i 100000000)
          (setf (matrix-ref r (random 100)) (random 1.0d0))))

Evaluation took:
  201.49 seconds of real time
  164.01 seconds of user run time
  30.88 seconds of system run time
  [Run times include 4.38 seconds GC run time]
  0 page faults and
  12799208008 bytes consed.
NIL

vs:

* (setf r (make-array 100 :element-type 'double-float))

#(0.0d0 0.0d0 0.0d0 ... 0.0d0)  ; 100 zeros printed

* (time (dotimes (i 100000000)
          (setf (aref r (random 100)) (random 1.0d0))))

Evaluation took:
  37.98 seconds of real time
  32.61 seconds of user run time
  3.75 seconds of system run time
  [Run times include 0.7 seconds GC run time]
  0 page faults and
  1599880616 bytes consed.
NIL

This implies to me that if I need to do a LOT of manipulating the
entries of vectors, I'm better off moving them to an array, doing the
manipulation there, then moving them back into matrix form to do
matrix operations.  Is this essentially correct, or am I missing
something?

I guess the alternate approach is to extend matlisp myself to provide
"unsafe" accessors that assume more -- I'm willing to tell it I'm
using a real matrix to avoid the generic function dispatch, and I'm
willing to just run an aref directly on the store component (although
the store isn't currently exported by matlisp).

Cheers,

rif
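The gap rif measures — generic-function dispatch and boxing in matrix-ref versus a bare aref on the specialized store — has a rough analogue in any dynamic language: routing every element access through a polymorphic accessor costs far more than direct indexing. A Python sketch of the same comparison; the class and method names are illustrative, not Matlisp's:

```python
import time

class Matrix:
    """A toy 1 x n matrix that hides its storage behind accessor
    methods, the way matrix-ref hides Matlisp's underlying store."""
    def __init__(self, n):
        self.store = [0.0] * n
    def ref(self, i):
        return self.store[i]
    def set_ref(self, i, v):
        self.store[i] = v

n, iters = 100, 100_000
m = Matrix(n)

t0 = time.perf_counter()
for i in range(iters):
    m.set_ref(i % n, float(i))      # every write pays method-call overhead
t_accessor = time.perf_counter() - t0

raw = [0.0] * n
t0 = time.perf_counter()
for i in range(iters):
    raw[i % n] = float(i)           # direct indexing on the bare store
t_direct = time.perf_counter() - t0

print(m.store == raw)               # identical result either way
```

On a typical interpreter the accessor loop is noticeably slower, for the same reason rif sees matrix-ref lag aref: the per-element indirection, not the arithmetic, dominates.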
From: Raymond Toy <toy@rt...> - 2002-11-01 18:49:23

>>>>> "rif" == rif <rif@...> writes:

rif> Duh.  I looked at the LAPACK manual, then forgot to report.  Assuming
rif> we're storing the entire matrix (note we could also save another
rif> factor of two by using "packed" storage which exploits the symmetry of
rif> the matrix, at the cost of pain through the rest of Matlisp), the
rif> Lapack prefix is PO.  So in double precision, I Cholesky factor with
rif> DPOTRF, solve with DPOTRS, etc.

The various storage formats could be accommodated, but I have somewhat
decided against supporting them.  Once that happens, then you'll want
to add this packed matrix with that packed matrix and expect the
appropriate packed or unpacked result. :)  For N types of storage
methods, we get N^2 possible combinations, times M different
operations, and that's just too many for me to do by hand. :)

I would be willing, however, to support the packed storage formats,
but only as far as matrix-ref is concerned.

However, if you are willing to do the work and contribute code, we'd
be happy to incorporate it. :)

Ray
From: Raymond Toy <toy@rt...> - 2002-11-01 18:41:01

>>>>> "rif" == rif <rif@...> writes:

rif> 1. The website http://matlisp.sourceforge.net/ mentions Cholesky
rif> factorizations in the very first paragraph.  I agree that technically
rif> the paragraph is correct, because it mentions Cholesky as part of the
rif> "specialized and well documented linear algebra routines", but this is
rif> still somewhat misleading.

If you have suggestions on what to say, please send them here and I'll
try to get them incorporated.

rif> 2. (I'm willing to do this myself if someone gives me permissions)
rif> The installation instructions on the website are wrong, and it would
rif> be five minutes well spent to change them to tell people to read the
rif> INSTALL file in the distro.

Ok.  I'll see if I can change this.  Perhaps Tunc can do this.  I've
never touched the matlisp home page, so I don't know how.

rif> 3. What is the license on this software?  Looking at the source, it
rif> appears to be (C) Regents of the University of California, with what
rif> reads sort of like a BSD license, but not quite.

Isn't this the standard BSD license?  Tunc wrote that.

rif> 4. Is it a bug that (at least under CMUCL) mref is not setfable?  i.e.:

rif> * (setf (matrix-ref a 0 1) 3)
rif> 3.0d0
rif> * (setf (mref a 0 1) 4)
rif> Warning: This function is undefined:
rif>     (SETF MREF)

Since this is defined in compat.lisp, I suspect it was originally
called mref, but was later changed to matrix-ref.  I would say mref is
deprecated; use matrix-ref.

Ray
From: Raymond Toy <toy@rt...> - 2002-11-01 18:40:46

>>>>> "rif" == rif <rif@...> writes:

rif> Yeah, I'm on x86 right now.  I agree that it only buys me memory, but
rif> it buys me a lot of memory, which in turn leads to the ability to deal
rif> with systems that are O(sqrt(n)) larger.  On the other hand, I do
rif> agree that it's not worth it if it's a lot of tedious work.

Somehow, I think by making the systems O(sqrt(n)) larger, you are
probably getting much worse results, because the condition number of
the matrix is much worse with single precision.  But I'm not a matrix
expert.

rif> In a related question, how do I save a matrix of doubles to a file
rif> (under CMUCL)?  For arrays of floats, I'm using something like

rif>     (write-byte (kernel:single-float-bits (aref arr i j)) str)

How do you use single-float arrays with Matlisp?  Do you convert them
back and forth as needed?

rif> What's the equivalent for matlisp matrices?  I want to read and store
rif> them in files.

Matlisp stores matrices in memory as standard Lisp
(simple-array double-float (*)).  There aren't any routines to read or
write matrices to files other than standard Lisp.  Perhaps there
should be?

You could just save a core file, which should save all of your arrays,
which you can reload later.

Ray
From: rif <rif@MIT.EDU> - 2002-11-01 18:33:34

Duh.  I looked at the LAPACK manual, then forgot to report.  Assuming
we're storing the entire matrix (note we could also save another
factor of two by using "packed" storage, which exploits the symmetry
of the matrix, at the cost of pain through the rest of Matlisp), the
Lapack prefix is PO.  So in double precision, I Cholesky factor with
DPOTRF, solve with DPOTRS, etc.

Cheers,

rif
From: rif <rif@MIT.EDU> - 2002-11-01 18:26:42

> >>>>> "rif" == rif <rif@...> writes:
>
> rif> I assume it would be a lot of work to expose the ability to do work in
> rif> single-precision floating point rather than double?
>
> I suppose with some hacking of the macros we could make it work,
> assuming that single-precision routines always want single-precision
> arrays and numbers.
>
> This would be a bit tedious, and I'm reluctant to do this, however.  I
> decided long ago that double precision almost always allowed me to
> ignore round-off issues for the things I do.  The extra time and
> memory were not an issue.  Are you on an x86?  Then single precision
> only buys you memory.
>
> Ray

Yeah, I'm on x86 right now.  I agree that it only buys me memory, but
it buys me a lot of memory, which in turn leads to the ability to deal
with systems that are O(sqrt(n)) larger.  On the other hand, I do
agree that it's not worth it if it's a lot of tedious work.

In a related question, how do I save a matrix of doubles to a file
(under CMUCL)?  For arrays of floats, I'm using something like

    (write-byte (kernel:single-float-bits (aref arr i j)) str)

What's the equivalent for matlisp matrices?  I want to read and store
them in files.

Cheers,

rif
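rif's kernel:single-float-bits trick — writing each float's raw IEEE-754 bit pattern to a binary stream — corresponds to plain binary serialization in other languages. Here is a standard-library-only Python sketch of the same idea for doubles; the file name and helper names are mine, and the little-endian format choice is an assumption the Lisp code would have to match:

```python
import os
import struct
import tempfile

def write_doubles(path, values):
    """Write a flat sequence of doubles as raw little-endian IEEE-754
    bytes, analogous to write-byte'ing the float bits per element."""
    with open(path, "wb") as f:
        f.write(struct.pack(f"<{len(values)}d", *values))

def read_doubles(path):
    """Read the file back into a list of Python floats (8 bytes each)."""
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack(f"<{len(data) // 8}d", data))

vals = [0.0, 1.5, -2.25, 3.141592653589793]
path = os.path.join(tempfile.gettempdir(), "matlisp_demo.bin")
write_doubles(path, vals)
print(read_doubles(path) == vals)   # bit-exact round trip
```

Because the bytes are the exact bit pattern, the round trip is lossless, which is the property that makes this preferable to printing and re-reading decimal text.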
From: Raymond Toy <toy@rt...> - 2002-11-01 18:20:41

>>>>> "rif" == rif <rif@...> writes:

rif> I assume it would be a lot of work to expose the ability to do work in
rif> single-precision floating point rather than double?

I suppose with some hacking of the macros we could make it work,
assuming that single-precision routines always want single-precision
arrays and numbers.

This would be a bit tedious, and I'm reluctant to do this, however.  I
decided long ago that double precision almost always allowed me to
ignore round-off issues for the things I do.  The extra time and
memory were not an issue.  Are you on an x86?  Then single precision
only buys you memory.

Ray
From: rif <rif@MIT.EDU> - 2002-11-01 18:14:04

1. The website http://matlisp.sourceforge.net/ mentions Cholesky
factorizations in the very first paragraph.  I agree that technically
the paragraph is correct, because it mentions Cholesky as part of the
"specialized and well documented linear algebra routines", but this is
still somewhat misleading.

2. (I'm willing to do this myself if someone gives me permissions)
The installation instructions on the website are wrong, and it would
be five minutes well spent to change them to tell people to read the
INSTALL file in the distro.

3. What is the license on this software?  Looking at the source, it
appears to be (C) Regents of the University of California, with what
reads sort of like a BSD license, but not quite.

4. Is it a bug that (at least under CMUCL) mref is not setfable?  i.e.:

* (setf (matrix-ref a 0 1) 3)

3.0d0
* (setf (mref a 0 1) 4)
Warning: This function is undefined:
    (SETF MREF)

Error in KERNEL:%COERCE-TO-FUNCTION:  the function (SETF MREF) is undefined.

Restarts:
  0: [ABORT] Return to Top-Level.

This makes me just want to ignore mref entirely, whereas otherwise I'd
always use it in preference to matrix-ref because it's shorter.  Or am
I just missing something?

Cheers,

rif
From: rif <rif@MIT.EDU> - 2002-11-01 18:13:26

Sorry, I should've responded sooner; I've been busy with some other
things.  I was actually just looking at the LAPACK documentation when
your mail came in.  Nicholas Neuss' suggestion allows me to work
reasonably.  Exposing the Cholesky operations directly would save
about a factor of 2 in computational cost and, I believe, be more
numerically stable.  It doesn't seem critically important, though.

I assume it would be a lot of work to expose the ability to do work in
single-precision floating point rather than double?

rif