From: <ma...@ll...> - 2002-11-21 14:26:54
Jeff,
below is your PINV code with some minor adjustments. The need for
the extra CTRANSPOSE was probably due to MATLISP returning V^H, not
V as Octave does. Also note that CTRANSPOSE of S is not needed
since the singular values are real.
Ray,
I added some comments to the PINV code, written by Jeff, and I'd like to
suggest that it be added to MATLISP.
tnx
mike
**************************************************
Dr Michael A. Koerber Micro$oft Free Zone
MIT/Lincoln Laboratory
(defun pinv (a &optional (flag :svd))
  "
 SYNTAX
 ======
 (PINV A :FLAG)

 INPUT
 -----
 A     A Matlisp matrix, M x N
 FLAG  A key indicating the type of computation to use for
       the pseudo-inverse.  Default :SVD
       :LS   Force least squares interpretation
       :SVD  Force SVD based computation (default)

 OUTPUT
 ------
 PINV  Pseudo-inverse, a Matlisp matrix.

 DESCRIPTION
 ===========
 Given the equation Ax = b, solve for the value of A^+ which will
 solve x = A^+ b.  A is assumed M x N.  If A is M x M and full rank,
 use (M:M/ A B) instead.

 Use FLAG :LS when M > N and rank(A) = N.  This is the canonical least
 squares problem.  In this case A^+ = inv(A^H A) A^H.

 Use FLAG :SVD when rank(A) = r < N.  For M < N, this is the
 underconstrained case for which A^+ is the minimum norm solution.

 To compute the pseudo-inverse A^+ in general, let A have the SVD

    A = U S V^H

 and define S^+ = diag(1/s1, 1/s2, ..., 1/sr, 0, ..., 0), where r = rank(A).
 Then A^+ = V S^+ U^H."
  (cond
    ((eq flag :ls)
     ;; Force the least squares interpretation: A^+ = inv(A^H A) A^H.
     (let ((ah (ctranspose a)))
       (m* (m/ (m* ah a)) ah)))
    ((eq flag :svd)
     ;; Use the SVD method.
     (multiple-value-bind (up sp vptranspose info)
         (svd a :s)                     ; economy-size SVD
       (when (null info)
         (error "Computation of SVD failed."))
       ;; Form the reciprocals of the singular values.
       (dotimes (n (min (number-of-rows sp) (number-of-cols sp)))
         (setf (matrix-ref sp n n) (/ (matrix-ref sp n n))))
       ;; Compute the pseudo-inverse: A^+ = V S^+ U^H.
       (m* (ctranspose vptranspose) (m* sp (ctranspose up)))))
    (t (error "Invalid FLAG (~A) passed to PINV." flag))))
From: Jefferson P. <jp...@cs...> - 2002-11-19 07:35:19
Is there a pseudoinverse function in matlisp? It looks like there
isn't, but maybe I missed it. How hard would it be to write?

J.
From: Raymond T. <to...@rt...> - 2002-11-06 14:12:30
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Basically, I'm reading many many 28 dimensional vectors from a file,
rif> doing a bunch of operations on each 28 dimensional vector to turn it into a
rif> ~4,500 dimensional vector, then keeping a running sum (eventually an
rif> average when I'm done with the file and I know the count) of these
rif> high-dimensional vectors. My hope is to reuse only two
rif> high-dimensional locations, one for the "current" vector and one for
rif> the sum, to avoid excessive consing. If I want to reuse a matlisp
rif> matrix, this means many calls to mref. If the store were exposed,
rif> then I could implement it directly as calls to aref.
In this case, I would keep the high-dimensional vector as a Lisp
vector. When I'm done, I'd convert it to a Matlisp vector.
But feel free to use store. It's very, very unlikely to change.
Ray
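A sketch of that pattern, as an aside: AVERAGE-VECTORS below is a hypothetical helper, not Matlisp API; only MAKE-REAL-MATRIX and MATRIX-REF are the Matlisp operators used elsewhere in this thread.

(defun average-vectors (vectors dim)
  ;; Accumulate a list of (simple-array double-float (*)) vectors of
  ;; length DIM into one reused Lisp array via plain AREF, then
  ;; convert the average to a Matlisp column vector only at the end.
  (let ((sum (make-array dim :element-type 'double-float
                             :initial-element 0d0))
        (count 0))
    (dolist (v vectors)
      (dotimes (i dim)
        (incf (aref sum i) (aref v i)))
      (incf count))
    (let ((avg (make-real-matrix dim 1)))
      (dotimes (i dim)
        (setf (matrix-ref avg i 0) (/ (aref sum i) count)))
      avg)))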
From: rif <ri...@MI...> - 2002-11-05 23:40:40
Basically, I'm reading many many 28 dimensional vectors from a file,
doing a bunch of operations on each 28 dimensional vector to turn it
into a ~4,500 dimensional vector, then keeping a running sum
(eventually an average when I'm done with the file and I know the
count) of these high-dimensional vectors. My hope is to reuse only two
high-dimensional locations, one for the "current" vector and one for
the sum, to avoid excessive consing. If I want to reuse a matlisp
matrix, this means many calls to mref. If the store were exposed, then
I could implement it directly as calls to aref.

(If you're interested in the details, I'm basically nonlinearly
mapping the original vector to a higher dimensional space where the
dimensions are the products of up to three dimensions of the original
vector. It's quite possible that the cost of this function dwarfs the
cost of the consing, making my whole discussion premature
optimization.)

Cheers,

rif
From: Raymond T. <to...@rt...> - 2002-11-05 23:03:27
>>>>> "rif" == rif <ri...@MI...> writes:
[slow matrix-ref example snipped]
rif> This implies to me that if I need to do a LOT of manipulating the
rif> entries of vectors, I'm better off moving them to an array, doing the
Can you describe what kinds of manipulations you want to do?
rif> manipulation there, then moving them back into matrix form to do
rif> matrix operations. Is this essentially correct, or am I missing
rif> something? I guess the alternate approach is to extend matlisp myself
Yes, this is correct. matrix-ref hides the implementation details.
If we ever do packed storage, matrix-ref would hide that fact too.
The intent, however, is that, if you can, use the LAPACK routines to
do the manipulations, as you would in Matlab.
rif> to provide "unsafe" accessors that assume more --- I'm willing to tell
rif> it I'm using a real matrix to avoid the generic function dispatch, and
rif> I'm willing to just run an aref directly on the store component
rif> (although the store isn't currently exported by matlisp).
We can make store exported without any problems. :-) Store is very
unlikely to change since it's used everywhere to get at the underlying
array.
If you're operating on all of the elements of a vector, you may want
to look at matrix-map. It will run your selected function over each
element of the vector and uses store and aref to do it, so it's about
as efficient as doing it by hand.
Ray
From: rif <ri...@MI...> - 2002-11-05 22:47:59
|
My informal experiments seem to indicate that matrix-ref is much
slower than aref. In particular (not that this is surprising, since
matrix-ref seems to be doing much more):
* (setf r (make-real-matrix 1 100))
#<REAL-MATRIX 1 x 100
0.0000 0.0000 0.0000 0.0000 ... 0.0000 >
* (time (dotimes (i 100000000) (setf (matrix-ref r (random 100)) (random 1.0d0))))
Evaluation took:
201.49 seconds of real time
164.01 seconds of user run time
30.88 seconds of system run time
[Run times include 4.38 seconds GC run time]
0 page faults and
12799208008 bytes consed.
NIL
vs:
* (setf r (make-array 100 :element-type 'double-float))
#(0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0
0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0 0.0d0)
* (time (dotimes (i 100000000) (setf (aref r (random 100)) (random 1.0d0))))
Evaluation took:
37.98 seconds of real time
32.61 seconds of user run time
3.75 seconds of system run time
[Run times include 0.7 seconds GC run time]
0 page faults and
1599880616 bytes consed.
NIL
This implies to me that if I need to do a LOT of manipulating the
entries of vectors, I'm better off moving them to an array, doing the
manipulation there, then moving them back into matrix form to do
matrix operations. Is this essentially correct, or am I missing
something? I guess the alternate approach is to extend matlisp myself
to provide "unsafe" accessors that assume more --- I'm willing to tell
it I'm using a real matrix to avoid the generic function dispatch, and
I'm willing to just run an aref directly on the store component
(although the store isn't currently exported by matlisp).
Cheers,
rif
From: Raymond T. <to...@rt...> - 2002-11-01 18:49:23
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Duh. I looked at the LAPACK manual, then forgot to report. Assuming
rif> we're storing the entire matrix (note we could also save another
rif> factor of two by using "packed" storage which exploits the symmetry of
rif> the matrix, at the cost of pain through the rest of Matlisp), the
rif> Lapack prefix is PO. So in double precision, I Cholesky factor with
rif> DPOTRF, solve with DPOTRS, etc.
The various storage formats could be accommodated, but I have somewhat
decided against supporting them. Once that happens, then you'll want
to add this packed matrix with that packed matrix and expect the
appropriate packed or unpacked result. :-)
For N types of storage methods, we get N^2 possible combinations,
times M different operations, and that's just too many for me to do by
hand. :-)
I would be willing however to support the packed storage formats, but
only as far as matrix-ref is concerned.
However, if you are willing to do the work and contribute code, we'd
be happy to incorporate it. :-)
Ray
From: Raymond T. <to...@rt...> - 2002-11-01 18:41:01
>>>>> "rif" == rif <ri...@MI...> writes:
rif> 1. The website http://matlisp.sourceforge.net/ mentions Cholesky
rif> factorizations in the very first paragraph. I agree that technically
rif> the paragraph is correct, because it mentions Cholesky as part of the
rif> "specialized and well documented linear algebra routines", but this is
rif> still somewhat misleading.
If you have suggestions on what to say, please send them here and I'll
try to get them incorporated.
rif> 2. (I'm willing to do this myself if someone gives me permissions)
rif> The installation instructions on the website are wrong, and it would
rif> be five minutes well spent to change them to tell people to read the
rif> INSTALL file in the distro.
Ok. I'll see if I can change this. Perhaps Tunc can do this. I've
never touched the matlisp home page so I don't know how.
rif> 3. What is the license on this software? Looking at the source, it
rif> appears to be (C) Regents of the University of California, with what
rif> reads sort of like a BSD license, but not quite.
Isn't this the standard BSD license? Tunc wrote that.
rif> 4. Is it a bug that (at least under CMUCL) mref is not setf-able? i.e.:
rif> * (setf (matrix-ref a 0 1) 3)
rif> 3.0d0
rif> * (setf (mref a 0 1) 4)
rif> Warning: This function is undefined:
rif> (SETF MREF)
Since this is defined in compat.lisp, I suspect it was originally
called mref, but was later changed to matrix-ref. I would say mref is
deprecated; use matrix-ref.
Ray
From: Raymond T. <to...@rt...> - 2002-11-01 18:40:46
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Yeah, I'm on x-86 right now. I agree that it only buys me memory, but
rif> it buys me a lot of memory, which in turn leads to the ability to deal
rif> with systems that are O(sqrt(n)) larger. On the other hand, I do
rif> agree that it's not worth it if it's a lot of tedious work.
Somehow, I think by making the systems O(sqrt(n)) larger, you are
probably getting much worse results because the condition number
of the matrix is much worse with single-precision.
But I'm not a matrix expert.
rif> In a related question, how do I save a matrix of doubles to a file
rif> (under CMUCL)? For arrays of floats, I'm using something like
rif> (write-byte (kernel:single-float-bits (aref arr i j)) str)
How do you use single-float arrays with Matlisp? Do you convert them
back and forth as needed?
rif> What's the equivalent for matlisp matrices? I want to read and store
rif> them in files.
Matlisp stores matrices in memory as standard Lisp (simple-array
double-float (*)). There aren't any routines to read or write
matrices to files other than standard Lisp. Perhaps there should be?
You could just save a core file which should save all of your arrays
which you can reload later.
Ray
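For instance, a plain-text round trip could look like the following sketch; SAVE-MATRIX, LOAD-MATRIX, and the on-disk format are hypothetical, built only from accessors appearing elsewhere in this thread (NUMBER-OF-ROWS, NUMBER-OF-COLS, MATRIX-REF, MAKE-REAL-MATRIX).

(defun save-matrix (m filename)
  ;; One header line with the dimensions, then one element per line
  ;; in row-major order, printed readably with ~S.
  (with-open-file (s filename :direction :output :if-exists :supersede)
    (format s "~D ~D~%" (number-of-rows m) (number-of-cols m))
    (dotimes (i (number-of-rows m))
      (dotimes (j (number-of-cols m))
        (format s "~S~%" (matrix-ref m i j))))))

(defun load-matrix (filename)
  ;; Inverse of SAVE-MATRIX.
  (with-open-file (s filename)
    (let* ((rows (read s))
           (cols (read s))
           (m (make-real-matrix rows cols)))
      (dotimes (i rows)
        (dotimes (j cols)
          (setf (matrix-ref m i j) (read s))))
      m)))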
From: rif <ri...@MI...> - 2002-11-01 18:33:34
Duh. I looked at the LAPACK manual, then forgot to report.

Assuming we're storing the entire matrix (note we could also save
another factor of two by using "packed" storage which exploits the
symmetry of the matrix, at the cost of pain through the rest of
Matlisp), the Lapack prefix is PO. So in double precision, I Cholesky
factor with DPOTRF, solve with DPOTRS, etc.

Cheers,

rif
From: rif <ri...@MI...> - 2002-11-01 18:26:42
> >>>>> "rif" == rif <ri...@MI...> writes: > > rif> I assume it would be a lot of work to expose the ability to do work in > rif> single-precision floating point rather than double? > > I suppose with some hacking of the macros we could make it work, > assuming that single-precision routines always want single-precision > arrays and numbers. > > This would be a bit tedious, and I'm reluctant to do this, however. I > decided long ago that double-precision almost always allowed me to > ignore round-off issues for the things I do. The extra time and > memory were not an issue. Are you on an x86? Then single-precision > only buys you memory. > > Ray Yeah, I'm on x-86 right now. I agree that it only buys me memory, but it buys me a lot of memory, which in turn leads to the ability to deal with systems that are O(sqrt(n)) larger. On the other hand, I do agree that it's not worth it if it's a lot of tedious work. In a related question, how do I save a matrix of doubles to a file (under CMUCL)? For arrays of floats, I'm using something like (write-byte (kernel:single-float-bits (aref arr i j)) str) What's the equivalent for matlisp matrices? I want to read and store them in files. Cheers, rif |
From: Raymond T. <to...@rt...> - 2002-11-01 18:20:41
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I assume it would be a lot of work to expose the ability to do work in
rif> single-precision floating point rather than double?
I suppose with some hacking of the macros we could make it work,
assuming that single-precision routines always want single-precision
arrays and numbers.
This would be a bit tedious, and I'm reluctant to do this, however. I
decided long ago that double-precision almost always allowed me to
ignore round-off issues for the things I do. The extra time and
memory were not an issue. Are you on an x86? Then single-precision
only buys you memory.
Ray
From: rif <ri...@MI...> - 2002-11-01 18:14:04
1. The website http://matlisp.sourceforge.net/ mentions Cholesky
factorizations in the very first paragraph. I agree that technically
the paragraph is correct, because it mentions Cholesky as part of the
"specialized and well documented linear algebra routines", but this is
still somewhat misleading.

2. (I'm willing to do this myself if someone gives me permissions.)
The installation instructions on the website are wrong, and it would
be five minutes well spent to change them to tell people to read the
INSTALL file in the distro.

3. What is the license on this software? Looking at the source, it
appears to be (C) Regents of the University of California, with what
reads sort of like a BSD license, but not quite.

4. Is it a bug that (at least under CMUCL) mref is not setf-able? i.e.:

* (setf (matrix-ref a 0 1) 3)
3.0d0
* (setf (mref a 0 1) 4)
Warning: This function is undefined:
(SETF MREF)
Error in KERNEL:%COERCE-TO-FUNCTION: the function (SETF MREF) is undefined.
Restarts:
0: [ABORT] Return to Top-Level.

This makes me just want to ignore mref entirely, whereas otherwise I'd
always use it in preference to matrix-ref because it's shorter. Or am
I just missing something?

Cheers,

rif
From: rif <ri...@MI...> - 2002-11-01 18:13:26
Sorry, I should've responded sooner; I've been busy with some other
things. I was actually just looking at the LAPACK documentation when
your mail came in.

Nicolas Neuss' suggestion allows me to work reasonably. Exposing the
Cholesky operations directly would save about a factor of 2 in
computational costs and, I believe, be more numerically stable. It
doesn't seem critically important, though.

I assume it would be a lot of work to expose the ability to do work in
single-precision floating point rather than double?

rif
From: Raymond T. <to...@rt...> - 2002-11-01 17:50:36
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
Did the messages from Michael Koerber and Nicolas Neuss give you what
you need? If not, please let us know, and we'll see what we can do for
you.
Ray
From: Nicolas N. <Nic...@iw...> - 2002-10-30 09:47:09
"Michael A. Koerber" <ma...@ll...> writes:
> Rif,
>
> I should read and write more slowly. WRT your first posting,
> the solution for multiple RHS would be to use (GETRS ...) or (GETRS! ...)
>
> mike
Yes, something like
(multiple-value-bind (lu ipiv info)
    (getrf! (copy mat))
  (declare (ignore info))
  (getrs! lu ipiv rhs))
should work in recent CVS versions.
Nicolas.
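A sketch of the factor-once, solve-many pattern this enables; MAKE-SOLVER is a hypothetical wrapper, not Matlisp API, and it reuses only the GETRF!/GETRS! calls from the snippet above.

(defun make-solver (mat)
  ;; Factor MAT once (the O(n^3) step), then return a closure that
  ;; solves MAT x = b for each new right-hand side using the stored
  ;; factors, at O(n^2) per solve.
  (multiple-value-bind (lu ipiv info)
      (getrf! (copy mat))
    (declare (ignore info))
    (lambda (rhs)
      ;; The ! convention suggests GETRS! operates destructively on
      ;; RHS, so pass a copy if the original b must be kept.
      (getrs! lu ipiv rhs))))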
From: Raymond T. <to...@rt...> - 2002-10-29 17:05:03
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Yes, R\t means solve this matrix, but Matlab/Octave are able to take
rif> advantage of the fact that R is upper or lower triangular, so solving
rif> each triangular system takes O(n^2) rather than O(n^3) operations (you
rif> pay the O(n^3) once when you factor the matrix).
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
I also wanted to say that if you know the name of the BLAS or LAPACK
routines that perform Cholesky decomposition and solution, please
point them out and I can create the necessary FFI for them.
Ray
From: Michael A. K. <ma...@ll...> - 2002-10-29 17:03:48
Rif,

I should read and write more slowly. WRT your first posting, the
solution for multiple RHS would be to use (GETRS ...) or (GETRS! ...).

mike
From: Raymond T. <to...@rt...> - 2002-10-29 17:00:57
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
It means I was either too lazy to look up the necessary routines or
too lazy to add the necessary smarts, because I didn't need them.
Ray
From: Michael A. K. <ma...@ll...> - 2002-10-29 17:00:05
> If R\t means R^(-1)*t, then (m/ t r) will do that. However, I think
> that's probably rather expensive because it probably will use Gaussian
> elimination to solve this set of equations. Some other special
> routine from LAPACK should probably be used.

(m/ A B) uses the LU decomposition routines. Note also that the recent
addition (GETRS! ...) uses LU.

mike
From: rif <ri...@MI...> - 2002-10-29 16:55:50
Yes, R\t means solve this matrix, but Matlab/Octave are able to take
advantage of the fact that R is upper or lower triangular, so solving
each triangular system takes O(n^2) rather than O(n^3) operations (you
pay the O(n^3) once when you factor the matrix).

I guess I'm a little confused. If we don't already have operations
like this, what's the point of exposing LU from Lapack? AFAIK, the
reason to do an LU decomposition is so that I can then use it to solve
systems in time O(n^2)...

rif

> If R\t means R^(-1)*t, then (m/ t r) will do that. However, I think
> that's probably rather expensive because it probably will use Gaussian
> elimination to solve this set of equations. Some other special
> routine from LAPACK should probably be used.
>
> Will have to dig through LAPACK....
>
> Ray
From: Raymond T. <to...@rt...> - 2002-10-29 16:47:26
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Nearly all the matrices I work with are positive semidefinite. Does
rif> Matlisp have a Cholesky factorization routine?
Yes and no. LAPACK has one, I think. Matlisp doesn't because no one
has written the FFI for it.
rif> Also, what is the best way to solve a bunch of problems of the form Ax
rif> = b, where A is positive semidefinite and the b's are not known ahead
rif> of time? In Octave, I would say:
rif> R = chol(A);
rif> and, once I obtained a b, I would solve via:
rif> t = R'\b;
rif> x = R\t;
rif> What is the Matlisp equivalent to this approach?
If R\t means R^(-1)*t, then (m/ t r) will do that. However, I think
that's probably rather expensive because it probably will use Gaussian
elimination to solve this set of equations. Some other special
routine from LAPACK should probably be used.
Will have to dig through LAPACK....
Ray
From: rif <ri...@MI...> - 2002-10-29 16:14:06
Nearly all the matrices I work with are positive semidefinite. Does
Matlisp have a Cholesky factorization routine?

Also, what is the best way to solve a bunch of problems of the form
Ax = b, where A is positive semidefinite and the b's are not known
ahead of time? In Octave, I would say:

R = chol(A);

and, once I obtained a b, I would solve via:

t = R'\b;
x = R\t;

What is the Matlisp equivalent to this approach?

Cheers,

rif
From: rif <ri...@MI...> - 2002-10-11 04:57:33
I tried saving a corefile with (save-matlisp). The corefile seems to
save fine, but if I start up a new cmulisp with that corefile, there's
no help function. Everything else seems to be there (or at least a
bunch of other functions that I tried), but help is undefined.

It may or may not be relevant that I am using the debian cmucl, which
has a built in "help" function, and so in my .cmucl-init, I load the
matlisp/start.lisp file, then I

(unintern 'help)
(use-package :matlisp)

Any ideas how to get the help back in my core file?

Cheers,

rif
From: Raymond T. <to...@rt...> - 2002-09-30 14:40:17
>>>>> "Joseph" == Joseph Dale <jd...@uc...> writes:
Joseph> Michael A. Koerber wrote:
>>> I've just installed matlisp from the latest CVS at
>>> sourceforge. From looking at the online help and reading some of
>>> the source, it looks to me like none of the wrappers around matrix
>>> inversion functions (for example, DGETRI and ZGETRI in LAPACK) have
>>> been implemented. Is there any particular reason for this omission,
>>> or is it just lack of time? Or am I totally missing something?
>> Joe,
>> I don't see such a wrapper either. However, matrix inversion is
>> currently performed with ZGESV by calling the MATLISP function M/ with
>> a single argument. So if you just need the inverse of A use (M/ A).
>> Of course, if it's ?GETRI specifically, a wrapper will be needed.
>>
Joseph> Thanks Michael and Tunc,
Joseph> M/ should be sufficient for me. However, there appears to be a
Joseph> bug/typo in the documentation:
Thanks.
I'll fix this.
Ray