|
From: rif <ri...@MI...> - 2002-10-29 16:55:50
|
Yes, R\t means solve this matrix, but Matlab/Octave are able to take
advantage of the fact that R is upper or lower triangular, so solving
each triangular system takes O(n^2) rather than O(n^3) operations (you
pay the O(n^3) once when you factor the matrix).

I guess I'm a little confused. If we don't already have operations
like this, what's the point of exposing LU from Lapack? AFAIK, the
reason to do an LU decomposition is so that I can then use it to solve
systems in time O(n^2)...

rif

> If R\t means R^(-1)*t, then (m/ t r) will do that. However, I think
> that's probably rather expensive because it probably will use Gaussian
> elimination to solve this set of equations. Some other special
> routine from LAPACK should probably be used.
>
> Will have to dig through LAPACK....
>
> Ray
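The cost argument above can be sketched outside of Matlisp; here is a minimal pure-Python illustration (not Matlisp code) of why a triangular solve is only O(n^2): back-substitution visits each entry of R at most once.

```python
# Illustrative sketch (not Matlisp): solving an upper-triangular system
# R x = t by back-substitution costs O(n^2), versus the O(n^3) paid
# once to produce the triangular factor in the first place.

def back_substitute(R, t):
    """Solve R x = t for upper-triangular R (list of rows), O(n^2)."""
    n = len(t)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = t[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / R[i][i]
    return x

R = [[2.0, 1.0, 0.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
t = [3.0, 7.0, 8.0]
x = back_substitute(R, t)
```

The double loop does roughly n^2/2 multiply-adds total, which is the saving rif is pointing at once the factorization is in hand.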
|
From: Michael A. K. <ma...@ll...> - 2002-10-29 17:03:48
|
Rif,

I should read and write more slowly. WRT your first posting, the
solution for multiple RHS would be to use (GETRS ...) or (GETRS! ...)

mike
|
From: Nicolas N. <Nic...@iw...> - 2002-10-30 09:47:09
|
"Michael A. Koerber" <ma...@ll...> writes:
> Rif,
>
> I should read and write more slowly. WRT your first posting,
> the solution for multiple RHS would be to use (GETRS ...) or (GETRS! ...)
>
> mike
Yes, something like

(multiple-value-bind (lu ipiv info)
    (getrf! (copy mat))
  (declare (ignore info))
  (getrs! lu ipiv rhs))

should work in recent CVS versions.
Nicolas.
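The getrf!/getrs! split above (factor once, then solve cheaply against any right-hand side) can be illustrated in pure Python; this is a sketch of the same idea with partial pivoting, analogous to LAPACK's ipiv, not the actual LAPACK interface.

```python
# Illustrative sketch of the GETRF/GETRS division of labor: pay O(n^3)
# once in lu_factor, then each lu_solve is O(n^2) substitution.

def lu_factor(A):
    """LU with partial pivoting; returns (LU, piv), in the spirit of GETRF."""
    n = len(A)
    LU = [row[:] for row in A]          # work on a copy, like (copy mat)
    piv = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        LU[k], LU[p] = LU[p], LU[k]     # row swap for stability
        piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]        # store L's multiplier in place
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv

def lu_solve(LU, piv, b):
    """Solve A x = b from the stored factorization, like GETRS; O(n^2)."""
    n = len(b)
    y = [b[piv[i]] for i in range(n)]   # apply the row permutation
    for i in range(n):                  # forward substitution with L
        y[i] -= sum(LU[i][j] * y[j] for j in range(i))
    for i in range(n - 1, -1, -1):      # back substitution with U
        y[i] = (y[i] - sum(LU[i][j] * y[j] for j in range(i + 1, n))) / LU[i][i]
    return y
```

For multiple right-hand sides, lu_solve is simply called repeatedly against the same (LU, piv), which is exactly what makes exposing the factorization worthwhile.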
|
|
From: rif <ri...@MI...> - 2002-11-01 18:13:26
|
Sorry, I should've responded sooner; I've been busy with some other
things. I was actually just looking at the LAPACK documentation when
your mail came in. Nicolas Neuss's suggestion allows me to work
reasonably. Exposing the Cholesky operations directly would save about
a factor of 2 in computational cost and, I believe, be more
numerically stable. It doesn't seem critically important, though.

I assume it would be a lot of work to expose the ability to do work in
single-precision floating point rather than double?

rif
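The factor-of-2 saving mentioned above comes from Cholesky exploiting symmetry: for a symmetric positive-definite A it computes one triangular factor L with A = L L^T in roughly n^3/3 flops, versus about 2n^3/3 for LU. A minimal pure-Python sketch of the factorization (an illustration of the algorithm, not the Matlisp or LAPACK interface):

```python
import math

# Illustrative sketch: Cholesky factorization A = L L^T for a symmetric
# positive-definite A. Only the lower triangle is computed and only the
# lower triangle of A is read, hence roughly half the work of LU.

def cholesky(A):
    """Return lower-triangular L with A = L L^T; assumes A is SPD."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L
```

Solving then proceeds by one forward and one backward triangular substitution against L and L^T, just as with the LU factors.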
|
From: Raymond T. <to...@rt...> - 2002-11-01 18:20:41
|
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I assume it would be a lot of work to expose the ability to do work in
rif> single-precision floating point rather than double?
I suppose with some hacking of the macros we could make it work,
assuming that single-precision routines always want single-precision
arrays and numbers.
This would be a bit tedious, and I'm reluctant to do this, however. I
decided long ago that double-precision almost always allowed me to
ignore round-off issues for the things I do. The extra time and
memory were not an issue. Are you on an x86? Then single-precision
only buys you memory.
Ray
|
|
From: rif <ri...@MI...> - 2002-11-01 18:26:42
|
> >>>>> "rif" == rif <ri...@MI...> writes:
>
> rif> I assume it would be a lot of work to expose the ability to do work in
> rif> single-precision floating point rather than double?
>
> I suppose with some hacking of the macros we could make it work,
> assuming that single-precision routines always want single-precision
> arrays and numbers.
>
> This would be a bit tedious, and I'm reluctant to do this, however. I
> decided long ago that double-precision almost always allowed me to
> ignore round-off issues for the things I do. The extra time and
> memory were not an issue. Are you on an x86? Then single-precision
> only buys you memory.
>
> Ray

Yeah, I'm on x86 right now. I agree that it only buys me memory, but
it buys me a lot of memory, which in turn leads to the ability to deal
with systems that are O(sqrt(n)) larger. On the other hand, I do agree
that it's not worth it if it's a lot of tedious work.

In a related question, how do I save a matrix of doubles to a file
(under CMUCL)? For arrays of floats, I'm using something like

(write-byte (kernel:single-float-bits (aref arr i j)) str)

What's the equivalent for matlisp matrices? I want to read and store
them in files.

Cheers,

rif
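The single-float-bits trick above, generalized to doubles, amounts to writing each element's raw IEEE-754 bits to a binary stream. Here is the same idea sketched in Python's struct module (purely illustrative; the layout chosen here, big-endian with dimensions first, is just one possible convention, not a Matlisp format):

```python
import struct

# Illustrative sketch: serialize a matrix of doubles to a binary file
# as raw IEEE-754 bits, then read it back losslessly.

def write_matrix(path, mat):
    """Write dimensions as two unsigned ints, then rows of doubles."""
    rows, cols = len(mat), len(mat[0])
    with open(path, "wb") as f:
        f.write(struct.pack(">II", rows, cols))
        for row in mat:
            f.write(struct.pack(">%dd" % cols, *row))

def read_matrix(path):
    """Read back a matrix written by write_matrix."""
    with open(path, "rb") as f:
        rows, cols = struct.unpack(">II", f.read(8))
        return [list(struct.unpack(">%dd" % cols, f.read(8 * cols)))
                for _ in range(rows)]
```

Because the bits are written verbatim, the round trip is exact, with no decimal-printing precision loss.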
|
From: Raymond T. <to...@rt...> - 2002-11-01 18:40:46
|
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Yeah, I'm on x-86 right now. I agree that it only buys me memory, but
rif> it buys me a lot of memory, which in turn leads to the ability to deal
rif> with systems that are O(sqrt(n)) larger. On the other hand, I do
rif> agree that it's not worth it if it's a lot of tedious work.
Somehow, I think by making the systems O(sqrt(n)) larger, you are
probably getting much worse results, because the condition number
of the matrix is much worse in single precision.
But I'm not a matrix expert.
rif> In a related question, how do I save a matrix of doubles to a file
rif> (under CMUCL)? For arrays of floats, I'm using something like
rif> (write-byte (kernel:single-float-bits (aref arr i j)) str)
How do you use single-float arrays with Matlisp? Do you convert them
back and forth as needed?
rif> What's the equivalent for matlisp matrices? I want to read and store
rif> them in files.
Matlisp stores matrices in memory as standard Lisp (simple-array
double-float (*)). There aren't any routines to read or write
matrices to files other than standard Lisp. Perhaps there should be?
You could just save a core file which should save all of your arrays
which you can reload later.
Ray
|
|
From: Raymond T. <to...@rt...> - 2002-10-29 17:00:57
|
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
It means I was too lazy either to look up the necessary routines or
to add the necessary smarts, because I didn't need them.
Ray
|
|
From: Raymond T. <to...@rt...> - 2002-10-29 17:05:03
|
>>>>> "rif" == rif <ri...@MI...> writes:
rif> Yes, R\t means solve this matrix, but Matlab/Octave are able to take
rif> advantage of the fact that R is upper or lower triangular, so solving
rif> each triangular system takes O(n^2) rather than O(n^3) operations (you
rif> pay the O(n^3) once when you factor the matrix).
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
I also wanted to say that if you know the name of the BLAS or LAPACK
routines that perform Cholesky decomposition and solution, please
point them out and I can create the necessary FFI for them.
Ray
|
|
From: Raymond T. <to...@rt...> - 2002-11-01 17:50:36
|
>>>>> "rif" == rif <ri...@MI...> writes:
rif> I guess I'm a little confused. If we don't already have operations
rif> like this, what's the point of exposing LU from Lapack? AFAIK, the
rif> reason to do an LU decomposition is so that I can then use it to solve
rif> systems in time O(n^2)...
Did the messages from Michael Koerber and Nicolas Neuss give you what
you need? If not, please let us know, and we'll see what we can do for
you.
Ray
|