lapackpp-devel Mailing List for Lapack++ (Page 12)
From: Christian S. <sti...@tu...> - 2005-05-17 00:08:11
|
Hi, 1. Please direct all questions to the project mailing list lap...@li..., *never* to individual developers. 2. If you get linker errors about particular functions, you should check whether these (zgesdd) exist at all in the LAPACK.dll that you are using. I'm using the original LAPACK sources with our patches included as described in README.w32, and this works fine. Christian On Monday, 9 May 2005 17:27, Oscar Fernández Garcia wrote: > Hello, > > I'm trying to compile lapackpp with msys and mingw as described in the > files README and README.w32. I'm using BLAS.dll and LAPACK.dll from > http://cvs.sourceforge.net/viewcvs.py/matlisp/matlisp/lib/Attic/. > Configure finishes without any problem, but during compilation with make, the > linker doesn't find the functions zgesdd_ and dgesdd_. Is this a problem with the > version of LAPACK.dll, or another problem? I have also included a trace > of the results below. > > Thanks, > ofe...@to... > > > creating liblapackpp.la > (cd .libs && rm -f liblapackpp.la && ln -s ../liblapackpp.la > liblapackpp.la) windres -i ressource.rc -o ressource.o > mkdir -p dlldir && \ > ( \ > if test -r liblapackpp.a; then \ > SRCLIB=liblapackpp.a; \ > else \ > SRCLIB=.libs/liblapackpp.a; \ > fi; \ > cd dlldir && ar x "../${SRCLIB}" && \ > ( ls ../.libs/*.o && cp ../.libs/*.o .) && \ > dlltool -e ../exports.o *.o && \ > dllwrap \ > --export-all \ > --output-def ../lapackpp32.def \ > --implib ../lapackpp32.lib \ > --driver-name gcc \ > -o ../lapackpp32.dll \ > *.o ../exports.o \ > ../ressource.o -L/mingw/lib -L/c/WINNT/system32 -s -lstdc++ > -L/C/WINNT/system32 -llapack -lblas -Lc:/MinGW/lib > -Lc:/WINNT/system32 -Lc:/MinGW/bin/../lib/gcc-lib/mingw32/3.2.3 > -Lc:/MinGW/bin/../lib/gcc-lib -L/mingw/lib/gcc-lib/mingw32/3.2.3 > -Lc:/MinGW/bin/../lib/gcc-lib/mingw32/3.2.3/../../../../mingw32/lib > -L/mingw/lib/gcc-lib/mingw32/3.2.3/../../../../mingw32/lib > -Lc:/MinGW/bin/../lib/gcc-lib/mingw32/3.2.3/../../.. 
> -L/mingw/lib/gcc-lib/mingw32/3.2.3/../../.. -lfrtbegin -lg2c > -lmingw32 -lmoldname -lmingwex -lmsvcrt -luser32 -lkernel32 -ladvapi32 > -lshell32 \ > ) > ../.libs/dopla.o ../.libs/eigslv.o ../.libs/lasvd.o ../.libs/linslv.o > ../.libs/dtimmg.o ../.libs/laprefs.o ../.libs/lautil.o > ../.libs/systime.o lasvd.o(.text+0x4d0): In function > `Z8LaSVD_IPR15LaGenMatComplexR14LaVectorDoubleS0_S0_': > c:/users/ofernandez/bsd/lapackpp/src/../include/gmc.h:489: undefined > reference to `zgesdd_' > lasvd.o(.text+0xd29): In function > `Z8LaSVD_IPR15LaGenMatComplexR14LaVectorDouble': > c:/users/ofernandez/bsd/lapackpp/src/../include/gmc.h:489: undefined > reference to `zgesdd_' > lasvd.o(.text+0x1523): In function > `Z8LaSVD_IPR14LaGenMatDoubleR14LaVectorDoubleS0_S0_': > c:/users/ofernandez/bsd/lapackpp/src/../include/gmd.h:459: undefined > reference to `dgesdd_' > lasvd.o(.text+0x16a1):c:/users/ofernandez/bsd/lapackpp/src/../include/gmd.h:459: undefined reference to `dgesdd_' > lasvd.o(.text+0x1e11): In function > `Z8LaSVD_IPR14LaGenMatDoubleR14LaVectorDouble': > c:/users/ofernandez/bsd/lapackpp/src/../include/gmd.h:459: undefined > reference to `dgesdd_' > c:\MinGW\bin\dllwrap.exe: no export definition file provided. > Creating one, but that may not be what you want > c:\MinGW\bin\dllwrap.exe: gcc exited with status 1 > make[2]: *** [lapackpp32.dll] Error 1 > make[2]: Leaving directory `/c/users/ofernandez/bsd/lapackpp/src' > make[1]: *** [all-recursive] Error 1 > make[1]: Leaving directory `/c/users/ofernandez/bsd/lapackpp' > make: *** [all] Error 2 |
From: Christian S. <sti...@tu...> - 2005-05-16 13:44:04
|
Hi, 1. Please direct all questions to the project mailing list lap...@li..., *never* to individual developers. Hey, you are an open source project yourself, so you should know how this works! On Thursday, 12 May 2005 10:13, Jorge wrote: > Hi, yes, the changes do not affect the way things run under *nix. > I still need to compile with the VC++ toolkit; I have just tried with MSVC 6.0. If you have any suggested source code changes that would fix your problems, then simply submit a patch. It might take some days until I respond to these, but after I have double-checked that Unix/Linux is not affected, I would happily include them in CVS and future releases. Christian |
From: Lee, Hong-M. <hl...@un...> - 2005-05-02 19:44:56
|
Hi All, I am a first-time user of Lapack++ and am having trouble getting it to work. Mr. Stimming has kindly pointed me to this mailing list. Hopefully I can get some help here. Basically, I am trying to use Lapack in my new projects with Microsoft Visual C++ .NET. I have used Linpack and Lapack extensively in my Fortran programs, but this would be my first application of the package with C++. After some searching on the web, I think Lapack++ is my best bet among C++ templates for Lapack. First, I tried to use the pre-compiled version for Win32 systems. However, as expected, it did not work due to unresolved externals. Thus I think I have to recompile and rebuild the library. However, instead of reinventing the wheel, I would like to use whatever others may have created for the Visual C++ .NET platform. Is there such an implementation? If so, where can I find it? If not, can someone please give me suggestions for building the library? I would be glad to contribute to the project if I can make it work. Thanks. Regards, Hong-Ming Lee |
From: Christian S. <sti...@tu...> - 2005-04-30 17:20:16
|
Dear Hong-Ming, On Saturday, 30 April 2005 00:12, Lee, Hong-Ming wrote: > Could you please give me a hand with my attempt to use Lapack++ in my > Visual C++ .NET projects? > > I've tried to use the pre-compiled version for Win32 systems and failed > (expected?) due to errors such as: > unresolved external symbol "public: __thiscall > LaGenMatDouble::LaGenMatDouble(int,int)" (??0LaGenMatDouble@@QAE@HH@Z) > referenced in function "public: __thiscall > LaVectorDouble::LaVectorDouble(int)" (??0LaVectorDouble@@QAE@H@Z) These are expected. In C++, a pre-compiled DLL and an application can only be used together if both have been compiled by the same compiler. Yes, C++ really works this way. Since the pre-compiled DLL has been created with the gcc compiler, it can only be used with gcc-compiled programs. It is therefore expected that it doesn't work with any other compiler. > Thus I think I have to recompile and re-build the library. However, > instead of reinventing the wheel, I would like to use whatever others may > have created for the Visual C++ .NET platform. Are you aware of any such > implementations? If so, can you give me the contact info? You don't have to reinvent the wheel. It is absolutely intended that for any compiler other than gcc you have to compile the library on your own (because of C++). Compiling a library is a perfectly normal thing to do in open-source projects. After all, it's all about having access to the source... Did this work for you? If you encounter any compile problems with other compilers, I would happily try to fix them, but the compiling itself can't be avoided. I think a project file for the Microsoft Visual C++ compiler (version 7) is included, but it hasn't been tested for a long time. Anyway, it should still be mostly usable. And in any case, please discuss problems related to Lapack++ only on the mailing list lap...@li..., not with individual developers. Regards, Christian |
From: Christophe P. <pi...@tl...> - 2005-04-19 14:45:07
|
Dear Christian, Thank you for the answer. Christian Stimming wrote: > LaBandMatDouble A(m,n); > // fill A ..., then: > LaBandFactDouble AF(A); > LaBandMatFactorize(A, AF); > // A and AF both refer to the factorization, which is fine: > LaLinearSolve(AF, X, B); // ... and so on. That is what I tried first, but it seems that the factorized matrix and the pivot are not correct (I compared the results with LAPACK F77 code that is proven to be correct). When I print the contents of each matrix, AF doesn't have the correct form, and the pivot vector is completely incoherent. So I tried what you suggested last time and redesigned a class based on the philosophy you used in gfqrc.h. This time I can get the correct matrix (I mean the correct factorized matrix), but the pivot is obviously erroneous (out-of-bounds references into the matrix). I don't think my design was correct (it was a quick prototype for my problem), but the integer pointer to the memory has strange behavior. I will keep looking at my implementation of this class, and if I manage to make it work, I will be glad to send it to you. Best regards, Christophe Picard > This does not mean that the current lapackpp code is correct in any > case, but at least it should work for you. The point here is that the > original matrix A is no longer being used after constructing the > factorization. > > The only real solution would be a full redesign of the factorization > classes, somehow along the lines of gfqrc.h. But for now, the > above code should work for you. > > Best regards, > > Christian Stimming |
From: Christian S. <sti...@tu...> - 2005-04-19 09:03:14
|
Dear Christophe, Christophe Picard wrote: > Thank you for your answer. > Actually, I am not really a good programmer, but I am quite familiar > with the usage of matrices for solving PDEs. A lot of people > who use LAPACK F77 use the banded matrix rather than the full matrix for > memory efficiency. Personally, when I use it, I overwrite the > original matrix, since the only reason to use LU is that the solving > part is really fast compared to GMRES or other iterative methods, > and the overhead of computing the factorization becomes > negligible when the solver is used a lot. I totally agree that this solving method is very efficient in specific situations, and I would certainly like to offer the correct program code in lapackpp. After looking at this in more detail, I'd say that what you wanted to do can be achieved by constructing the LaBandFactDouble object with the original matrix A as argument, so that the original matrix A is used as storage for the decomposition/factorization: LaBandMatDouble A(m,n); // fill A ..., then: LaBandFactDouble AF(A); LaBandMatFactorize(A, AF); // A and AF both refer to the factorization, which is fine: LaLinearSolve(AF, X, B); // ... and so on. This does not mean that the current lapackpp code is correct in any case, but at least it should work for you. The point here is that the original matrix A is no longer being used after constructing the factorization. The only real solution would be a full redesign of the factorization classes, somehow along the lines of gfqrc.h. But for now, the above code should work for you. Best regards, Christian Stimming > I will try to see if I can write something. I am pretty interested in > solvers, and I would really like to make this one work. 
> > Regards, > Christophe > > > Christian Stimming wrote: > >> Dear Christophe, >> >> Christophe Picard wrote: >> >>> I am trying to use the banded matrices to solve PDEs, and I notice >>> that the >>> return arguments in the function LaBandMatFactorize are weird. >> >> >> >> Absolutely. Yes, in fact *all* the *FactDouble classes are pretty much >> messed up. In gfd.h, I added a comment which made this clear, but in >> the other Fact headers I forgot to add such a comment (it will be >> included immediately). >> >> In other words, the design of these classes is broken anyway. If you >> find a way to make them usable, then you would be very welcome to >> contribute your code to lapackpp. I will happily throw out the wrong >> code and replace it with your code. >> >>> I mean that >>> the pivot and upper/lower matrix should be saved in the member B and >>> not in >>> the original matrix: >> >> >> >> Err, yes, but the whole class needs more thought. It depends on how >> you construct the LaBandFactDouble object. If you use the constructor >> LaBandFactDouble(LaBandMatDouble& A), then the member variable B >> references the same memory as A, and the function LaBandMatFactorize >> will use the matrix A in-place, regardless of whether A or B is passed as >> a parameter. >> >> I'm not sure how such a factorization is used in practical numerics. >> It might well be possible that you should throw away the whole >> implementation of bfd.h and invent your own. On the other hand, I >> recently added the class LaGenQRFactComplex in gfqrc.h, and I think its >> design is one possibility to efficiently model matrix factorization as >> a class. Maybe you can try to copy the implementation model of gfqrc.h >> into one that uses LaBandMatDouble, but that's just one possibility >> out of many. >> >>> So my question is in bfd.h : (...) 
>>> should it be replaced with something more along these lines (maybe even overload >>> the function)? >>> >>> 00138 inline void LaBandMatFactorize(LaBandMatDouble &A, >>> LaBandFactDouble >>> &AF) >>> 00139 { >>> 00140 integer n = A.size(1), m = n, LDA = A.gdim(0); >>> 00141 integer KL = A.subdiags(), KU = A.superdiags(), info=0; >>> 00142 >>> 00143 F77NAME(dgbtrf)(&m, &n, &KL, &KU, &AF.B(0,0), &LDA, >>> &(AF.pivot()(0)), &info); >>> >>> otherwise the factorization is not saved into AF but into A, making AF.B >>> useless and the function LaLinearSolve unusable. >> >> >> >> Absolutely. Throw out the old erroneous function and replace it with >> your own function, and I will happily include that in the official >> lapackpp repository, especially if you have tested and verified your >> implementation. >> >>> This is just a question/remark. >> >> >> >> Thank you very much for this feedback. >> >> Regards, >> >> Christian Stimming >> > > |
From: Christian S. <sti...@tu...> - 2005-04-19 07:32:18
|
Ok, thank you very much. Jacob (Jack) Gryn wrote: > Done, and committed to CVS. > > I tested the gmd one; I'm assuming the others will work. The only difference > is the complex one, where it does .r and .i separately (but in the same loop). I agree, the complex case should be fine as well. By the way, did you think about the work size variable of your LaLUInverseIP code? http://sourceforge.net/mailarchive/message.php?msg_id=11404933 Christian > > Jack > > >>-----Original Message----- >>From: lap...@li... [mailto:lapackpp-devel- >>ad...@li...] On Behalf Of Christian Stimming >>Sent: April 18, 2005 4:03 PM >>To: Jacob (Jack) Gryn; lap...@li... >>Subject: Re: [Lapackpp-devel] Operator+= >> >>On Monday, 18 April 2005 19:45, Jacob (Jack) Gryn wrote: >> >>>Just FYI, I was referring to operator+=(double) or operator+=(complex), >>>etc. (not matrix addition, which you can do with Blas_Add_Mult). >> >>These are fine and you can add them as you wish. They are quite clear >>about >>what will happen and which matrices/vectors get modified. (I'm actually >>wondering why I never needed those... whatever. Simply add them.) >> >>Regards, >> >>Christian |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-04-18 22:44:18
|
Done, and committed to CVS. I tested the gmd one; I'm assuming the others will work. The only difference is the complex one, where it does .r and .i separately (but in the same loop). Jack > -----Original Message----- > From: lap...@li... [mailto:lapackpp-devel- > ad...@li...] On Behalf Of Christian Stimming > Sent: April 18, 2005 4:03 PM > To: Jacob (Jack) Gryn; lap...@li... > Subject: Re: [Lapackpp-devel] Operator+= > > On Monday, 18 April 2005 19:45, Jacob (Jack) Gryn wrote: > > Just FYI, I was referring to operator+=(double) or operator+=(complex), > > etc. (not matrix addition, which you can do with Blas_Add_Mult). > > These are fine and you can add them as you wish. They are quite clear > about > what will happen and which matrices/vectors get modified. (I'm actually > wondering why I never needed those... whatever. Simply add them.) > > Regards, > > Christian > > > > > > > > > Jack > > > > > > > > _____ > > > > From: lap...@li... > > [mailto:lap...@li...] On Behalf Of Jacob > > (Jack) Gryn > > Sent: April 18, 2005 1:43 PM > > To: lap...@li... > > Subject: [Lapackpp-devel] Operator+= > > > > > > > > I don't remember if I brought this up before, but is there any problem > > adding "operator+=" to the matrix/vector classes? > > > > > > > > They should not use any extra memory, as all the processing is done > within > > the existing matrices/vectors. > > > > > > > > Any thoughts? > > > > > > > > Jack > > _______________________________________________ > lapackpp-devel mailing list > lap...@li... > https://lists.sourceforge.net/lists/listinfo/lapackpp-devel |
From: Christian S. <sti...@tu...> - 2005-04-18 19:58:33
|
On Monday, 18 April 2005 19:45, Jacob (Jack) Gryn wrote: > Just FYI, I was referring to operator+=(double) or operator+=(complex), > etc. (not matrix addition, which you can do with Blas_Add_Mult). These are fine and you can add them as you wish. They are quite clear about what will happen and which matrices/vectors get modified. (I'm actually wondering why I never needed those... whatever. Simply add them.) Regards, Christian > > > > Jack > > > > _____ > > From: lap...@li... > [mailto:lap...@li...] On Behalf Of Jacob > (Jack) Gryn > Sent: April 18, 2005 1:43 PM > To: lap...@li... > Subject: [Lapackpp-devel] Operator+= > > > > I don't remember if I brought this up before, but is there any problem > adding "operator+=" to the matrix/vector classes? > > > > They should not use any extra memory, as all the processing is done within > the existing matrices/vectors. > > > > Any thoughts? > > > > Jack |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-04-18 17:47:22
|
Just FYI, I was referring to operator+=(double) or operator+=(complex), etc. (not matrix addition which you can do with Blas_Add_Mult). Jack _____ From: lap...@li... [mailto:lap...@li...] On Behalf Of Jacob (Jack) Gryn Sent: April 18, 2005 1:43 PM To: lap...@li... Subject: [Lapackpp-devel] Operator+= I don't remember if I brought this up before, but is there any problem adding "operator+=" to the matrix/vector classes? They should not use any extra memory, as all the processing is done within the existing matrices/vectors. Any thoughts? Jack |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-04-18 17:44:53
|
I don't remember if I brought this up before, but is there any problem adding "operator+=" to the matrix/vector classes? They should not use any extra memory, as all the processing is done within the existing matrices/vectors. Any thoughts? Jack |
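The in-place scalar `operator+=` proposed above can be sketched with a toy class. This is only an illustration of the no-extra-memory point; `MiniMat` is a made-up class for this sketch, not lapackpp's actual `LaGenMatDouble` implementation:

```cpp
#include <cassert>
#include <vector>

// Toy dense matrix, illustration only -- not the lapackpp LaGenMatDouble class.
class MiniMat {
public:
    MiniMat(int rows, int cols, double fill = 0.0)
        : rows_(rows), cols_(cols), data_(rows * cols, fill) {}

    // In-place scalar addition: modifies this matrix, allocates nothing.
    MiniMat& operator+=(double s) {
        for (double& x : data_) x += s;
        return *this;
    }

    // Read-only element access (row-major layout).
    double operator()(int i, int j) const { return data_[i * cols_ + j]; }

private:
    int rows_, cols_;
    std::vector<double> data_;
};
```

Since the loop writes through the existing storage, no temporary matrix is ever created, which is exactly the property Jack is pointing out; a complex version would do the same with the real and imaginary parts updated in one loop.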
From: Christophe P. <pi...@tl...> - 2005-04-15 15:13:56
|
Dear Christian, Thank you for your answer. Actually, I am not really a good programmer, but I am quite familiar with the usage of matrices for solving PDEs. A lot of people who use LAPACK F77 use the banded matrix rather than the full matrix for memory efficiency. Personally, when I use it, I overwrite the original matrix, since the only reason to use LU is that the solving part is really fast compared to GMRES or other iterative methods, and the overhead of computing the factorization becomes negligible when the solver is used a lot. I will try to see if I can write something. I am pretty interested in solvers, and I would really like to make this one work. Regards, Christophe Christian Stimming wrote: > Dear Christophe, > > Christophe Picard wrote: > >> I am trying to use the banded matrices to solve PDEs, and I notice >> that the >> return arguments in the function LaBandMatFactorize are weird. > > > Absolutely. Yes, in fact *all* the *FactDouble classes are pretty much > messed up. In gfd.h, I added a comment which made this clear, but in > the other Fact headers I forgot to add such a comment (it will be > included immediately). > > In other words, the design of these classes is broken anyway. If you > find a way to make them usable, then you would be very welcome to > contribute your code to lapackpp. I will happily throw out the wrong > code and replace it with your code. > >> I mean that >> the pivot and upper/lower matrix should be saved in the member B and >> not in >> the original matrix: > > > Err, yes, but the whole class needs more thought. It depends on how > you construct the LaBandFactDouble object. If you use the constructor > LaBandFactDouble(LaBandMatDouble& A), then the member variable B > references the same memory as A, and the function LaBandMatFactorize > will use the matrix A in-place, regardless of whether A or B is passed as > a parameter. > > I'm not sure how such a factorization is used in practical numerics. 
> It might well be possible that you should throw away the whole > implementation of bfd.h and invent your own. On the other hand, I > recently added the class LaGenQRFactComplex in gfqrc.h, and I think its > design is one possibility to efficiently model matrix factorization as > a class. Maybe you can try to copy the implementation model of gfqrc.h > into one that uses LaBandMatDouble, but that's just one possibility > out of many. > >> So my question is in bfd.h : (...) >> should it be replaced with something more along these lines (maybe even overload >> the function)? >> >> 00138 inline void LaBandMatFactorize(LaBandMatDouble &A, >> LaBandFactDouble >> &AF) >> 00139 { >> 00140 integer n = A.size(1), m = n, LDA = A.gdim(0); >> 00141 integer KL = A.subdiags(), KU = A.superdiags(), info=0; >> 00142 >> 00143 F77NAME(dgbtrf)(&m, &n, &KL, &KU, &AF.B(0,0), &LDA, >> &(AF.pivot()(0)), &info); >> >> otherwise the factorization is not saved into AF but into A, making AF.B >> useless and the function LaLinearSolve unusable. > > > Absolutely. Throw out the old erroneous function and replace it with > your own function, and I will happily include that in the official > lapackpp repository, especially if you have tested and verified your > implementation. > >> This is just a question/remark. > > > Thank you very much for this feedback. > > Regards, > > Christian Stimming > |
From: Christian S. <sti...@tu...> - 2005-04-15 11:18:58
|
Dear Christophe, Christophe Picard wrote: > I am trying to use the banded matrices to solve PDEs, and I notice that the > return arguments in the function LaBandMatFactorize are weird. Absolutely. Yes, in fact *all* the *FactDouble classes are pretty much messed up. In gfd.h, I added a comment which made this clear, but in the other Fact headers I forgot to add such a comment (it will be included immediately). In other words, the design of these classes is broken anyway. If you find a way to make them usable, then you would be very welcome to contribute your code to lapackpp. I will happily throw out the wrong code and replace it with your code. > I mean that > the pivot and upper/lower matrix should be saved in the member B and not in > the original matrix: Err, yes, but the whole class needs more thought. It depends on how you construct the LaBandFactDouble object. If you use the constructor LaBandFactDouble(LaBandMatDouble& A), then the member variable B references the same memory as A, and the function LaBandMatFactorize will use the matrix A in-place, regardless of whether A or B is passed as a parameter. I'm not sure how such a factorization is used in practical numerics. It might well be possible that you should throw away the whole implementation of bfd.h and invent your own. On the other hand, I recently added the class LaGenQRFactComplex in gfqrc.h, and I think its design is one possibility to efficiently model matrix factorization as a class. Maybe you can try to copy the implementation model of gfqrc.h into one that uses LaBandMatDouble, but that's just one possibility out of many. > So my question is in bfd.h : (...) 
> should it be replaced with something more along these lines (maybe even overload > the function)? > > 00138 inline void LaBandMatFactorize(LaBandMatDouble &A, LaBandFactDouble > &AF) > 00139 { > 00140 integer n = A.size(1), m = n, LDA = A.gdim(0); > 00141 integer KL = A.subdiags(), KU = A.superdiags(), info=0; > 00142 > 00143 F77NAME(dgbtrf)(&m, &n, &KL, &KU, &AF.B(0,0), &LDA, > &(AF.pivot()(0)), &info); > > otherwise the factorization is not saved into AF but into A, making AF.B > useless and the function LaLinearSolve unusable. Absolutely. Throw out the old erroneous function and replace it with your own function, and I will happily include that in the official lapackpp repository, especially if you have tested and verified your implementation. > This is just a question/remark. Thank you very much for this feedback. Regards, Christian Stimming |
From: Christophe P. <pi...@tl...> - 2005-04-14 18:18:12
|
Hello, I am trying to use the banded matrices to solve PDEs, and I notice that the return arguments in the function LaBandMatFactorize are weird. I mean that the pivot and upper/lower matrix should be saved in the member B and not in the original matrix. So my question is about bfd.h: 00138 inline void LaBandMatFactorize(LaBandMatDouble &A, LaBandFactDouble &AF) 00139 { 00140 integer n = A.size(1), m = n, LDA = A.gdim(0); 00141 integer KL = A.subdiags(), KU = A.superdiags(), info=0; 00142 00143 F77NAME(dgbtrf)(&m, &n, &KL, &KU, &A(0,0), &LDA, &(AF.pivot()(0)), &info); should it be replaced with something more along these lines (maybe even overload the function)? 00138 inline void LaBandMatFactorize(LaBandMatDouble &A, LaBandFactDouble &AF) 00139 { 00140 integer n = A.size(1), m = n, LDA = A.gdim(0); 00141 integer KL = A.subdiags(), KU = A.superdiags(), info=0; 00142 00143 F77NAME(dgbtrf)(&m, &n, &KL, &KU, &AF.B(0,0), &LDA, &(AF.pivot()(0)), &info); otherwise the factorization is not saved into AF but into A, making AF.B useless and the function LaLinearSolve unusable. This is just a question/remark. Thanks Christophe |
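The memory argument for banded storage that runs through this thread can be illustrated, independently of lapackpp, with the simplest banded case: a tridiagonal system solved by the Thomas algorithm, which stores three length-n arrays instead of an n-by-n matrix. This is a standalone sketch (no pivoting, so it assumes a diagonally dominant matrix, as 1-D PDE stencils typically are), not the dgbtrf-based code from bfd.h:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Solve a tridiagonal system A x = d with the Thomas algorithm.
// a = subdiagonal (a[0] unused), b = diagonal, c = superdiagonal (c[n-1] unused).
// Storage is O(n) instead of the O(n^2) a full matrix would need.
// No pivoting: assumes A is diagonally dominant (typical for 1-D PDE stencils).
std::vector<double> solve_tridiag(std::vector<double> a, std::vector<double> b,
                                  std::vector<double> c, std::vector<double> d) {
    const std::size_t n = b.size();
    // Forward elimination: fold each subdiagonal entry into the rows below.
    for (std::size_t i = 1; i < n; ++i) {
        const double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    // Back substitution.
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0; )
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    return x;
}
```

For the 1-D Poisson stencil (-1, 2, -1) with a right-hand side of all ones and n = 3, this returns x = (1.5, 2, 1.5). The same memory/speed trade-off is what dgbtrf exploits for general band widths, which is why the factorization overhead becomes negligible when the solver is reused many times.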
From: Marc G. <ma...@co...> - 2005-04-07 00:33:30
|
I am testing, as well as desperately needing, a C++ algorithm for determining eigenvectors and eigenvalues for a generalized real, symmetric matrix problem, where eigenvectors(A,B) gives the generalized eigenvectors of A with respect to B. Has anyone out there done this? Could you please set up an example for me? I would really appreciate it. I've attached a text file which contains my A and B matrices. Marc |
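For what "eigenvectors of A with respect to B" means, the 2x2 symmetric case can be worked by hand: the generalized eigenvalues are the roots of det(A - lambda*B) = 0. The sketch below is only a toy illustration of that definition (the function name is made up, not a lapackpp API); for real problem sizes the standard route is LAPACK's dsygv routine for the symmetric-definite problem A x = lambda B x:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Eigenvalues of the 2x2 symmetric generalized problem A x = lambda * B x,
// found as the roots of det(A - lambda * B) = 0. Hand-rolled illustration
// only; assumes det(B) != 0 and real roots (true for symmetric A, SPD B).
// A = [[a11, a12], [a12, a22]], B = [[b11, b12], [b12, b22]].
std::pair<double, double> gen_eig_2x2(double a11, double a12, double a22,
                                      double b11, double b12, double b22) {
    // det(A - t B) = (a11 - t b11)(a22 - t b22) - (a12 - t b12)^2
    //              = alpha t^2 + beta t + gamma
    const double alpha = b11 * b22 - b12 * b12;           // det(B)
    const double beta  = 2.0 * a12 * b12 - a11 * b22 - a22 * b11;
    const double gamma = a11 * a22 - a12 * a12;           // det(A)
    const double disc  = std::sqrt(beta * beta - 4.0 * alpha * gamma);
    return { (-beta - disc) / (2.0 * alpha), (-beta + disc) / (2.0 * alpha) };
}
```

With B = I and A = [[2,1],[1,2]], this reduces to the ordinary eigenproblem and yields the eigenvalues 1 and 3; each eigenvector then comes from the null space of (A - lambda B).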
From: Christian S. <sti...@tu...> - 2005-04-06 20:22:36
|
Hi, well, I'm glad to hear that this worked. Please ask further questions about lapackpp only on the mailing list lapackpp-devel, never to single developers. On Wednesday, 6 April 2005 22:16, Marc Gianzero wrote: > Christian, > > I don't know if you are interested, but it appears as though the Mac OS > X environment is a super platform for developing code using your > lapack++ code. I was successful in compiling the source code on my > Mac OS X (Panther v10.3.8) by linking the blas and lapack libraries to > the Apple Developer's framework VectorLib libraries. The config > options I chose were: > > ./configure FLIBS="-L/sw/lib -lfrtbegin -lg2c -lSystem" > --with-blas='-framework vecLib' --with-lapack --with=aqua Sure, I'll add this to the documentation. > I think I did this right. Now all I need to do is test it. If you are > willing to work with me, then perhaps you can help me test some code. > I must confess that I am struggling to find any substantial > documentation for lapack++ v2.20. I am not sure what function > calls are available and what the syntax is to call them. "make srcdoc", if doxygen is installed. A quite recent copy is on http://lapackpp.sourceforge.net . As explained there, no further current documentation exists. > If you wouldn't mind, I need help to solve an eigenvector problem of > the generalized, symmetric, real form. Could you show me how I could > do this with an example piece of code? I think someone else on this mailing list has recently done this... Jack? Anyone? Christian > > Thanks - > > Marc > > On Apr 6, 2005, at 2:06 AM, Christian Stimming wrote: > > Dear Marc, > > > > thanks for asking about lapack++. > > > > Marc Gianzero wrote: > >> I noticed you were deeply involved in working on LaPack++. I hope > >> you wouldn't mind answering my question if you can. > > > > Unfortunately I have never used OS X. Maybe someone on the > > lapackpp-devel mailing list can help here? 
> > > >> I wanted to install Lapack++ on my Macintosh (OS X, Panther 10.3.8). > >> I found the source on sourceforge.net and downloaded and compiled it > >> using the suggested FLIBS="-L/sw/lib -lfrtbegin -lg2c -lSystem" > >> option for the configure script. However, it now asks for BLAS and > >> LAPACK libraries to link to in the statement: > >> configure: error: Blas/Lapack was not found. > >> *** This means Lapack++ and matrix support cannot be compiled. > >> *** This makes this library unusable. Please get blas and lapack > >> *** installed. If you do have these installed, use the options > >> *** --with-blas=<libname> or --with-lapack=<libname> and/or set > >> *** the env variable LDFLAGS to include the appropriate linker > >> *** flags. > >> But I thought BLAS and Lapack were part of the Vector framework > >> system, under Apple developer. > >> Any ideas how to get lapack++ up on OS X correctly? If not, could > >> you point me to someone/somewhere that can? > > > > Well, you obviously have to get the LAPACK (without ++) library from > > somewhere. Usually it is clearly noted when lapack belongs to some > > other package, so I suppose it doesn't belong to that "Vector > > framework". You can also try to install LAPACK yourself: > > http://www.netlib.org/lapack/ > > > > Regards, > > > > Christian |
From: Christian S. <sti...@tu...> - 2005-04-06 15:05:27
|
Hi Jack, yes, please test this as well. Oh, I just see that LaLUInverseIP doesn't check the size of the work variable, and the documentation doesn't say anything about the expected size of the work variable. Maybe you could add code like this: // Check for minimum work size if ( work.size() < A.size(0) ) { int nb = LaEnvBlockSize("DGETRI", A); long int W = A.size(0) * nb; work.resize(W); } Regards, Christian Jacob (Jack) Gryn wrote: > I have tested them all except for LUInverse with the external work > variable. If you like, I'll test that one too. > > Jack > > >>-----Original Message----- >>From: Christian Stimming [mailto:sti...@tu...] >>Sent: Wednesday, April 06, 2005 6:04 AM >>To: Jacob (Jack) Gryn >>Cc: lap...@li... >>Subject: Re: [Lapackpp-devel] Updates >> >>Dear Jack, >> >>yes, I've seen your code. It looks quite good and I didn't have any >>further questions (otherwise I'd have replied earlier). Have you >>verified that the new functions really do what they are supposed >>to? If yes, then it's good. If you want, I could make a 2.2.1 release >>with your functions added, but otherwise I'd wait until May for the next >>release. >> >>Regards, >> >>Christian >> >>Jacob (Jack) Gryn wrote: >> >> >>>In case you haven't noticed, I've updated the CVS with the following >>>additions: >>> >>> >>> >>>- Addition of LUInverse and addition of LaEigSolve for >>>general-purpose matrices. >>> >>> >>> >>>There are two versions of the Inverse, for one of which you can re-use the >>>'work' variable. >>> >>> >>> >>>There are two versions of the LaEigSolve; the input is a square >>>LaGenMatDouble - since the output may have complex eigenvalues, one >>>version has output with an LaVectorComplex eigenvalue set, and the other >>>version has two LaVectorDouble eigenvalue sets (one real and one >>>imaginary; in case you only need one or wish to deal with real and >>>imaginary data separately, it won't allocate more memory). Both >>>versions give a LaGenMatDouble matrix for the set of eigenvectors. >>> >>> >>> >>>Jack >>> > > > > > |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-04-06 14:18:20
|
I have tested them all except for the LUInverse with the external work variable. If you like, I'll test that one too. Jack > -----Original Message----- > From: Christian Stimming [mailto:sti...@tu...] > Sent: Wednesday, April 06, 2005 6:04 AM > To: Jacob (Jack) Gryn > Cc: lap...@li... > Subject: Re: [Lapackpp-devel] Updates > > Dear Jack, > > yes, I've seen your code. It looks quite good and I didn't have any > further questions (otherwise I'd have replied earlier). Have you > verified the new functions that they really do what they are supposed > to? If yes, then it's good. If you want to, I could make a 2.2.1 release > with your functions added, but otherwise I'd wait until May for the next > release. > > Regards, > > Christian > > Jacob (Jack) Gryn schrieb: > > > Incase you haven't noticed, I've updated the CVS with the following > > additions: > > > > > > > > - Addition of LUInverse and Addition of LaEigSolve for > > general-purpose matrices. > > > > > > > > There are two versions of the Inverse; one of which, you can re-use the > > 'work' variable. > > > > > > > > There are two versions of the LaEigSolve; the input is a square > > LaGenMatDouble - since the output may have complex eigenvalues, one > > version has output with an LaVectorComplex eigenvalue set, and the other > > version has two LaVectorDouble eigenvalue sets (one real and one > > imaginary; in case you only need one or wish to deal with real and > > imaginary data separately, it won't allocate more memory). Both > > versions give a LaGenMatDouble matrix for the set of eigenvectors. > > > > > > > > Jack > > |
From: Christian S. <sti...@tu...> - 2005-04-06 10:04:09
|
Dear Jack, yes, I've seen your code. It looks quite good and I didn't have any further questions (otherwise I'd have replied earlier). Have you verified the new functions that they really do what they are supposed to? If yes, then it's good. If you want to, I could make a 2.2.1 release with your functions added, but otherwise I'd wait until May for the next release. Regards, Christian Jacob (Jack) Gryn schrieb: > In case you haven't noticed, I've updated the CVS with the following > additions: > > - Addition of LUInverse and Addition of LaEigSolve for > general-purpose matrices. > > There are two versions of the Inverse; one of which, you can re-use the > 'work' variable. > > There are two versions of the LaEigSolve; the input is a square > LaGenMatDouble – since the output may have complex eigenvalues, one > version has output with an LaVectorComplex eigenvalue set, and the other > version has two LaVectorDouble eigenvalue sets (one real and one > imaginary; in case you only need one or wish to deal with real and > imaginary data separately, it won't allocate more memory). Both > versions give a LaGenMatDouble matrix for the set of eigenvectors. > > Jack > |
From: Christian S. <sti...@tu...> - 2005-04-06 09:07:10
|
Dear Marc, thanks for asking about lapack++. Marc Gianzero schrieb: > I noticed you were deeply involved in working on LaPack++. I hope you > wouldn't mind answering my question if you can. Unfortunately I have never used OS X. Maybe someone on the lapackpp-devel mailing list can help here? > I wanted to install Lapack++ on my Macintosh (OS X, Panther 10.3.8). I > found the source on sourceforge.net and downloaded and compiled using the > suggested "FLIBS="-L/sw/lib -lfrtbegin -lg2c -lSystem" option for the > .configure file. However, it now asks for BLAS and LAPACK libraries to > link to in the statement: > > configure: error: Blas/Lapack was not found. > *** This means Lapack++ and matrix support cannot be compiled. > *** This makes this library unusable. Please get blas and lapack > *** installed. If you do have these installed, use the options > *** --with-blas=<libname> or --with-lapack=<libname> and/or set > *** the env variable LDFLAGS to include the appropriate linker > *** flags. > > But I thought BLAS and Lapack were part of the Vector framework system, > under Apple developer. > > Any ideas how to get lapack++ up on OS X correctly? If not, could you > point me to someone/place that can? Well, you obviously have to get the LAPACK (without ++) library from somewhere. Usually it is clearly noted when lapack belongs to some other package, so I suppose it doesn't belong to that "Vector framework". You can also try to install LAPACK yourself: http://www.netlib.org/lapack/ Regards, Christian |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-04-05 21:40:17
|
In case you haven't noticed, I've updated the CVS with the following additions: - Addition of LUInverse and Addition of LaEigSolve for general-purpose matrices. There are two versions of the Inverse; one of which, you can re-use the 'work' variable. There are two versions of the LaEigSolve; the input is a square LaGenMatDouble - since the output may have complex eigenvalues, one version has output with an LaVectorComplex eigenvalue set, and the other version has two LaVectorDouble eigenvalue sets (one real and one imaginary; in case you only need one or wish to deal with real and imaginary data separately, it won't allocate more memory). Both versions give a LaGenMatDouble matrix for the set of eigenvectors. Jack |
From: Christian S. <sti...@tu...> - 2005-03-31 08:42:46
|
Jacob (Jack) Gryn schrieb: > Do you have any recommendations on how to convert the 2 real-valued > matrices into a single complex-valued one? I can make a for-loop to do > this, but I'm not sure if this is considered efficient for lapackpp's > purposes. No, I don't have any further ideas besides the for-loop. In such an exceptional case an extra for loop in lapackpp will be fine. I don't understand why DGEEV doesn't write the eigenvalues into a complex vector in the first place, but maybe that's because they don't want to mix and match different data types. Whatever. In Lapackpp, both real and complex matrices are available, so we return the eigenvalues as complex-valued. Note, however, that this wouldn't work if someone were to use lapackpp without LA_COMPLEX_SUPPORT enabled. But that's okay, IMHO. Christian PS: Sorry for not answering your message http://sourceforge.net/mailarchive/message.php?msg_id=11268095 -- I just discovered that message waiting in my lapackpp-devel inbox unanswered. Fortunately you asked the question again and it's resolved now -- I hope I don't overlook messages again. > > >>-----Original Message----- >>From: Christian Stimming [mailto:sti...@tu...] >>Sent: March 30, 2005 3:48 PM >>To: Jacob (Jack) Gryn >>Cc: lap...@li... >>Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors >> >>Am Mittwoch, 30. März 2005 22:40 schrieb Jacob \(Jack\) Gryn: >> >>>With respect to void LaEigSolve, DGEEV/DGEEVX seem to return >>>eigenvalue/eigenvector pairs with the possibility that the eigenvalues >>>might be complex numbers. >> >>Right, that's what happens with general (non-symmetric) matrices. >> >> >>>Maybe it would be better to implement it using dgesdd? This may need a >>>transpose of the V matrix though, which is probably bad. >> >>No, SVD is something different. >> >> >>>Alternatively, I could have it return a LaGenMatComplex for the eigenvector >>>component.
>>> >>>Any thoughts on this? >> >>Yes, returning a LaGenMatComplex is probably better. However, the >>conversion >>from two real-valued vectors to one complex-valued will have to be done in >>Lapackpp. >> >>Christian >> >> >>>Jack >>> >>> >>>>-----Original Message----- >>>>From: Jacob (Jack) Gryn [mailto:jg...@cs...] >>>>Sent: March 30, 2005 12:37 PM >>>>To: 'Christian Stimming' >>>>Cc: 'lap...@li...' >>>>Subject: RE: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors >>>> >>>>I'll work on LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) and >>>>LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV, LaVectorDouble >>>>&work) >>>> >>>>How about adding >>>>LaInverse(const LaGenMatDouble &A, LaGenMatDouble &invA) >>>>LaInverseIP(LaGenMatDouble &A) >>>> >>>>For those who want to do it all in one shot? >>>> >>>>I'll also work on >>>> >>>>void LaEigSolve(const LaGenMatDouble &A, LaVectorDouble &eigvals, >>>>LaGenMatDouble &eigvec) >>>> >>>>Jack >>>> >>>> >>>>>-----Original Message----- >>>>>From: lap...@li... >>>>>[mailto:lapackpp-devel- ad...@li...] On Behalf Of >>>>>Christian Stimming >>>>>Sent: March 30, 2005 3:55 AM >>>>>To: Jacob (Jack) Gryn >>>>>Cc: lap...@li... >>>>>Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors >>>>> >>>>>Hi, >>>>> >>>>>Jacob (Jack) Gryn schrieb: >>>>> >>>>>>Apparently Lapack has a function named DGETRI for obtaining the inverse of a >>>>>>function. Would it be reasonable to create a function called inv() >>>>>>in LaGenMatDouble that makes use of DGETRI? (If not somewhere >>>>>>else?). >>>>> >>>>>Ok, I didn't know of DGETRI. Yes, that's reasonable to be implemented. >>>>>However, the input to DGETRI is not any general matrix but the input is >>>>>the result of the DGETRF operation, which in lapack++ is available e.g.
>>>>>in LUFactorizeIP in linslv.cc and laslv.h (huh, obviously I forgot to >>>>>add the leading "La" to the name; sorry). >>>>> >>>>>Therefore I would suggest to add a function like >>>>>LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) to >>>>>laslv.h/linslv.cc, where the documentation clearly says that the input >>>>>variables *must* come from LUFactorizeIP. I think that's better than >>>>>adding an inv() method to LaGenMatDouble, because that one would always >>>>>have to compute both dgetrf and dgetri but the fact that these are two >>>>>separate steps should be visible to lapackpp's users, IMHO. >>>>> >>>>>You can decide for yourself what to do with the WORK array of DGETRI -- >>>>>if this function is called several times, then you can gain a >>>>>significant speedup if the application re-uses the same WORK array >>>>>instead of allocating a new one each time (really!). Probably you can >>>>>add two functions for the Inverse, one with the WORK array as argument >>>>>and the other one without, where the latter internally allocates a new >>>>>WORK array and then calls the former function. >>>>> >>>>> >>>>>>In addition, I would like to build a pseudoinverse function as well >>>>>>(not implemented in lapack). The two options would be to do it according to the >>>>>>definition pinv(A)=inv(A'A)A', or to do the following (sorry for the matlab >>>>>>style notation) [U,D,V']=svd(A). pinv(A)=Vpinv(D)U', where to get pinv(D), >>>>>>just find the inverse of each non-zero value in the diagonal matrix. >>>>>>The second way apparently is more computationally efficient, but a loop with >>>>>>if statements to get the pseudoinverse of D may take longer than >>>>>>using lapack to calculate the pseudoinverse by definition.
>>>>> >>>>>I'm still quite hesitant to add functions concerning "the inverse" to >>>>>a numerical library (the exception being cases like above, when there >>>>>already exists a LAPACK function for this). Really, all the numerics >>>>>guy's faces turn red and they start sweating once you tell them you >>>>>want to calculate the inverse :-)) >>>>>http://www.netlib.org/lapack/lug/node26.html#1232 >>>>> >>>>>But seriously, I found for myself that the definition of "the useful >>>>>pseudo-inverse" depends much more on your actual application than you >>>>>might think, which means the calculation of the pseudoinverse should be >>>>>done in application code instead of lapackpp code. >>>>> >>>>>The SVD way of calculating it is a good example, because when you want >>>>>to calculate pinv(D) by inverting "each non-zero value", you have to >>>>>think twice what "non-zero" actually means. Does it mean "exactly equal >>>>>to the floating point number 0.0"? Probably not, but rather "smaller in >>>>>absolute value than some value epsilon". If epsilon is the machine's >>>>>precision, then it's approx. eps=2e-16 for doubles. But for your >>>>>application it might turn out that a much better value is something >>>>>smaller or larger, e.g. 1e-8 or 1e-4. But the whole point is that this >>>>>really really really depends on your application. Also on >>>>>http://www.netlib.org/lapack/lug/node53.html it says that there is no >>>>>explicit LAPACK function for the pseudoinverse but "The effective rank, >>>>>k, of A can be determined as the number of singular values which exceed >>>>>a suitable threshold." So I would ask you not to implement such a >>>>>function in lapackpp. >>>>> >>>>> >>>>>>Secondly, if I want to obtain the eigenvalues and eigenvectors of a general >>>>>>matrix, is DGEEV the correct function to implement?
Should I >> >>create >> >>>>>>a function to add to EigSlv.cc ? >>>>> >>>>>Yes, and yes. (Or maybe dgeevx?) For dgeev/dgeevx (whichever seems >>>>>easier for you) you can add a function right away. Just go ahead. >>>>> >>>>>Thanks, >>>>> >>>>>Christian >>>>> >>>>> >>>>> >>>>> >>>>>------------------------------------------------------- >>>>>SF email is sponsored by - The IT Product Guide >>>>>Read honest & candid reviews on hundreds of IT Products from real >>>>>users. Discover which products truly live up to the hype. Start >> >>reading >> >>>>>now. http://ads.osdn.com/?ad_id=3D6595&alloc_id=3D14396&op=3Dclick >>>>>_______________________________________________ >>>>>lapackpp-devel mailing list >>>>>lap...@li... >>>>>https://lists.sourceforge.net/lists/listinfo/lapackpp-devel >=20 >=20 >=20 >=20 >=20 |
From: Jacob \(Jack\) G. <jg...@cs...> - 2005-03-30 21:15:49
|
Do you have any recommendations on how to convert the 2 real-valued matrices into a single complex-valued one? I can make a for-loop to do this, but I'm not sure if this is considered efficient for lapackpp's purposes. > -----Original Message----- > From: Christian Stimming [mailto:sti...@tu...] > Sent: March 30, 2005 3:48 PM > To: Jacob (Jack) Gryn > Cc: lap...@li... > Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors > > Am Mittwoch, 30. März 2005 22:40 schrieb Jacob \(Jack\) Gryn: > > With respect to void LaEigSolve, DGEEV/DGEEVX seem to return > > eigenvalue/eigenvector pairs with the possibility that the eigenvalues > > might be complex numbers. > > Right, that's what happens with general (non-symmetric) matrices. > > > Maybe it would be better to implement it using dgesdd? This may need a > > transpose of the V matrix though, which is probably bad. > > No, SVD is something different. > > > Alternatively, I could have it return a LaGenMatComplex for the eigenvector > > component. > > > > Any thoughts on this? > > Yes, returning a LaGenMatComplex is probably better. However, the > conversion > from two real-valued vectors to one complex-valued will have to be done in > Lapackpp. > > Christian > > > > Jack > > > > > -----Original Message----- > > > From: Jacob (Jack) Gryn [mailto:jg...@cs...] > > > Sent: March 30, 2005 12:37 PM > > > To: 'Christian Stimming' > > > Cc: 'lap...@li...' > > > Subject: RE: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors > > > > > > I'll work on LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) and > > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV, LaVectorDouble > > > &work) > > > > > > How about adding > > > LaInverse(const LaGenMatDouble &A, LaGenMatDouble &invA) > > > LaInverseIP(LaGenMatDouble &A) > > > > > > For those who want to do it all in one shot?
> > > > > > I'll also work on > > > > > > void LaEigSolve(const LaGenMatDouble &A, LaVectorDouble &eigvals, > > > LaGenMatDouble &eigvec) > > > > > > Jack > > > > > > > -----Original Message----- > > > > From: lap...@li... > > > > [mailto:lapackpp-devel- ad...@li...] On Behalf Of > > > > Christian Stimming > > > > Sent: March 30, 2005 3:55 AM > > > > To: Jacob (Jack) Gryn > > > > Cc: lap...@li... > > > > Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors > > > > > > > Hi, > > > > > > > > Jacob (Jack) Gryn schrieb: > > > > > Apparently Lapack has a function named DGETRI for obtaining the inverse of a > > > > > function. Would it be reasonable to create a function called inv() > > > > > in LaGenMatDouble that makes use of DGETRI? (If not somewhere > > > > > else?). > > > > > > > > Ok, I didn't know of DGETRI. Yes, that's reasonable to be implemented. > > > > However, the input to DGETRI is not any general matrix but the input is > > > > the result of the DGETRF operation, which in lapack++ is available e.g. > > > > in LUFactorizeIP in linslv.cc and laslv.h (huh, obviously I forgot to > > > > add the leading "La" to the name; sorry). > > > > > > > > Therefore I would suggest to add a function like > > > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) to > > > > laslv.h/linslv.cc, where the documentation clearly says that the input > > > > variables *must* come from LUFactorizeIP. I think that's better than > > > > adding an inv() method to LaGenMatDouble, because that one would always > > > > have to compute both dgetrf and dgetri but the fact that these are two > > > > separate steps should be visible to lapackpp's users, IMHO.
> > > > > > > > You can decide for yourself what to do with the WORK array of DGETRI -- > > > > if this function is called several times, then you can gain a > > > > significant speedup if the application re-uses the same WORK array > > > > instead of allocating a new one each time (really!). Probably you can > > > > add two functions for the Inverse, one with the WORK array as argument > > > > and the other one without, where the latter internally allocates a new > > > > WORK array and then calls the former function. > > > > > > > > > In addition, I would like to build a pseudoinverse function as well > > > > > (not implemented in lapack). The two options would be to do it according to the > > > > > definition pinv(A)=inv(A'A)A', or to do the following (sorry for the matlab > > > > > style notation) [U,D,V']=svd(A). pinv(A)=Vpinv(D)U', where to get pinv(D), > > > > > just find the inverse of each non-zero value in the diagonal matrix. > > > > > The second way apparently is more computationally efficient, but a loop with > > > > > if statements to get the pseudoinverse of D may take longer than > > > > > using lapack to calculate the pseudoinverse by definition. > > > > > > > > I'm still quite hesitant to add functions concerning "the inverse" to > > > > a numerical library (the exception being cases like above, when there > > > > already exists a LAPACK function for this).
Really, all the numerics > > > > guy's faces turn red and they start sweating once you tell them you > > > > want to calculate the inverse :-)) > > > > http://www.netlib.org/lapack/lug/node26.html#1232 > > > > > > > > But seriously, I found for myself that the definition of "the useful > > > > pseudo-inverse" depends much more on your actual application than you > > > > might think, which means the calculation of the pseudoinverse should be > > > > done in application code instead of lapackpp code. > > > > > > > > The SVD way of calculating it is a good example, because when you want > > > > to calculate pinv(D) by inverting "each non-zero value", you have to > > > > think twice what "non-zero" actually means. Does it mean "exactly equal > > > > to the floating point number 0.0"? Probably not, but rather "smaller in > > > > absolute value than some value epsilon". If epsilon is the machine's > > > > precision, then it's approx. eps=2e-16 for doubles. But for your > > > > application it might turn out that a much better value is something > > > > smaller or larger, e.g. 1e-8 or 1e-4. But the whole point is that this > > > > really really really depends on your application. Also on > > > > http://www.netlib.org/lapack/lug/node53.html it says that there is no > > > > explicit LAPACK function for the pseudoinverse but "The effective rank, > > > > k, of A can be determined as the number of singular values which exceed > > > > a suitable threshold." So I would ask you not to implement such a > > > > function in lapackpp. > > > > > > > > > Secondly, if I want to obtain the eigenvalues and eigenvectors of a general > > > > > matrix, is DGEEV the correct function to implement? Should I create > > > > > a function to add to EigSlv.cc ? > > > > > > > > Yes, and yes. (Or maybe dgeevx?) For dgeev/dgeevx (whichever seems > > > > easier for you) you can add a function right away. Just go ahead.
> > > > > > > > Thanks, > > > > > > > > Christian > > > > > > > > > > > > > > > > ------------------------------------------------------- > > > > SF email is sponsored by - The IT Product Guide > > > > Read honest & candid reviews on hundreds of IT Products from real > > > > users. Discover which products truly live up to the hype. Start reading > > > > now. http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > > > > _______________________________________________ > > > > lapackpp-devel mailing list > > > > lap...@li... > > > > https://lists.sourceforge.net/lists/listinfo/lapackpp-devel |
From: Christian S. <sti...@tu...> - 2005-03-30 20:48:17
|
Am Mittwoch, 30. März 2005 22:26 schrieb Jacob \(Jack\) Gryn: > I guess my vision is to ultimately have something at a more abstracted > level, like matlab, where you could write, in C, something like C=inv(A*B). Ok, but that's not for lapackpp. Some of the functions in Matlab, like the one you mentioned, are probably horribly inefficient/unstable from a numerical point of view. > With respect to the inverse functions, I don't see why 2 lapack functions > can't be combined to make life simpler for people when they have the choice > not to use it. In my particular application, it's not run very often > (maybe once in the entire program), so I would rather make my code easier > to read in such a case. > > Ultimately, I'll leave the decision up to you, if you feel it's > inappropriate, I'll just stick the function in my own local library > instead, along with the pseudoinverse. I stick to my decision that there shouldn't be additional inverse functions in lapackpp if they aren't in the fortran lapack. Christian > > Jack > > > -----Original Message----- > > From: Christian Stimming [mailto:sti...@tu...] > > Sent: March 30, 2005 3:01 PM > > To: Jacob (Jack) Gryn > > Cc: lap...@li... > > Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors > > > > Hi, > > > > Am Mittwoch, 30. März 2005 19:37 schrieb Jacob (Jack) Gryn: > > > I'll work on LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) and > > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV, LaVectorDouble > > > &work) > > > > Good. > > > > > How about adding > > > LaInverse(const LaGenMatDouble &A, LaGenMatDouble &invA) > > > LaInverseIP(LaGenMatDouble &A) > > > For those who want to do it all in one shot? > > > > Sigh... do you desperately need these? I would rather prefer not to offer > > these as extra functions, because it's just totally against the "spirit" > > of a > > numerically efficient library.
In 95% of the cases people are better off > > by using the solver functions... and already the existence of the inverse > > functions will distract from that fact. > > > > > I'll also work on > > > > > > void LaEigSolve(const LaGenMatDouble &A, LaVectorDouble &eigvals, > > > LaGenMatDouble &eigvec) > > > > That's good. I'm looking forward to your contributions. > > > > Christian > > > > > Jack > > > > > > > -----Original Message----- > > > > From: lap...@li... [mailto:lapackpp- > > > > devel- > > > > ad...@li...] On Behalf Of Christian Stimming > > > > Sent: March 30, 2005 3:55 AM > > > > To: Jacob (Jack) Gryn > > > > Cc: lap...@li... > > > > Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and > > > > Eigenvectors > > > > > > > Hi, > > > > > > > > Jacob (Jack) Gryn schrieb: > > > > > Apparently Lapack has a function named DGETRI for obtaining the inverse of a > > > > > function. Would it be reasonable to create a function called inv() > > > > > in LaGenMatDouble that makes use of DGETRI? (If not somewhere else?). > > > > > > > > Ok, I didn't know of DGETRI. Yes, that's reasonable to be > > > > implemented. However, the input to DGETRI is not any general matrix > > > > but the input is > > > > the result of the DGETRF operation, which in lapack++ is available > > > > e.g. in LUFactorizeIP in linslv.cc and laslv.h (huh, obviously I forgot to > > > > add the leading "La" to the name; sorry). > > > > > > > > Therefore I would suggest to add a function like > > > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) to > > > > laslv.h/linslv.cc, where the documentation clearly says that the > > > > input variables *must* come from LUFactorizeIP.
I think that's better > > > > than adding an inv() method to LaGenMatDouble, because that one would > > > > always have to compute both dgetrf and dgetri but the fact that these are > > > > two separate steps should be visible to lapackpp's users, IMHO. > > > > > > > > You can decide for yourself what to do with the WORK array of DGETRI -- > > > > if this function is called several times, then you can gain a > > > > significant speedup if the application re-uses the same WORK array > > > > instead of allocating a new one each time (really!). Probably you can > > > > add two functions for the Inverse, one with the WORK array as > > > > argument and the other one without, where the latter internally > > > > allocates a new WORK array and then calls the former function. > > > > > > > > > In addition, I would like to build a pseudoinverse function as well > > > > > (not implemented in lapack). The two options would be to do it > > > > > according to the > > > > > definition pinv(A)=inv(A'A)A', or to do the following (sorry for the matlab > > > > > style notation) [U,D,V']=svd(A). pinv(A)=Vpinv(D)U', where to get pinv(D), > > > > > just find the inverse of each non-zero value in the diagonal matrix. > > > > > The second way apparently is more computationally efficient, but a loop with > > > > > if statements to get the pseudoinverse of D may take longer than > > > > > using lapack to calculate the pseudoinverse by definition. > > > > > > > > I'm still quite hesitant to add functions concerning "the inverse" to a > > > > numerical library (the exception being cases like above, when there > > > > already exists a LAPACK function for this).
Really, all the numerics > > > > guy's faces turn red and they start sweating once you tell them you > > > > want to calculate the inverse :-)) > > > > http://www.netlib.org/lapack/lug/node26.html#1232 > > > > > > > > But seriously, I found for myself that the definition of "the useful > > > > pseudo-inverse" depends much more on your actual application than you > > > > might think, which means the calculation of the pseudoinverse should be > > > > done in application code instead of lapackpp code. > > > > > > > > The SVD way of calculating it is a good example, because when you > > > > want to calculate pinv(D) by inverting "each non-zero value", you > > > > have to think twice what "non-zero" actually means. Does it mean > > > > "exactly equal > > > > to the floating point number 0.0"? Probably not, but rather "smaller in > > > > absolute value than some value epsilon". If epsilon is the machine's > > > > precision, then it's approx. eps=2e-16 for doubles. But for your > > > > application it might turn out that a much better value is something > > > > smaller or larger, e.g. 1e-8 or 1e-4. But the whole point is that this > > > > really really really depends on your application. Also on > > > > http://www.netlib.org/lapack/lug/node53.html it says that there is no > > > > explicit LAPACK function for the pseudoinverse but "The effective > > > > rank, k, of A can be determined as the number of singular values which > > > > exceed a suitable threshold." So I would ask you not to implement such a > > > > function in lapackpp. > > > > > > > > > Secondly, if I want to obtain the eigenvalues and eigenvectors of a > > > > > general matrix, is DGEEV the correct function to implement? Should I > > > > > create a > > > > > function to add to EigSlv.cc ? > > > > > > > > Yes, and yes. (Or maybe dgeevx?)
For dgeev/dgeevx (whichever seems > > > > easier for you) you can add a function right away. Just go ahead. > > > > > > > > Thanks, > > > > > > > > Christian > > > > > > > > > > > > > > > > ------------------------------------------------------- > > > > SF email is sponsored by - The IT Product Guide > > > > Read honest & candid reviews on hundreds of IT Products from real > > > > users. > > > > Discover which products truly live up to the hype. Start reading now. > > > > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > > > > _______________________________________________ > > > > lapackpp-devel mailing list > > > > lap...@li... > > > > https://lists.sourceforge.net/lists/listinfo/lapackpp-devel > > > > > > ------------------------------------------------------- > > > SF email is sponsored by - The IT Product Guide > > > Read honest & candid reviews on hundreds of IT Products from real > > > users. Discover which products truly live up to the hype. Start reading > > > now. http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > > > _______________________________________________ > > > lapackpp-devel mailing list > > > lap...@li... > > > https://lists.sourceforge.net/lists/listinfo/lapackpp-devel |
From: Christian S. <sti...@tu...> - 2005-03-30 20:47:06
|
Am Mittwoch, 30. März 2005 22:40 schrieb Jacob \(Jack\) Gryn: > With respect to void LaEigSolve, DGEEV/DGEEVX seem to return > eigenvalue/eigenvector pairs with the possibility that the eigenvalues > might be complex numbers. Right, that's what happens with general (non-symmetric) matrices. > Maybe it would be better to implement it using dgesdd? This may need a > transpose of the V matrix though, which is probably bad. No, SVD is something different. > Alternatively, I could have it return a LaGenMatComplex for the eigenvector > component. > > Any thoughts on this? Yes, returning a LaGenMatComplex is probably better. However, the conversion from two real-valued vectors to one complex-valued will have to be done in Lapackpp. Christian > > Jack > > > -----Original Message----- > > From: Jacob (Jack) Gryn [mailto:jg...@cs...] > > Sent: March 30, 2005 12:37 PM > > To: 'Christian Stimming' > > Cc: 'lap...@li...' > > Subject: RE: [Lapackpp-devel] Pseudoinverse, Eigenvalues and Eigenvectors > > > > I'll work on LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) and > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV, LaVectorDouble > > &work) > > > > How about adding > > LaInverse(const LaGenMatDouble &A, LaGenMatDouble &invA) > > LaInverseIP(LaGenMatDouble &A) > > > > For those who want to do it all in one shot? > > > > I'll also work on > > > > void LaEigSolve(const LaGenMatDouble &A, LaVectorDouble &eigvals, > > LaGenMatDouble &eigvec) > > > > Jack > > > > > -----Original Message----- > > > From: lap...@li... > > > [mailto:lapackpp-devel- ad...@li...] On Behalf Of > > > Christian Stimming > > > Sent: March 30, 2005 3:55 AM > > > To: Jacob (Jack) Gryn > > > Cc: lap...@li... > > > Subject: Re: [Lapackpp-devel] Pseudoinverse, Eigenvalues and > > Eigenvectors > > > Hi, > > > > > > Jacob (Jack) Gryn schrieb: > > > > Apparently Lapack has a function named DGETRI for obtaining the inverse of a > > > > function.
> > > > Would it be reasonable to create a function called inv()
> > > > in LaGenMatDouble that makes use of DGETRI? (If not, somewhere else?)
> > >
> > > Ok, I didn't know of DGETRI. Yes, that's reasonable to implement.
> > > However, the input to DGETRI is not just any general matrix; the input
> > > is the result of the DGETRF operation, which in lapack++ is available
> > > e.g. in LUFactorizeIP in linslv.cc and laslv.h (huh, obviously I
> > > forgot to add the leading "La" to the name; sorry).
> > >
> > > Therefore I would suggest adding a function like
> > > LaLUInverseIP(LaGenMatDouble &A, LaVectorLongInt &PIV) to
> > > laslv.h/linslv.cc, where the documentation clearly says that the input
> > > variables *must* come from LUFactorizeIP. I think that's better than
> > > adding an inv() method to LaGenMatDouble, because that one would
> > > always have to compute both dgetrf and dgetri, but the fact that these
> > > are two separate steps should be visible to lapackpp's users, IMHO.
> > >
> > > You can decide for yourself what to do with the WORK array of DGETRI --
> > > if this function is called several times, you can gain a significant
> > > speedup if the application re-uses the same WORK array instead of
> > > allocating a new one each time (really!). Probably you can add two
> > > functions for the inverse, one with the WORK array as argument and one
> > > without, where the latter internally allocates a new WORK array and
> > > then calls the former function.
> > >
> > > > In addition, I would like to build a pseudoinverse function as well
> > > > (not implemented in lapack). The two options would be to do it
> > > > according to the definition pinv(A)=inv(A'A)A', or to do the
> > > > following (sorry for the matlab-style notation): [U,D,V']=svd(A).
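The two-overload WORK-array pattern Christian suggests above can be sketched on a stand-in operation (an out-of-place transpose through a scratch buffer, not dgetri itself; the names are illustrative, not lapackpp API): the workspace-taking overload does the computation, and the convenience overload allocates a workspace and forwards.

```cpp
#include <vector>

// Workspace-taking variant: the caller owns `work` and may reuse it
// across many calls, avoiding one allocation per call.
void transposeIP(std::vector<double>& A, int n, std::vector<double>& work) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            work[j * n + i] = A[i * n + j];
    A = work;  // copy the result back; `work` stays valid for reuse
}

// Convenience variant: allocates a fresh workspace on every call and
// forwards to the variant above.
void transposeIP(std::vector<double>& A, int n) {
    std::vector<double> work(A.size());
    transposeIP(A, n, work);
}
```

In a tight loop an application would create one `work` vector up front and call the first overload repeatedly, which is exactly the speedup Christian describes.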
> > > > pinv(A)=Vpinv(D)U', where to get pinv(D), just find the inverse of
> > > > each non-zero value in the diagonal matrix.
> > > >
> > > > The second way apparently is more computationally efficient, but a
> > > > loop with if statements to get the pseudoinverse of D may take
> > > > longer than using lapack to calculate the pseudoinverse by
> > > > definition.
> > >
> > > I'm still quite hesitant to add functions concerning "the inverse" to
> > > a numerical library (the exception being cases like the above, where a
> > > LAPACK function already exists for it). Really, all the numerics guys'
> > > faces turn red and they start sweating once you tell them you want to
> > > calculate the inverse :-))
> > > http://www.netlib.org/lapack/lug/node26.html#1232
> > >
> > > But seriously, I found for myself that the definition of "the useful
> > > pseudo-inverse" depends much more on your actual application than you
> > > might think, which means the calculation of the pseudoinverse should
> > > be done in application code instead of lapackpp code.
> > >
> > > The SVD way of calculating it is a good example, because when you want
> > > to calculate pinv(D) by inverting "each non-zero value", you have to
> > > think twice about what "non-zero" actually means. Does it mean
> > > "exactly equal to the floating point number 0.0"? Probably not, but
> > > rather "smaller in absolute value than some value epsilon". If epsilon
> > > is the machine precision, then it's approx. eps=2e-16 for doubles. But
> > > for your application it might turn out that a much better value is
> > > something smaller or larger, e.g. 1e-8 or 1e-4. The whole point is
> > > that this really, really depends on your application.
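The "invert each singular value above a threshold" step being debated here is only a few lines, which also shows why it belongs in application code: the tolerance is deliberately a parameter the application must choose (illustrative code, not lapackpp API).

```cpp
#include <vector>

// Build pinv(D) from the singular values: invert each value above the
// application-chosen tolerance, and treat the rest as exact zeros.
// The tolerance decides the effective rank, so there is no single
// library-wide "right" value for it.
std::vector<double> pinvDiag(const std::vector<double>& s, double tol) {
    std::vector<double> out(s.size());
    for (std::size_t i = 0; i < s.size(); ++i)
        out[i] = (s[i] > tol) ? 1.0 / s[i] : 0.0;
    return out;
}
```

With tol = 1e-8, a singular value of 1e-12 is dropped as numerical noise; with tol = 1e-16, the same value would be inverted into a huge 1e12 entry that dominates the pseudoinverse, which is exactly the application-dependence Christian is pointing at.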
> > > Also, http://www.netlib.org/lapack/lug/node53.html says that there is
> > > no explicit LAPACK function for the pseudoinverse, but "The effective
> > > rank, k, of A can be determined as the number of singular values
> > > which exceed a suitable threshold." So I would ask you not to
> > > implement such a function in lapackpp.
> > >
> > > > Secondly, if I want to obtain the eigenvalues and eigenvectors of a
> > > > general matrix, is DGEEV the correct function to implement? Should
> > > > I create a function to add to EigSlv.cc?
> > >
> > > Yes, and yes. (Or maybe dgeevx?) For dgeev/dgeevx (whichever seems
> > > easier for you) you can add a function right away. Just go ahead.
> > >
> > > Thanks,
> > >
> > > Christian
> > >
> > > _______________________________________________
> > > lapackpp-devel mailing list
> > > lap...@li...
> > > https://lists.sourceforge.net/lists/listinfo/lapackpp-devel
|
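The real-to-complex conversion discussed at the top of the thread follows LAPACK's dgeev packing convention: eigenvalues come back as separate real and imaginary arrays, and when wi[j] is non-zero, columns j and j+1 of the real eigenvector matrix hold the real and imaginary parts of a conjugate pair. A sketch of the unpacking (names are illustrative, not lapackpp API):

```cpp
#include <complex>
#include <vector>

// Pair up dgeev-style real output into complex eigenvectors.
// wi: imaginary parts of the eigenvalues; vr: n x n column-major real
// eigenvector matrix as dgeev returns it.
std::vector<std::complex<double> > unpackEigvecs(
        const std::vector<double>& wi,
        const std::vector<double>& vr, int n) {
    std::vector<std::complex<double> > V(n * n);
    for (int j = 0; j < n; ) {
        if (wi[j] == 0.0) {          // real eigenvalue: copy column as-is
            for (int i = 0; i < n; ++i)
                V[j * n + i] = vr[j * n + i];
            ++j;
        } else {                     // conjugate pair in columns j, j+1
            for (int i = 0; i < n; ++i) {
                V[j * n + i] = std::complex<double>(
                    vr[j * n + i],  vr[(j + 1) * n + i]);
                V[(j + 1) * n + i] = std::complex<double>(
                    vr[j * n + i], -vr[(j + 1) * n + i]);
            }
            j += 2;
        }
    }
    return V;
}
```

A LaEigSolve returning LaGenMatComplex would do essentially this loop internally after the raw dgeev/dgeevx call.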