lapackpp-devel Mailing List for Lapack++ (Page 16)
Status: Beta
Brought to you by: cstim
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2004 | | | | | | | | 19 | 11 | | 4 | 15 |
| 2005 | 2 | 4 | 32 | 18 | 3 | | 1 | 4 | 13 | 5 | | 1 |
| 2006 | | 6 | 2 | 6 | 18 | 15 | 17 | 45 | 3 | 4 | 26 | 4 |
| 2007 | 11 | 14 | 1 | | 4 | | | 2 | | 1 | 1 | 2 |
| 2008 | | 2 | | 4 | 1 | 2 | | | | 1 | | |
| 2009 | | | | | | | | | | 3 | 1 | 1 |
| 2010 | 2 | 1 | 3 | | | | | | 4 | | 7 | |
| 2011 | | | 3 | | 2 | 2 | 2 | | 1 | | 14 | 1 |
| 2012 | | | 3 | | | | | | | | | |
| 2013 | | | | 2 | | | | | | | | |
| 2014 | | 2 | 5 | | | | | | | | | |
From: Christian S. <sti...@tu...> - 2004-08-10 20:51:41
On Tuesday, 10 August 2004 21:38, Jacob (Jack) Gryn wrote:
> I tried it out again with both square and non-square matrices on both complex and doubles. Results are the same for both complex matrices and doubles.
>
> With square matrices, the results are more viable. For the equation AX=B with A and B being the same matrix, X should be the identity matrix; however, it is not, although multiplying out AX still gives B.
>
> With rectangular matrices, the results are the same for both complex and doubles, but AX != B; X is usually all zeros, or very close to zero (around 10^-300).

When you say AX != B, how large is the actual difference, i.e. D = (AX - B), and what is Mat_Norm2(D)? If that is in fact in the region of 10^-15 or smaller, then you are only seeing rounding effects due to the floating-point representation -- I wouldn't consider this a problem. If your matrices contain numbers in normal ranges, then simply consider anything below 10^-10 as zero and you're fine. Did I misunderstand something?

Christian
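A minimal sketch of the residual check Christian suggests, using the Lapack++ calls named in this thread (LaLinearSolve, Blas_Mat_Mat_Mult). The umbrella header, the size(0)/size(1) accessors, and the exact Blas_Mat_Mat_Mult argument list are assumptions and may differ from the installed headers; the 10^-10 tolerance is the rule of thumb from the message above.

```cpp
#include <cmath>
#include <lapackpp.h>   // assumption: umbrella header for Lapack++ 2.x

// Solve A*X = B and return the Frobenius norm of the residual D = A*X - B.
// Per the discussion above, a value around 10^-15 is just floating-point
// rounding; anything below ~10^-10 can be treated as zero.
double solve_and_check(const LaGenMatDouble& A, const LaGenMatDouble& B)
{
    LaGenMatDouble X(A.size(1), B.size(1));   // X is n x k for an m x n A, m x k B
    LaLinearSolve(A, X, B);                   // X such that A*X ~= B

    LaGenMatDouble AX(B.size(0), B.size(1));
    Blas_Mat_Mat_Mult(A, X, AX);              // AX := A * X

    double frob = 0.0;                        // accumulate ||A*X - B||_F^2 by hand
    for (int i = 0; i < B.size(0); ++i)
        for (int j = 0; j < B.size(1); ++j)
            frob += (AX(i, j) - B(i, j)) * (AX(i, j) - B(i, j));
    return std::sqrt(frob);
}
```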
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-10 19:38:34
I tried it out again with both square and non-square matrices on both complex and doubles. Results are the same for both complex matrices and doubles.

With square matrices, the results are more viable. For the equation AX=B with A and B being the same matrix, X should be the identity matrix; however, it is not, although multiplying out AX still gives B.

With rectangular matrices, the results are the same for both complex and doubles, but AX != B; X is usually all zeros, or very close to zero (around 10^-300).

Any thoughts?

Jack
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-10 13:31:04
I'll look into it some more later today when I get a chance. Keep in mind that the code runs differently for square matrices (it calls the LaLULinearSolve function) than for non-square matrices (it calls LaQRLinearSolve).

I remember that my resulting matrix X only had values in the leftmost column, and all were on the order of 10^-306 (I've never seen such small numbers in my life :) ), while Matlab's \ operator gave me good results.

I'll try the same thing with complex matrices to see what happens; maybe I'm doing something wrong.

Jack
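A sketch of the square/non-square dispatch Jack describes, written as a hypothetical wrapper. The real LaLinearSolve already performs this choice internally; the function names come from the thread, but their signatures are assumed here.

```cpp
#include <lapackpp.h>   // assumption: umbrella header for Lapack++ 2.x

// Illustration of the dispatch described above: square systems go through the
// LU-based solver, rectangular ones through the QR-based least-squares solver.
void solve_dispatch(const LaGenMatDouble& A, LaGenMatDouble& X, const LaGenMatDouble& B)
{
    if (A.size(0) == A.size(1))
        LaLULinearSolve(A, X, B);   // square: LU factorisation
    else
        LaQRLinearSolve(A, X, B);   // rectangular: QR least-squares solution
}
```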
From: Christian S. <sti...@tu...> - 2004-08-10 08:09:06
Hi,

Jacob (Jack) Gryn wrote:
> P.S. If you haven't noticed, I posted the row-order changes to CVS last night.

Yes, thanks a lot. This looks very nice. I have noticed, and I also added some more documentation in the header.

> I've been trying to solve a matrix equation Ax=b with all three variables, A, x, and b, being LaGenMatDoubles.
>
> I've tried the function LaLinearSolve(LaGenMatDouble, LaGenMatDouble &, LaGenMatDouble); however, my results are clearly not correct. (Maybe I'm doing something wrong.)
>
> Has this been tested? I've noticed the tGenSolve.cc program does not contain a test for the linear solve of three matrices.

I did not test the real-valued functions at all, since I only use the complex-valued matrices and functions. It may very well be that there is some kind of bug. However, I would be quite surprised if this were the case, since the linear solve routines really do not contain any non-trivial operations.

Could you document the problem further? I know for sure that LaLinearSolve with LaGenMatComplex works correctly, either for vectors or for matrices. Can you try X and B as vectors first and then as some simple matrices? Maybe you could post a simple example (3x3) here?

Christian
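A minimal, self-contained 3x3 test case along the lines Christian asks for. This is a hedged sketch: the umbrella header, element-access syntax, and stream output operator follow the usual Lapack++ conventions but should be checked against the installed headers.

```cpp
#include <iostream>
#include <lapackpp.h>   // assumption: umbrella header for Lapack++ 2.x

// Tiny reproduction case: with A == B, the solution X of A*X = B should be
// (numerically close to) the 3x3 identity matrix.
int main()
{
    LaGenMatDouble A(3, 3);
    A(0,0) = 4; A(0,1) = 1; A(0,2) = 0;
    A(1,0) = 1; A(1,1) = 3; A(1,2) = 1;
    A(2,0) = 0; A(2,1) = 1; A(2,2) = 2;   // well-conditioned, non-singular

    LaGenMatDouble B(A);                  // B = A, so X should come out as I
    LaGenMatDouble X(3, 3);

    LaLinearSolve(A, X, B);
    std::cout << "X =\n" << X << std::endl;
    return 0;
}
```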
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-10 01:02:17
Hi,

I've been trying to solve a matrix equation Ax=b with all three variables, A, x, and b, being LaGenMatDoubles.

I've tried the function LaLinearSolve(LaGenMatDouble, LaGenMatDouble &, LaGenMatDouble); however, my results are clearly not correct. (Maybe I'm doing something wrong.)

Has this been tested? I've noticed the tGenSolve.cc program does not contain a test for the linear solve of three matrices.

P.S. If you haven't noticed, I posted the row-order changes to CVS last night.

Jack
From: Christian S. <sti...@tu...> - 2004-08-06 09:04:54
Dear Jack,

(Sorry, this turned out to be long.)

Jacob (Jack) Gryn wrote:
> The LaGenMatDouble class has an instance variable 'v' of type LaVectorDouble. I didn't notice inheritance in any of the vector or matrix classes.

There are two different vector classes in Lapack++ (for each of {double, COMPLEX, float}), and you're confusing the two here. The instance variable 'v' in LaGenMatDouble is of type 'VectorDouble' (without the 'La'!). That one is defined in vd.h, and its class comment says: "Lightweight vector class. (...) This vector is not intended for mathematical denotations, but rather used as a building block for other LAPACK++ matrix classes."

On the other hand, the actual vector class that should be used by the application is LaVectorDouble (with the 'La'), defined in lavd.h, and that one is in fact a derivation of LaGenMatDouble. And it is enforced that only this class is used and not the other, because all functions that use vectors (Blas_Mat_Vec_Mult etc.) only accept a LaVectorDouble and *not* a VectorDouble. Therefore we should not touch VectorDouble at all (except for internal optimizations), but should only work with LaVectorDouble, and that one is already derived from LaGenMatDouble -- which is what you proposed.

> 'virtual' class Buffer - a virtual class of ANY kind of buffer, to be inherited by DoubleBuffer, FloatBuffer, etc.

The abovementioned class VectorDouble serves as that kind of buffer -- except that (1) the mapping of multiple dimensions into the array index is done in the matrix class itself, not in the buffer, and (2) there is no common virtual base class that abstracts from the different element types (double, COMPLEX, float, int). Discussion about the virtual base class below.

> 'virtual' class LaGenMat (what does the 'Gen' stand for anyway?) - another virtual class to be inherited by LaGenMatDouble, etc.

Does not exist, because there is no virtual base class. The 'Gen' stands for 'general matrix', as opposed to a symmetric, banded, lower triangular, etc. matrix. This is explained in the (old) Lapack++ User's Manual that is on the original lapack++ 1.1 website.

> 'virtual' class Vector - a subclass of LaGenMat; derive VectorDouble, etc. from that (alternatively, since most vector operations are matrix operations, scrap this virtual class and just derive VectorDouble, etc. directly from the specific matrix classes).

The vector classes are already directly derived from the matrix classes, so this is already implemented according to your idea.

> Alternatively, templates could be used (like TNT), but few people truly understand how compilers do things when it concerns templates; it would be difficult to code.

I'm not the guy to brag about my skills, but I'd say that I actually understand templates almost fully (we use them all the time in a different project). However, the whole point of Lapack++ is that it's *only* "a C++ wrapper for the Fortran LAPACK functions" -- no more, no less. The template facilities of C++ are very nice, but they would be an extremely bad mapping of the actual Fortran LAPACK functions. Therefore they are quite useless in this context.

Alternatively, one could start a whole C++ linear algebra package of its own, which wouldn't be a wrapper for the Fortran LAPACK anymore. This is what TNT tries to do. You should give it a try. However, I tried them and found that the performance was soooooo much worse than the Fortran-LAPACK-based routines. I mean, we are not talking about a factor of two or three here, but rather a factor of 100 when solving a system of linear equations. This was a good enough argument for me to stick with the "C++ wrapper for the Fortran LAPACK" concept.

About a virtual base class: I guess you mean that there should be one common base class for the different element types. However, which functions should be found in the base class anyway? There are some where this is possible. The most important one, operator(), cannot be declared in the base class, because the return type depends on the element type. This concept -- "all these classes have the same function, only with different return types" -- cannot be represented by inheritance. It could only be represented by templates. Hm, maybe templates wouldn't be so bad after all here... On the other hand, I think having a LaGenMat<double> instead of LaGenMatDouble wouldn't be that much different from what we currently have. The Blas_* functions would need different specialisations for each element type, because they need to call different Fortran functions. In that respect templates wouldn't change much -- or what did you have in mind?

> Matrix classes should contain the SVD and Solve functions (keep the old syntax for a few versions); i.e.,
>
> LaGenMatDouble A;
> A.SVD(U,D,VT);
>
> This last 'thought' may be more controversial, but why not incorporate some of the BLAS 3 functionality into the matrix classes? Such as using operator* to call Blas_Mat_Mat_Mult(), etc.?

Again the question is: what is the goal of lapack++? My goal is that lapack++ is a C++ wrapper for the Fortran LAPACK routines. Both of these ideas, on the other hand, are specific C++ additions that can be considered "additional intelligence" in the C++ part. The question is: does one really gain that much by adding this intelligence in the C++ code, or does one rather introduce potential performance pitfalls? I don't know how many textbooks about numerics you have read, but I gathered from them that it is far from trivial to implement things like operator* in a really efficient way. It is far too easy to implement it in the trivial way, allocating lots of temporary copies and so on, but then performance goes really down the drain. In the current concept, the whole question of when and where to perform which matrix operation is left up to the application. The application programmer has to think about which temporary objects he needs and which he doesn't need. For example, in a member function LaGenMatDouble::SVD there is the question of what to do with the matrix A. The Fortran function DGESDD destroys the input matrix. Should LaGenMatDouble::SVD therefore always allocate a temporary copy? It is impossible to decide this automatically. Therefore I would rather stick to the Fortran structure and leave it up to the application programmer whether he runs DGESDD (resp. LaSVD) on a matrix or on a temporary copy of it.

> With respect to the COMPLEX definition, I agree that it should be taken out; I find, at the moment, that if I try to compile anything (even just something that only uses LaGenMatDouble's), it will not compile.

It doesn't? Maybe it should. I'm still unsure.

> Meanwhile, I'll just work on finishing up the row-wise imports; I haven't had time to get to them as I've been spending my time at work on other algorithms.

Ok. If you have code to commit, just go ahead.

Christian
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-06 06:09:40
The LaGenMatDouble class has an instance variable 'v' of type LaVectorDouble. I didn't notice inheritance in any of the vector or matrix classes.

Although I realize that my ideas with inheritance are probably too much for the time being and may need to wait for 3.0 (or whatever), I think the following would be a good ideal and would simplify the system significantly from a developer standpoint (any changes to matrix or vector code would usually only need to be made in one central place). I know I'm new to this project, but here is my "ideal" class structure for the project; let me know what you think (even if it were only for the distant future).

'virtual' class Buffer - a virtual class of ANY kind of buffer, to be inherited by DoubleBuffer, FloatBuffer, etc. This class will contain code common to all buffer formats. Access and set function calls will take up to three dimension parameters (x, y, z) for 3D data (with y and z defaulting to 1 if unused); these calls should be inline and encode x, y, z into an index into the 1D buffer. The constructor will set the dimensions (which are 'protected' to allow access from the inherited classes).

'virtual' class LaGenMat (what does the 'Gen' stand for, anyway?) - another virtual class to be inherited by LaGenMatDouble, etc. Functions common to all matrix operations should be put in here, and it should contain a pointer to a 'Buffer' - since this class is useless by itself, the derived classes will initialize this to whatever kind of buffer they need.

'virtual' class Vector - a subclass of LaGenMat; derive VectorDouble, etc. from that (alternatively, since most vector operations are matrix operations, scrap this virtual class and just derive VectorDouble, etc. directly from the specific matrix classes).

Alternatively, templates could be used (like TNT), but few people truly understand how compilers do things when it concerns templates; it would be difficult to code.

Matrix classes should contain the SVD and Solve functions (keep the old syntax for a few versions); i.e.,

LaGenMatDouble A;
...
A.SVD(U, D, VT);

This last 'thought' may be more controversial, but why not incorporate some of the BLAS 3 functionality into the matrix classes? Such as using operator* to call Blas_Mat_Mat_Mult(), etc.?

With respect to the COMPLEX definition, I agree that it should be taken out; I find, at the moment, that if I try to compile anything (even just something that only uses LaGenMatDouble's), it will not compile.

Meanwhile, I'll just work on finishing up the row-wise imports; I haven't had time to get to them as I've been spending my time at work on other algorithms.

Jack
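A sketch making the proposed Buffer hierarchy above concrete. None of these types exist in lapackpp, and the proposal was not adopted (see Christian's reply above); the code is illustrative only, showing the 3D-to-1D index mapping and the per-element-type derived buffers the proposal describes.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical common base for all element buffers (not lapackpp API).
class Buffer {
public:
    Buffer(std::size_t x, std::size_t y = 1, std::size_t z = 1)
        : x_(x), y_(y), z_(z) {}
    virtual ~Buffer() {}
protected:
    // Fold 3D coordinates into a 1D index, as the proposal describes.
    std::size_t index(std::size_t x, std::size_t y = 0, std::size_t z = 0) const
    { return x + x_ * (y + y_ * z); }
    std::size_t x_, y_, z_;   // dimensions, protected for derived classes
};

// One concrete buffer per element type (DoubleBuffer, FloatBuffer, ...).
class DoubleBuffer : public Buffer {
public:
    DoubleBuffer(std::size_t x, std::size_t y = 1, std::size_t z = 1)
        : Buffer(x, y, z), data_(x * y * z) {}
    double get(std::size_t x, std::size_t y = 0, std::size_t z = 0) const
    { return data_[index(x, y, z)]; }
    void set(std::size_t x, std::size_t y, std::size_t z, double v)
    { data_[index(x, y, z)] = v; }
private:
    std::vector<double> data_;
};
```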
From: Christian S. <sti...@tu...> - 2004-08-05 08:43:35
Dear Jack,

this is great. Thanks for your contributions and thoughts. I am looking forward to seeing your code in CVS soon :-)

We should continue this discussion on the mailing list lapackpp-devel -- this has at least the additional advantage that we can automatically use SourceForge's mailing list archive. You can subscribe at http://lists.sourceforge.net/lists/listinfo/lapackpp-devel

Jacob (Jack) Gryn wrote:
> I implemented it yesterday with the enum-style method; I just changed it to use a bool now. Works fine.
>
> I only did this for LaGenMatDouble's. I'll do the other data types as well and post my changes to CVS shortly.

Sure. Either way -- you can commit it only for LaGenMatDouble for now, or you can also commit it for all four important classes at once. If you commit to CVS, please have a look at the (newly added) file README.cvs. It basically emphasizes adding a ChangeLog entry for every commit. Apart from that, you are free to commit your code -- I will be notified of every commit via the lapackpp-cvs mailing list (which you can subscribe to as well, if you like), so I will check every commit anyway and will notify you if there are any problems. In short, I happily encourage you to go ahead, and we'll sort things out early enough :-))

> Ideally, there should be better interoperability between the matrix/vector data types, as there's a lot of duplicate code and no inheritance. For example, it would probably be better to treat a vector as an n x 1 matrix, inheriting all the properties of a matrix, except that it has a coordinate fixed at 1. This is probably a longer-term thing (maybe for 3.0), but since the point is to use C++, why not take advantage of its features. Then code that, for example, re-orders a matrix upon loading doesn't need to be written six times, once for each data type.

The LaVectorDouble is a derived class of LaGenMatDouble, isn't it? And LaVectorComplex / LaGenMatComplex as well? But the constructors cannot be derived, and, yes, there is some code duplication because of this. The re-ordering code should indeed not be rewritten more than necessary, and it is by definition only necessary in the matrix classes, not in the vector classes. Therefore I thought it is necessary exactly twice: in LaGenMatDouble for doubles and in LaGenMatComplex for COMPLEXs.

> Another thing I was wondering: why is it necessary to #define LA_COMPLEX_SUPPORT in files where no complex numbers are used?

This define-macro comes from the old v1.1 lapack++, where it was *disabled* everywhere. Maybe it should be dropped altogether, so that lapackpp is always built with complex support enabled. If not, then this macro should be the switch to turn on or off the declarations of the complex data types. In that sense the #define should not be needed in files that do not use complex numbers.

>>> I did some testing with the current developer CVS version. I still get "Aborted" if the vector S is of an incorrect size.
>
>> Right. The code right now will only accept a vector S of exactly the correct size. Do you suggest changing this behaviour? I think this behaviour is ok.
>
> I think it's a good idea to check the vector size. My issue is that the word "Aborted" is kind of useless when the LaException::enablePrint(true) flag is set. Maybe I'm just picky.

The word "Aborted" comes from the fact that the library has thrown an exception but the application did not catch it. This is the default behaviour of C++ (more precisely: gcc) with uncaught exceptions, and we cannot do anything about it. Either the application is supposed to have a try-catch block for LaException, or the user has to use a debugger anyway, which will then show the origin of the exception in its backtrace.

> I took a look at the lwork value; according to the documentation at http://netlib2.cs.utk.edu/lapack/patch/src/dgesdd.f
>
> LWORK >= 4*min(M,N)*min(M,N) + max(M,N) + 9*min(M,N).
> (Larger for more performance.)
>
> I set it to that in both functions that call DGESDD, and it seems to have fixed the problem! I'm not sure how much higher it should be set if we want more performance; I guess it's ok for now.
>
> I have only found the problem with SVDs of LaGenMatDoubles.

If this modified lwork definition fixes your problem, then go ahead and commit it. According to the documentation, it is possible to have the optimal size of lwork calculated by DGESDD itself (by an additional call to DGESDD beforehand). That would be the optimal solution, but to be honest, I'm too lazy to implement this :-) If you want to commit something like that, just go ahead.

Christian
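A small helper corresponding to the minimum-workspace formula quoted above. The formula is the one from the netlib DGESDD documentation cited in the message; the lwork = -1 workspace query Christian mentions is the standard LAPACK idiom, where the routine reports its optimal workspace size in work[0] without performing the decomposition.

```cpp
#include <algorithm>

// Minimum LWORK for the real double-precision DGESDD, per the formula quoted
// above: LWORK >= 4*min(M,N)^2 + max(M,N) + 9*min(M,N). Larger values may
// improve performance; alternatively, calling DGESDD once with lwork = -1
// makes it return the optimal workspace size in work[0].
inline long dgesdd_min_lwork(long m, long n)
{
    const long mn = std::min(m, n);
    const long mx = std::max(m, n);
    return 4 * mn * mn + mx + 9 * mn;
}
```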