lapackpp-devel Mailing List for Lapack++ (Page 15)
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2004 | | | | | | | | 19 | 11 | | 4 | 15 |
| 2005 | 2 | 4 | 32 | 18 | 3 | | 1 | 4 | 13 | 5 | | 1 |
| 2006 | | 6 | 2 | 6 | 18 | 15 | 17 | 45 | 3 | 4 | 26 | 4 |
| 2007 | 11 | 14 | 1 | | 4 | | | 2 | | 1 | 1 | 2 |
| 2008 | | 2 | | 4 | 1 | 2 | | | | 1 | | |
| 2009 | | | | | | | | | | 3 | 1 | 1 |
| 2010 | 2 | 1 | 3 | | | | | | 4 | | 7 | |
| 2011 | | | 3 | | 2 | 2 | 2 | | 1 | | 14 | 1 |
| 2012 | | | 3 | | | | | | | | | |
| 2013 | | | | 2 | | | | | | | | |
| 2014 | | 2 | 5 | | | | | | | | | |
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-11-26 14:07:57
|
That's the one. I implemented it myself; since it's not used in any loops, I don't really need the optimization of LAPACK. However, if it is implemented in LAPACK, I think we should have a wrapper for Lapack++. I think it's used often enough in the computer vision world.

Jack
|
From: Christian S. <sti...@tu...> - 2004-11-26 09:31:07
|
Dear Jack,

Jacob (Jack) Gryn schrieb:
> Is there a Cross Product defined in any of the Blas++ libraries?

Do you mean the cross product as explained at http://mathworld.wolfram.com/CrossProduct.html, which is a function of R^3 x R^3 -> R^3? No, this function is not available. From what I remember, the cross product has such limited use in numerics that the numerical people wouldn't implement such a function. But are you really looking for this function?

Christian
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-11-25 23:48:19
|
Hi,

Is there a Cross Product defined in any of the Blas++ libraries? Let me know.

Thanks,
Jack
|
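For illustration, the wrapper Jack says he implemented could look like the following sketch (a hypothetical helper, not Lapack++ API; the size() method and 0-based operator() are assumptions based on usage seen elsewhere on this page):

#include <cassert>
#include <lavd.h>   // LaVectorDouble; header name assumed

// Cross product in R^3: returns c = a x b. Not part of Lapack++ itself.
LaVectorDouble cross_product(LaVectorDouble &a, LaVectorDouble &b)
{
    assert(a.size() == 3 && b.size() == 3);
    LaVectorDouble c(3);
    c(0) = a(1) * b(2) - a(2) * b(1);
    c(1) = a(2) * b(0) - a(0) * b(2);
    c(2) = a(0) * b(1) - a(1) * b(0);
    return c;
}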
From: NAKATA M. <ch...@ma...> - 2004-09-09 05:43:40
|
Dear all,
I found an interesting difference between FreeBSD's g2c.h and f2c.h:
g2c.h
/* F2C_INTEGER will normally be `int' but would be `long' on 16-bit systems */
/* we assume short, float are OK */
typedef int /* long int */ integer;
typedef unsigned long int /* long */ uinteger;
typedef char *address;
typedef short int shortint;
typedef float real;
typedef double doublereal;
typedef struct { real r, i; } complex;
typedef struct { doublereal r, i; } doublecomplex;
typedef int /* long int */ logical;
typedef short int shortlogical;
typedef char logical1;
typedef char integer1;
typedef long long int /* long long */ longint; /* system-dependent */
typedef unsigned long long int /* long long */ ulongint; /* system-dependent */
f2c.h
typedef long int integer;
typedef char *address;
typedef short int shortint;
typedef float f2c_real;
typedef double doublereal;
typedef struct { f2c_real r, i; } f2c_complex;
/** @brief Complex type that is used in LAPACK++ internally.
This induces type conflicts. What should we do...?
--nakata maho
|
|
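The practical consequence of the two definitions above can be seen in this small standalone program (illustration only; the typedefs are renamed here so both can coexist in one file):

#include <iostream>

// The two incompatible 'integer' definitions quoted above, renamed so that
// both can live in one translation unit for comparison:
typedef int g2c_integer;       // FreeBSD g2c.h: typedef int integer;
typedef long int f2c_integer;  // f2c.h:         typedef long int integer;

int main()
{
    // On an LP64 platform such as FreeBSD/amd64, long is 8 bytes while int
    // is 4, so code compiled against one header passes wrongly-sized INTEGER
    // arguments to Fortran code compiled against the other -- the "invalid
    // type of pointer" problem mentioned elsewhere in this archive.
    std::cout << "sizeof(g2c_integer) = " << sizeof(g2c_integer) << "\n";
    std::cout << "sizeof(f2c_integer) = " << sizeof(f2c_integer) << "\n";
    return 0;
}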
From: NAKATA M. <ch...@ma...> - 2004-09-09 02:13:59
|
In Message-ID: <413...@tu...> Christian Stimming <sti...@tu...> wrote:

Dear Christian,

> I don't use that function and I don't quite understand what it should
> do. However, if you say it segfaults then that's a bad thing and your
> provided patch is a good thing :-) Therefore I directly applied this to CVS.

Thanks! However, my investigation seems somewhat odd; I don't have a better clue than what I sent you in the last e-mail. LaSymmMatDouble inherits from the triangle matrix classes, and the triangle matrix classes inherit from LaGenMatDouble, so apparently we have a full matrix for LaSymmMatDouble. However, my stupid patch just copies from LaSymmMatDouble to LaGenMatDouble without thought.

> If you have any further questions, do not hesitate to ask me. Even
> better, you can subscribe to the mailing list lapackpp-devel and discuss
> these questions there. This has the additional benefit that the postings
> will be archived.

Okay, I'll subscribe to this ML. I also noticed before that Lapack++ (both ver 1.1 and 2.0) is not usable on FreeBSD/amd64, passing an invalid type of pointer. Now I'm investigating... Thank you very much for your effort; I have been a fan of lapack++, which had not been maintained for a long time... See you on the mailing list!

--nakata maho
|
From: Christian S. <sti...@tu...> - 2004-09-08 11:39:20
|
Dear Maho,

thank you very much for your contribution. I have added this to CVS.

NAKATA Maho schrieb:
> 1. void LaEigSolveVecIP(LaSymmMatDouble &S, LaVectorDouble &eigvals);
> solves symmetric matrices in place and also returns eigenvectors
> as we specified.

I don't use that function and I don't quite understand what it should do. However, if you say it segfaults then that's a bad thing and your provided patch is a good thing :-) Therefore I directly applied this to CVS.

> 2. Definition of erf for FreeBSD is incorrect.
> Is it possible to remove throw() for FreeBSD?

It is not possible to simply remove the throw(), since on Linux the declaration in math.h includes this throw(). However, I just checked whether this declaration can be removed anyway, and in fact this seems to be the case. I have therefore removed this declaration altogether, so it shouldn't give you any problems.

If you have any further questions, do not hesitate to ask me. Even better, you can subscribe to the mailing list lapackpp-devel and discuss these questions there. This has the additional benefit that the postings will be archived.

Best regards,

Christian Stimming
|
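A usage sketch for the routine being discussed, based only on the signature quoted above (the LaSymmMatDouble constructor and element fill are assumptions; whether S is overwritten by the eigenvectors should be checked against the headers):

#include <lavd.h>   // LaVectorDouble
#include <symd.h>   // LaSymmMatDouble -- header name assumed
#include <laslv.h>  // LaEigSolveVecIP -- declaration location assumed

void eig_example()
{
    LaSymmMatDouble S(3, 3);       // constructor signature assumed
    S(0, 0) = 2.0; S(1, 0) = 1.0; S(2, 0) = 0.0;
    S(1, 1) = 2.0; S(2, 1) = 1.0;
    S(2, 2) = 2.0;

    LaVectorDouble eigvals(3);
    // In-place solve per the signature above: eigvals receives the
    // eigenvalues; the "IP" suffix suggests S itself is overwritten.
    LaEigSolveVecIP(S, eigvals);
}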
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-09-03 21:20:11
|
Great, thanks. I'll do some testing next week.

Jack
|
From: Christian S. <sti...@tu...> - 2004-09-03 21:14:13
|
Am Freitag, 3. September 2004 21:34 schrieb Jacob \(Jack\) Gryn:
> It might get a little confusing to implement, or to understand, but as long
> as things don't change for those who want the FULL SVD, it should be ok.
>
> The only other probs I can think of is that the LWORK variable may be
> different if JOBZ is different; also, if a number is too big (hence the
> reason to ignore part of the SVD), LWORK may also get much larger.
>
> Would you mind implementing it?

Ok, I just committed this into lasvd.h and lasvd.cc. Do you think you could test all cases of interest? I only made sure that the code makes sense, but I didn't actually test any of the routines -- they might well be broken. Also, I committed the matrix multiplication when you only want the diagonal, in blas3pp.h (formerly blas3++.h). Again, you need to check whether it calculates the right thing -- it might as well crash on each call. But maybe it works :-)

Christian
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-09-03 19:36:36
|
Thanks. I wrote code doing the low-level multiplication, since I have yet to play with LaIndex, but I'll rewrite it eventually. Maybe this should be implemented in Blas++?

Jack
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-09-03 19:34:21
|
It might get a little confusing to implement, or to understand, but as long as things don't change for those who want the FULL SVD, it should be ok.

The only other probs I can think of is that the LWORK variable may be different if JOBZ is different; also, if a number is too big (hence the reason to ignore part of the SVD), LWORK may also get much larger.

Would you mind implementing it?

Thanks,
Jack
|
From: Christian S. <sti...@tu...> - 2004-09-03 09:15:12
|
Dear Jack,

I have searched through all of the available BLAS documentation but I did not find any such routine. You could write one by yourself that will calculate the dot product between all rows of the left and columns of the right matrix:

// calculate only the diagonal of A times B
int msize = std::min(A.size(0), B.size(1));
LaVectorDouble result(msize);
assert(A.size(1) == B.size(0));
for (int i = 0; i < msize; i++)
    result(i) = Blas_Dot_Prod(A(LaIndex(i), LaIndex(0, A.size(1) - 1)),
                              B(LaIndex(0, B.size(0) - 1), LaIndex(i)));

As always, you would need to check whether this code really does the right thing. I'm always unsure whether I got the indices right and I don't confuse the "beginning with 0" or the last index.

Christian

Jacob (Jack) Gryn schrieb:
> Hi,
>
> I was wondering if you know of any functions that will do a matrix
> multiplication only to obtain the diagonal of the resulting
> multiplication (that avoids the excess computations and temporary space
> needed)?
>
> Jack
|
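Christian's loop, spelled out as a self-contained function (a sketch: the header names and the inclusive LaIndex(start, end) convention, hence the "- 1", are assumptions to verify against the Lapack++ docs):

#include <algorithm>
#include <cassert>
#include <gmd.h>      // LaGenMatDouble -- header names assumed
#include <lavd.h>     // LaVectorDouble
#include <blas1pp.h>  // Blas_Dot_Prod

// Diagonal of A*B without forming the full product matrix.
LaVectorDouble diag_of_product(LaGenMatDouble &A, LaGenMatDouble &B)
{
    assert(A.size(1) == B.size(0));
    int msize = std::min(A.size(0), B.size(1));
    LaVectorDouble result(msize);
    for (int i = 0; i < msize; i++)
        result(i) = Blas_Dot_Prod(
            A(LaIndex(i), LaIndex(0, A.size(1) - 1)),   // row i of A
            B(LaIndex(0, B.size(0) - 1), LaIndex(i)));  // column i of B
    return result;
}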
From: Christian S. <sti...@tu...> - 2004-09-03 08:24:07
|
Jacob (Jack) Gryn schrieb:
> I'm writing some code that only uses data from VT of an SVD, and don't
> need the data in U and Sigma. Although I can ignore it, the resulting U
> is significantly large (512^2x512^2), which is too large a matrix for
> the system to handle, and is clearly a waste.

Absolutely agreed. I guess you mean the JOBZ argument of the LAPACK function dgesdd (http://docs.sun.com/source/817-6700/dgesdd.html), where currently only the settings 'A' and 'N' are used, and instead you would like to use either the 'O' or the 'S' setting.

> I've noticed that there is a function that ignores both U and VT, by
> essentially setting ldv and ldu to 0, that seemingly tells LAPACK to
> ignore the data.

The point here is that the argument JOBZ was set to 'N'. Only this is the reason for LAPACK not to use the U and VT arguments. Look into the man page mentioned above.

> Maybe it would be better just to use a single LaSVD_IP
> function, and allow uninitialized LaGenMat[Double/Complex] as input.
> If, for example, U is declared with the default constructor, since
> U.gdim(0) == 0, I would guess that LAPACK will ignore writing output to U.

For people who only want the singular values but not the vectors, I would rather want to leave that extra function in there, since it is totally unambiguous what this function will do. But for your case where you want only some of the singular vectors, there are several possibilities. As you say, one possibility would be to do different things according to what the input arguments U and VT have been set to. Let's assume A to be M-by-N and M != N. Right now we do the following:

If (U.size == M-by-M) && (VT.size == N-by-N)
    calculate all singular vectors (JOBZ='A')
else
    error.

Instead, we could also do the following:

MNmin = min(M,N);
If (U.size == M-by-M) && (VT.size == N-by-N)
    calculate all singular vectors (JOBZ='A')
else if (U.size == M-by-MNmin) && (VT.size == MNmin-by-N)
    calculate the MNmin first singular vectors (JOBZ='S')
else if (M >= N) && (U.size == 0) && (VT.size == N-by-N)
    calculate full VT and first N vectors of U (JOBZ='O')
else if (M < N) && (U.size == M-by-M) && (VT.size == 0)
    calculate full U and first M vectors of VT (JOBZ='O')
else
    error

Does this meet your requirements? On the other hand, this increases the risk of the programmer to be totally confused about what the arguments are supposed to be. But nevertheless I think it would be possible. Do you want me to implement this? Or do you want to give it a try for yourself?

Christian
|
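The case analysis above translates into a small dispatch function; this is only a sketch of the proposal as written, not the code that was later committed to lasvd.cc:

#include <algorithm>  // std::min

// Map the shapes of U and VT to a dgesdd JOBZ setting, for A of size
// M-by-N with M != N. Returns '\0' if the shapes match no supported case.
char choose_jobz(int M, int N, int Urows, int Ucols, int VTrows, int VTcols)
{
    int MNmin = std::min(M, N);
    if (Urows == M && Ucols == M && VTrows == N && VTcols == N)
        return 'A';  // all left and right singular vectors
    if (Urows == M && Ucols == MNmin && VTrows == MNmin && VTcols == N)
        return 'S';  // only the first min(M,N) singular vectors
    if (M >= N && Urows == 0 && VTrows == N && VTcols == N)
        return 'O';  // full VT; the first N vectors of U
    if (M < N && Urows == M && Ucols == M && VTrows == 0)
        return 'O';  // full U; the first M vectors of VT
    return '\0';     // unsupported combination -> caller reports an error
}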
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-09-02 19:53:59
|
Hi,

I'm writing some code that only uses data from VT of an SVD, and don't need the data in U and Sigma. Although I can ignore it, the resulting U is significantly large (512^2x512^2), which is too large a matrix for the system to handle, and is clearly a waste.

I've noticed that there is a function that ignores both U and VT, by essentially setting ldv and ldu to 0, that seemingly tells LAPACK to ignore the data. Maybe it would be better just to use a single LaSVD_IP function, and allow uninitialized LaGenMat[Double/Complex] as input. If, for example, U is declared with the default constructor, since U.gdim(0) == 0, I would guess that LAPACK will ignore writing output to U.

Any thoughts on this?

Jack
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-09-02 18:16:20
|
Hi, I was wondering if you know of any functions that will do a matrix multiplication only to obtain the diagonal of the resulting multiplication (that avoids the excess computations and temporary space needed)? Jack |
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-20 22:19:10
|
Sorry, just to clarify: the fix was for linear solving of LaGenMatDouble equations.

Jack
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-20 22:17:02
|
Hi,

I'm going to commit this fix in linslv.cc to CVS; at the moment, I don't have time to test and implement the same things for the other matrices.

Jack
|
From: Christian S. <sti...@tu...> - 2004-08-14 08:11:23
|
Am Freitag, 13. August 2004 20:58 schrieb Jacob \(Jack\) Gryn:
> Is there a function that calculates the norm of a matrix? Maybe we should
> implement it?

In blas1++.h, there are several norms for vectors. In blas++.h, there is Norm_Inf for most matrices. Norm2 should probably be added, too, somewhere. Funny that I didn't need it so far...

> How about pointwise operators? i.e. a pointwise division..

You mean element-wise? Is there a lapack (blas) procedure for that? Probably not. (Seems to me that this operation is not needed for numerics.) Anyway, yes, you can implement this somewhere as an auxiliary function.

> I also have an unrelated question. Can operators be created for any ascii
> character/string, or are they limited to the same operators already used by
> C++?
>
> For example, if I wanted to make a pointwise multiplier, could I do
> operator.*= (note the dot before the asterisk)

I don't think that's possible. On the other hand, I'm not sure. Why don't you look that up in e.g. the Stroustrup book...

Christian
|
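The "auxiliary function" route Christian suggests could look like this sketch (a hypothetical helper, not a lapackpp routine; C must be allocated by the caller, and the size()/operator() usage follows what appears elsewhere on this page):

#include <cassert>
#include <gmd.h>  // LaGenMatDouble -- header name assumed

// Element-wise ("pointwise") division: C(i,j) = A(i,j) / B(i,j).
void elementwise_div(LaGenMatDouble &A, LaGenMatDouble &B, LaGenMatDouble &C)
{
    assert(A.size(0) == B.size(0) && A.size(1) == B.size(1));
    assert(C.size(0) == A.size(0) && C.size(1) == A.size(1));
    for (int j = 0; j < A.size(1); j++)       // column-major friendly order
        for (int i = 0; i < A.size(0); i++)
            C(i, j) = A(i, j) / B(i, j);
}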
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-13 18:58:31
|
Is there a function that calculates the norm of a matrix? Maybe we should implement it?

How about pointwise operators? i.e. a pointwise division..

I also have an unrelated question. Can operators be created for any ascii character/string, or are they limited to the same operators already used by C++? For example, if I wanted to make a pointwise multiplier, could I do operator.*= (note the dot before the asterisk)?

Jack
|
From: Christian S. <sti...@tu...> - 2004-08-12 08:51:00
|
Dear Jack,

Jacob (Jack) Gryn schrieb:
>> This is the key word here: "calling a virtual" function/method.
>
> I never thought about these vtable 'lookup' issues; I guess I haven't done
> too much optimization of this nature. Do compilers not cache these things?
> I think operator() is already inline, but if it's not, it probably should be;
> is the overhead for these lookups still an issue with inline code?

Inlining and virtual functions are mutually exclusive. Either you get one or you get the other -- it's a contradiction to have both. Inlining means that *at compile time* it is known which function is going to be called, thus the compiler replaces the function call by the inlined code. A virtual function, however, means that the actual function to be called is determined *at run time*: the compiler only sees that there is a pointer to the base class, but the compiler doesn't know which derived class that pointer actually points to, so it doesn't know which virtual function (i.e. function of the derived class) will actually be executed. This can only be determined at runtime, by the mentioned "vtable lookup".

>> COMPLEX should not be made into a class
>
> I'm sure there are other simple workarounds for operator<< and COMPLEX.
> Worst case scenario, you have a single operator<< function for all matrices,
> except for complex.

By the way, there already is an operator<< for COMPLEX in lacomplex.h -- did you look for something else?

>> If you have any ideas about *adding* such operators so that a user can
>> optionally use them, then just go ahead. Yes, operator*= might be a
>> good idea.
>
> Should I add things like operator*, operator+ with a warning in the
> comments? Or just operator*=, etc.?

For now I would prefer to only have operator*= and similar, where it is totally clear that the operations work on specific matrices.

> A few more questions..
>
> A) I've modified the operator<< algorithm in gmd.cc (at least for personal
> use) to output matrices in a fashion that can be easily pasted into MatLab.
> Ultimately, I'd like to create some sort of a display flag to select
> standard, matlab, or maple output. Any thoughts about how this flag should
> be implemented? Shall I create a global variable somewhere?

In theory, global variables are bad and should be avoided. In practice, we already have some, like the LaException::_print static variable. Maybe there could be a similar static variable in some class? But it's a bit unclear where... la::complex, LaGenMatComplex, LaException... I'm unsure. The output function that is used is the one in <lacomplex> line 480, so such a flag should probably be put into la::complex.

> B) Are there any Lapack (or Lapackpp) functions for transpose or inverse? I
> know lapack takes transposed input; however, we've essentially forced data
> to work in 'N'ormal mode with the wrappers. Although a physical transpose
> takes time, the fixing of this flag is reasonable, as it avoids the
> complexity of dragging this 'N'/'T' flag around. However, it should be
> replaced with the ability to actually do a transpose.

Transpose: Are you really sure that you need to calculate the transpose/hermitian explicitly? I found that every time I thought I needed to do that, eventually I saw that I only needed to choose the correct Blas_Mat_xy function (blas2++.h, blas3++.h) which uses the matrix either normal or transposed. Calculating the transpose explicitly usually should not be necessary at all. For which operations would you need that?

Inverse: Run LaLinearSolve(A, Ainv, eye) where A is the input, Ainv is the output, and eye is an identity matrix. There you are. Again, you should rethink whether you really need to calculate the inverse -- or whether you rather need to solve a system of linear equations. In 90% of the cases, talking about the inverse means the latter, and directly solving that system (instead of calculating the inverse and then multiplying with the inverse) is way more efficient.

Gee, maybe I should start adding my explanations directly into the documentation :-))) I like these discussions.

Christian
|
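Christian's inverse recipe, spelled out as a sketch (the operator=(double) fill and the LaLinearSolve(A, X, B) argument order follow the usage quoted in this archive; treat both as assumptions to verify):

#include <gmd.h>    // LaGenMatDouble -- header names assumed
#include <laslv.h>  // LaLinearSolve

// Compute Ainv such that A * Ainv = I. As discussed above, prefer solving
// A*x = b directly whenever that is what you actually need.
void invert(LaGenMatDouble &A, LaGenMatDouble &Ainv)
{
    int n = A.size(0);
    LaGenMatDouble eye(n, n);
    eye = 0.0;                    // scalar assignment assumed to fill
    for (int i = 0; i < n; i++)
        eye(i, i) = 1.0;          // build the identity matrix
    LaLinearSolve(A, Ainv, eye);  // solves A * Ainv = eye
}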
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-11 22:08:17
|
> This is the key word here: "calling a virtual" function/method. Calling a
> virtual function is a *considerable* overhead in C++, because there is an
> additional lookup for the right virtual function in the object's vtable.
> This overhead is comparable to things like a bounds check on *every* access
> -- they are nice and negligible if you do some simple things, but if you
> really want to crunch numbers, then you rather try to get along without
> them. Since the operator() IMHO is in fact a speed-critical method (as
> opposed to e.g. the constructors), it should rather be avoided to switch
> this to a virtual method (unless there is some really really strong
> argument for doing it anyway).

I never thought about these vtable 'lookup' issues; I guess I haven't done too much optimization of this nature. Do compilers not cache these things? I think operator() is already inline, but if it's not, it probably should be; is the overhead for these lookups still an issue with inline code?

> COMPLEX should not be made into a class for a simple reason: The type
> COMPLEX exists only to have *the* fortran-compatible complex number type
> around. By definition we cannot change that type or make it into a class --
> it is fixed to be the thing that fortran expects it to be. This is why
> there is this class LaComplex (or la::complex) -- these are my try to
> provide a "nicer" C++ complex data type. But, as its documentation says,
> these are only wrappers to make the COMPLEX handling easier. At the
> interface to fortran, Lapack++ has and will have to deal with this stupid
> simple COMPLEX.

I'm sure there are other simple workarounds for operator<< and COMPLEX. Worst case scenario, you have a single operator<< function for all matrices, except for complex.

> If you have any ideas about *adding* such operators so that a user can
> optionally use them, then just go ahead. Yes, operator*= might be a good
> idea.

Should I add things like operator*, operator+ with a warning in the comments? Or just operator*=, etc.?

> Thanks for your comments. Do you get progress with lapack++ in your
> application? I hope so.

I'm getting some progress now that I've gotten the SVD and LinearSolve working, thanks.

A few more questions..

A) I've modified the operator<< algorithm in gmd.cc (at least for personal use) to output matrices in a fashion that can be easily pasted into MatLab. Ultimately, I'd like to create some sort of a display flag to select standard, matlab, or maple output. Any thoughts about how this flag should be implemented? Shall I create a global variable somewhere?

B) Are there any Lapack (or Lapackpp) functions for transpose or inverse? I know lapack takes transposed input; however, we've essentially forced data to work in 'N'ormal mode with the wrappers. Although a physical transpose takes time, the fixing of this flag is reasonable, as it avoids the complexity of dragging this 'N'/'T' flag around. However, it should be replaced with the ability to actually do a transpose.

Jack
|
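One way the display flag from question A) could be structured, sketched with hypothetical names (not lapackpp code; lapackpp's actual operator<< lives in gmd.cc and <lacomplex>, as noted in this thread):

#include <iostream>

// Hypothetical global output-format switch, as debated in this thread.
enum MatOutputFormat { FORMAT_STANDARD, FORMAT_MATLAB, FORMAT_MAPLE };
static MatOutputFormat g_mat_format = FORMAT_STANDARD;

// Print any matrix-like object offering size(d) and operator()(i,j) so that
// the text can be pasted straight into Matlab.
template <class Mat>
void print_matlab(std::ostream &os, Mat &A)
{
    os << "[";
    for (int i = 0; i < A.size(0); i++) {
        for (int j = 0; j < A.size(1); j++)
            os << A(i, j) << (j + 1 < A.size(1) ? " " : "");
        if (i + 1 < A.size(0))
            os << "; ";
    }
    os << "]" << std::endl;
}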
From: Christian S. <sti...@tu...> - 2004-08-11 20:59:08
|
Am Mittwoch, 11. August 2004 22:32 schrieb Jacob \(Jack\) Gryn:
> Just to respond to some previous comments I haven't had a chance to
> before..

sure

> > About a virtual base class: I guess you mean that there should be one
> > common base class for the different element types.
>
> I think operator() could be done if the class hierarchy is setup properly.
> If in, for example, a matrix class, instead of using specific VectorDouble,
> VectorComplex, etc., there is one base Buffer class, and there is one base
> Matrix class, the operator() would call the base operator() that translates
> the 2D coordinates into 1D; it then does the lookup by calling the virtual
> operator() in the Buffer class.

This is the key word here: "calling a virtual" function/method. Calling a virtual function is a *considerable* overhead in C++, because there is an additional lookup for the right virtual function in the object's vtable. This overhead is comparable to things like a bounds check on *every* access -- they are nice and negligible if you do some simple things, but if you really want to crunch numbers, then you rather try to get along without them. Since the operator() IMHO is in fact a speed-critical method (as opposed to e.g. the constructors), it should rather be avoided to switch this to a virtual method (unless there is some really really strong argument for doing it anyway).

> Another example may be operator<<. All of them are identical, except for
> COMPLEX; however, we still only need one operator<< if COMPLEX is made into
> a class with its own operator<<.

COMPLEX should not be made into a class for a simple reason: The type COMPLEX exists only to have *the* fortran-compatible complex number type around. By definition we cannot change that type or make it into a class -- it is fixed to be the thing that fortran expects it to be. This is why there is this class LaComplex (or la::complex) -- these are my try to provide a "nicer" C++ complex data type. But, as its documentation says, these are only wrappers to make the COMPLEX handling easier. At the interface to fortran, Lapack++ has and will have to deal with this stupid simple COMPLEX.

> > Therefore I rather stick to the fortran
> > structure and leave it up to the application programmer whether he runs
> > DGESDD resp. LaSVD on a matrix or on a temporary copy of it.
>
> I see your point. Although, one may want the option of having the
> simplicity of such operators, as long as they're forewarned that such
> 'intelligence' comes at an expense. It should be noted that operator*= can
> be done easily while overwriting the original matrix.

If you have any ideas about *adding* such operators so that a user can optionally use them, then just go ahead. Yes, operator*= might be a good idea.

Thanks for your comments. Do you get progress with lapack++ in your application? I hope so.

Christian
|
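Christian's point about vtables, condensed into a standalone illustration (plain C++, nothing Lapack++-specific):

// A virtual accessor forces run-time dispatch: the call target is found
// through the object's vtable on every invocation, so the compiler cannot
// inline it. A non-virtual accessor can be inlined away entirely.
struct BufferBase {
    virtual ~BufferBase() {}
    virtual double at(int i) const = 0;  // resolved at run time
};

struct DoubleBuffer : public BufferBase {
    const double *data;
    explicit DoubleBuffer(const double *d) : data(d) {}
    double at(int i) const { return data[i]; }        // called via vtable
    double at_direct(int i) const { return data[i]; } // inlinable
};

double sum_virtual(const BufferBase &b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += b.at(i);  // one vtable lookup per element
    return s;
}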
From: Christian S. <sti...@tu...> - 2004-08-11 20:49:52
|
Am Mittwoch, 11. August 2004 02:09 schrieb Jacob \(Jack\) Gryn:
> I believe I solved the problem.
>
> In the linslv.cc function, Xtmp should be a copy of B; however, at no
> point is an actual copy made. Xtmp is left at 0's before making the dgels
> call.
>
> I replaced line 219 with the line Xtmp=B; and it seems to fix the problem.
>
> The output result appears to be correct, at least for an AX=A style
> problem, where X should return the identity matrix.

That's good :-))) . I think I really didn't work much with the QRSolve functions, so it might well be the case that you just discovered and fixed a bug. However, I thought in line 219 this assignment with LaIndex on the left and on the right should actually do some copying? Whatever. If you say it doesn't, then it probably doesn't and needs to be fixed. If you replace that line with a direct operator=, doesn't it run into some problems with the matrix dimension? The comment there seems to imply this. But of course that comment may well be wrong. If you see that it really is fixed by your proposal, then just go ahead and commit it.

> Although it actually returns I + some small values on the order of 10e-17,
> whereas CLAPACK returns 0's in these places for the same problem. Is this
> only something that displays with higher precision, or is there actually a
> 10e-17 error introduced somewhere?

I believe the 1e-17 is the actual epsilon, the machine precision. I see them more than once at places where analytically a zero (like, zero) should be found. But then, in real-world signals there is noise anyway, so these small numbers can really safely be considered to be zero. Or in other words, I thought this is just the normal effect of finite-precision arithmetic.

Christian
|
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-11 20:32:31
|
Just to respond to some previous comments I haven't had a chance to before..
> About a virtual base class: I guess you mean that there should be one
> common base class for the different element types. However, which
> functions should be found in the base class anyway? There are some where
> this is possible. The most important one, operator(), cannot be
> declared in the base class, because the return type depends on the element
> type.
> This concept, "all these classes have the same function, only with
> different return types", cannot be represented by inheritance. It could
> only be represented by templates. Hm, maybe templates wouldn't be so bad
> after all here... on the other hand, I think having a
> LaGenMat<double> instead of LaGenMatDouble wouldn't be that much different
> from what we currently have. The Blas_* functions would need to
> have different specialisations for each element type, because they need to
> call different fortran functions. In that respect templates
> wouldn't change much -- or what did you have in mind?
I think operator() could be done if the class hierarchy is setup properly.
If in, for example, a matrix class, instead of using specific VectorDouble,
VectorComplex, etc.. There is one base Buffer class, and there is one base
Matrix class, the operator() would call the base operator() that translates
the 2D coordinates into 1D; it then does the lookup by calling the virtual
operator() in the Buffer class.
Another example may be operator<<. All of them are identical, except for
COMPLEX; however, we still only need one operator<< if COMPLEX is made into
a class with its own operator<<.
There are probably a few other operators that can be done.
> Again the question is: What is the goal of lapack++? My goal is that
> lapack++ is a C++ wrapper for the fortran lapack routines. Both these
> ideas on the other hand are specific C++ additions that can be considered
> as an "additional intelligence" in the C++ part. The question
> is: Does one really gain that much by the addition of this intelligence in
> the C++ code? Or do you rather introduce potential performance
> pitfalls? I don't know how much textbooks about numerics you have read,
> but I gathered from there that it's far from trivial how things like the
> operator* can be implemented in a really efficient way. It is far too easy
> to implement it in the trivial way, allocating lots of
> temporary copies and so on, but then performance goes really down the
> drain. In the current concept, the whole question of when and where to
> perform which matrix operation is left up to the application. The
> application programmer has to think about which temporary objects he needs
> and which he doesn't need. For example, in a member function
> LaGenMatDouble::SVD there is the question what to do with the matrix A. The
> fortran function DGESDD destroys the input matrix. Should
> LaGenMatDouble::SVD therefore allocate a temporary copy always? It is
> impossible to decide this automatically. Therefore I rather stick to the
> fortran structure and leave it up to the application programmer whether he
> runs DGESDD resp. LaSVD on a matrix or on a temporary copy of it.
I see your point. Although, one may want the option of having the
simplicity of such operators, as long as they're forewarned that such
'intelligence' comes at an expense. It should be noted that operator*= can
be done easily while overwriting the original matrix.
Jack
-----Original Message-----
From: lap...@li...
[mailto:lap...@li...] On Behalf Of Christian
Stimming
Sent: August 6, 2004 5:05 AM
To: Jacob (Jack) Gryn
Cc: lap...@li...
Subject: Re: [Lapackpp-devel] RE: Lapack++ column ordering, derivations,
exceptions, lwork size
Dear Jack,
(sorry, this turned out to be long)
Jacob (Jack) Gryn schrieb:
> The LaGenMatDouble class has an instance variable 'v' of type
> LaVectorDouble. I didn't notice inheritance in any of the vector or
> matrix classes.
There are two different vector classes in lapack (for each of
{double,COMPLEX,float}), and you're confusing the two here.
The instance variable 'v' in the LaGenMatDouble is of type 'VectorDouble'
(without the 'La'!). That one is defined in vd.h and its class comment says:
"Lightwight vector class. (...) This vector is not intended for mathematical
denotations, but rather used as building block for other LAPACK++ matrix
classes."
On the other hand, the actual vector class that should be used by the
application is LaVectorDouble (with the La), defined in lavd.h, and that one
is actually a derivation of LaGenMatDouble. And it is enforced that only
this class is used and not the other, because all functions that use vectors
(Blas_Mat_Vec_Mult etc) only accept a LaVectorDouble and
*not* a VectorDouble. Therefore we should not touch VectorDouble at all
(except for internal optimizations), but we should only work with the
LaVectorDouble, and that one is already a derivation from the LaGenMatDouble
-- which is what you proposed.
> 'virtual' class Buffer - a virtual class of ANY kind of buffer, to be
> inherited by DoubleBuffer, FloatBuffer, etc..
The abovementioned class VectorDouble serves as such kind of buffer --
except that: 1. the mapping of multiple dimensions into the array index is
done in the matrix class itself, not here in the buffer, and 2. there is no
common virtual base class that abstracts from the different value element
types (double, COMPLEX, float, int). Discussion about the virtual base class
below.
> 'virtual' class LaGenMat (what does the 'Gen' stand for anyway?) -
> another virtual class to be inherited by LaGenMatDouble, etc.
Does not exist, because there is no virtual base class. The 'Gen' stands for
'general matrix' as opposed to a symmetric or banded or lower triangle etc
matrix. This is explained in the (old) Lapack++ User's Manual that is on the
original lapack++1.1 website.
> `virtual' class Vector - a subclass of LaGenMat and derive
> VectorDouble, etc. from that (alternatively, since most vector
> operations are matrix operations, scrap this virtual class and just derive
VectorDouble, etc..
> directly from the specific Matrix classes)
The vector classes are already directly derived from the Matrix classes, so
this is already implemented according to your idea.
> Alternatively, templates could be used (like TNT), but few people
> truly understand how compilers do things when it concerns templates,
> it would be difficult to code.
I'm not the guy to brag about my skills, but I'd say that I actually
understand templates almost fully (we use them all the time in a different
project). However, the whole point of Lapack++ is that it's
*only* "a C++ wrapper for the Fortran Lapack functions" -- no more, no less.
The template facilities of C++ are very nice, but they would be an extremely
bad mapping of the actual fortran lapack functions. Therefore they are quite
useless in this context.
Alternatively, one could start a whole C++ linear algebra package on its
own, which wouldn't be a wrapper to the fortran lapack anymore. This is what
TNT tries to do. You should give it a try. However, I tried them and found
that the performance was soooooo much worse than the fortran-lapack-based
routines. I mean, we are not talking about a factor of two or three here,
but rather by a factor of 100 when solving a system of linear equations.
This was a good enough argument for me to stick with the "C++ wrapper for
the fortran lapack" concept.
About a virtual base class: I guess you mean that there should be one common
base class for the different element types. However, which functions should
be found in the base class anyway? There are some where this is possible.
The most important one, operator(), cannot be declared in the base class,
because the return type depends on the element type.
This concept, "all these classes have the same function, only with different
return types", cannot be represented by inheritance. It could only be
represented by templates. Hm, maybe templates wouldn't be so bad after all
here... on the other hand, I think having a LaGenMat<double> instead of
LaGenMatDouble wouldn't be that much different from what we currently have.
The Blas_* functions would need to have different specialisations for each
element type, because they need to call different fortran functions. In that
respect templates wouldn't change much -- or what did you have in mind?
> Matrix classes should contain the SVD and Solve functions (keep the
> old syntax for a few versions); i.e,
>
> LaGenMatDouble A;
> A.SVD(U,D,VT);
>
> This last 'thought' may be more controversial, but why not incorporate
> some of the blas3 functionality into the matrix classes? Such as
> using
> operator* to call Blas_Mat_Mat_Mult(), etc?
Again the question is: What is the goal of lapack++? My goal is that
lapack++ is a C++ wrapper for the fortran lapack routines. Both these
ideas on the other hand are specific C++ additions that can be considered as
an "additional intelligence" in the C++ part. The question
is: Does one really gain that much by the addition of this intelligence in
the C++ code? Or do you rather introduce potential performance pitfalls? I
don't know how much textbooks about numerics you have read, but I gathered
from there that it's far from trivial how things like the
operator* can be implemented in a really efficient way. It is far too easy
to implement it in the trivial way, allocating lots of temporary copies and
so on, but then performance goes really down the drain. In the current
concept, the whole question of when and where to perform which matrix
operation is left up to the application. The application programmer has to
think about which temporary objects he needs and which he doesn't need. For
example, in a member function LaGenMatDouble::SVD there is the question what
to do with the matrix A. The fortran function DGESDD destroys the input
matrix. Should LaGenMatDouble::SVD therefore allocate a temporary copy
always? It is impossible to decide this automatically. Therefore I rather
stick to the fortran structure and leave it up to the application programmer
whether he runs DGESDD resp.
LaSVD on a matrix or on a temporary copy of it.
> With respect to the COMPLEX definition, I agree that it should be
> taken out; I find, at the moment, that if I try to compile anything
> (even just something that only uses LaGenMatDouble's), it will not
compile.
It doesn't? Maybe it should. I'm still unsure.
> Meanwhile, I'll just work on finishing up the row-wise imports; I
> haven't had time to get to them as I've been spending my time at work
> on other algorithms.
Ok. If you have code to commit, just go ahead.
Christian
|
|
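For concreteness, here is roughly what the LaGenMat<double> idea from this exchange could look like (a hypothetical sketch, not lapackpp code; the naive loop stands in for the dgemm_ call that a per-type overload would make):

#include <vector>

// One template provides storage and indexing for every element type...
template <class T>
class LaGenMat {
public:
    LaGenMat(int m, int n) : m_(m), n_(n), v_(m * n) {}
    T &operator()(int i, int j) { return v_[i + j * m_]; }  // column-major
    int size(int d) const { return d == 0 ? m_ : n_; }
private:
    int m_, n_;
    std::vector<T> v_;
};

// ...while element-type-specific overloads do the dispatch fortran needs:
// this one would call dgemm_; a LaGenMat<COMPLEX> overload would call zgemm_.
void Blas_Mat_Mat_Mult(LaGenMat<double> &A, LaGenMat<double> &B,
                       LaGenMat<double> &C)
{
    for (int i = 0; i < A.size(0); i++)
        for (int j = 0; j < B.size(1); j++) {
            double s = 0.0;
            for (int k = 0; k < A.size(1); k++)
                s += A(i, k) * B(k, j);
            C(i, j) = s;  // naive triple loop in place of dgemm_
        }
}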
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-11 00:09:53
|
I believe I solved the problem.

In the linslv.cc function, Xtmp should be a copy of B; however, at no point is an actual copy made. Xtmp is left at 0's before making the dgels call. I replaced line 219 with the line Xtmp=B; and it seems to fix the problem. The output result appears to be correct, at least for an AX=A style problem, where X should return the identity matrix.

Although it actually returns I + some small values on the order of 10e-17, whereas CLAPACK returns 0's in these places for the same problem. Is this only something that displays with higher precision, or is there actually a 10e-17 error introduced somewhere?

Jack
|
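Jack's failing test case, written out as a small program against the fixed linslv.cc (header names are assumed, and B is filled explicitly rather than copy-constructed, since the copy semantics of the matrix classes should be double-checked first):

#include <iostream>
#include <gmd.h>    // LaGenMatDouble -- header names assumed
#include <laslv.h>  // LaLinearSolve

// With B equal to A, the least-squares solution X of A*X = B should be the
// 3x3 identity, up to entries on the order of machine epsilon (~1e-17).
int main()
{
    double vals[4][3] = { {1, 1, 1}, {3, 3, 1}, {3, 2, 7}, {3, 2, 7} };
    LaGenMatDouble A(4, 3), B(4, 3);
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 3; j++)
            A(i, j) = B(i, j) = vals[i][j];

    LaGenMatDouble X(3, 3);
    LaLinearSolve(A, X, B);  // overdetermined system -> dgels-based solve

    std::cout << X << std::endl;  // ~identity; tiny off-diagonal entries
    return 0;                     // are finite-precision noise, not a bug
}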
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-10 21:07:55
|
The numbers aren't even close; they're either 0 or 1e-300. I tested, for example, the following:

A =
[ 1 1 1
  3 3 1
  3 2 7
  3 2 7 ]

X is a blank 3x3 matrix, and B = A.

The resulting X is all 0's, when it should be a 3x3 identity matrix. Am I possibly using the function wrong or something?

PS. For testing, I've been doing a lot of copying/pasting into matlab and maple; it would be nice if a flag could be set to tell the operator<< call to output the display in various formats.

Jack

-----Original Message-----
From: Christian Stimming [mailto:sti...@tu...]
Sent: August 10, 2004 4:46 PM
To: Jacob (Jack) Gryn
Cc: lap...@li...
Subject: Re: [Lapackpp-devel] LaLinearSolve not totally exact

Am Dienstag, 10. August 2004 21:38 schrieb Jacob \(Jack\) Gryn:
> I tried it out again with both square and non-square matrices on both
> complex and doubles:
>
> Results are the same for both Complex matrices and Doubles.
>
> With square matrices, the results are more viable. The equation AX=B,
> with A and B being the same, X should return the identity matrix;
> however it does not. Although through multiplying AX, the result still is B.
>
> With rectangular matrices, the results are the same in both complex
> and doubles, but AX != B; X is usually all 0's, or very close to 0's
> (10e-300).

When you say AX != B, how large is the actual difference, that is D=(AX-B), and what is the Mat_Norm2(D)? If that's in fact in the region of 10e-15 or smaller, then you are only seeing some rounding effects due to the floating point representation -- I wouldn't consider this a problem. If you have matrices with numbers in normal regions, then simply consider anything below 10^-10 as zero and you're fine. Did I misunderstand something?

Christian