Re: [Lapackpp-devel] RE: Lapack++ column ordering, derivations, exceptions, lwork size
From: Christian S. <sti...@tu...> - 2004-08-06 09:04:54
Dear Jack,
(sorry, this turned out to be long)
Jacob (Jack) Gryn schrieb:
> The LaGenMatDouble class has an instance variable 'v' of type
> LaVectorDouble. I didn't notice inheritance in any of the vector or
> matrix classes.
There are two different vector classes in Lapack++ (one for each of
{double,COMPLEX,float}), and you're confusing the two here.
The instance variable 'v' in the LaGenMatDouble is of type
'VectorDouble' (without the 'La'!). That one is defined in vd.h and its
class comment says: "Lightweight vector class. (...) This vector is
not intended for mathematical denotations, but rather used as a
building block for other LAPACK++ matrix classes."
On the other hand, the actual vector class that should be used by the
application is LaVectorDouble (with the 'La'), defined in lavd.h, and
that one is in fact a derivation of LaGenMatDouble. It is also enforced
that only this class is used and not the other, because all functions
that work with vectors (Blas_Mat_Vec_Mult etc.) accept only a
LaVectorDouble and *not* a VectorDouble. Therefore we should not touch
VectorDouble at all (except for internal optimizations); we should only
work with LaVectorDouble, and that one is already a derivation of
LaGenMatDouble -- which is what you proposed.
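The relationship can be sketched roughly like this. This is a minimal,
self-contained illustration of the design only -- the real classes in
vd.h, gmd.h and lavd.h have far more members, and all the bodies here are
made up for the sketch:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for VectorDouble (vd.h): a plain buffer class,
// not meant for user-level mathematics.
class VectorDouble {
    std::vector<double> data_;
public:
    explicit VectorDouble(int n) : data_(n, 0.0) {}
    double& operator()(int i)       { return data_[i]; }
    double  operator()(int i) const { return data_[i]; }
    int size() const { return static_cast<int>(data_.size()); }
};

// Simplified stand-in for LaGenMatDouble (gmd.h): the user-level matrix,
// which stores its elements in a VectorDouble member 'v'.
class LaGenMatDouble {
protected:
    VectorDouble v_;   // the instance variable 'v' discussed above
    int m_, n_;
public:
    LaGenMatDouble(int m, int n) : v_(m * n), m_(m), n_(n) {}
    double& operator()(int i, int j) { return v_(i + j * m_); } // column-major
    int rows() const { return m_; }
    int cols() const { return n_; }
};

// Simplified stand-in for LaVectorDouble (lavd.h): a derivation of
// LaGenMatDouble (an n-by-1 matrix), which is the type the math
// functions (Blas_Mat_Vec_Mult etc.) actually accept.
class LaVectorDouble : public LaGenMatDouble {
public:
    explicit LaVectorDouble(int n) : LaGenMatDouble(n, 1) {}
    double& operator()(int i) { return LaGenMatDouble::operator()(i, 0); }
};
```

So a LaVectorDouble can be passed wherever a LaGenMatDouble is expected,
while the internal VectorDouble never leaks out to the application.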
> 'virtual' class Buffer - a virtual class of ANY kind of buffer, to be
> inherited by DoubleBuffer, FloatBuffer, etc.
The above-mentioned class VectorDouble serves as exactly this kind of
buffer -- except that: 1. the mapping of multiple dimensions onto the
array index is done in the matrix class itself, not in the buffer, and
2. there is no common virtual base class that abstracts over the
different element types (double, COMPLEX, float, int). Discussion about
the virtual base class below.
> 'virtual' class LaGenMat (what does the 'Gen' stand for anyway?) -
> another virtual class to be inherited by LaGenMatDouble, etc...
Does not exist, because there is no virtual base class. The 'Gen' stands
for 'general matrix', as opposed to a symmetric, banded, or lower
triangular etc. matrix. This is explained in the (old) Lapack++ User's
Manual on the original Lapack++ 1.1 website.
> 'virtual' class Vector - a subclass of LaGenMat and derive VectorDouble,
> etc. from that (alternatively, since most vector operations are matrix
> operations, scrap this virtual class and just derive VectorDouble, etc.
> directly from the specific Matrix classes)
The vector classes are already derived directly from the matrix classes,
so this is already implemented according to your idea.
> Alternatively, templates could be used (like TNT), but few people truly
> understand how compilers do things when it concerns templates; it would
> be difficult to code.
I'm not one to brag about my skills, but I'd say that I actually
understand templates almost fully (we use them all the time in a
different project). However, the whole point of Lapack++ is that it's
*only* "a C++ wrapper for the Fortran Lapack functions" -- no more, no
less. The template facilities of C++ are very nice, but they would be an
extremely bad mapping of the actual Fortran Lapack functions. Therefore
they are quite useless in this context.
Alternatively, one could start a whole C++ linear algebra package of its
own, which wouldn't be a wrapper around the Fortran Lapack anymore. This
is what TNT tries to do. You should give it a try. However, I tried it
and found that the performance was so much worse than the
Fortran-Lapack-based routines. We are not talking about a factor of two
or three here, but rather a factor of 100 when solving a system of
linear equations. That was a good enough argument for me to stick with
the "C++ wrapper for the Fortran Lapack" concept.
About a virtual base class: I guess you mean that there should be one
common base class for the different element types. However, which
functions would go into that base class anyway? For some this is
possible. But the most important one, operator(), cannot be declared in
the base class, because its return type depends on the element type.
This concept, "all these classes have the same function, only with
different return types", cannot be represented by inheritance. It could
only be represented by templates. Hm, maybe templates wouldn't be so bad
here after all... on the other hand, I think having a LaGenMat<double>
instead of LaGenMatDouble wouldn't be that much different from what we
currently have. The Blas_* functions would need different
specialisations for each element type, because they need to call
different Fortran functions. In that respect templates wouldn't change
much -- or what did you have in mind?
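For illustration, here is roughly what the templated variant would look
like. This is a hypothetical sketch, not actual Lapack++ code; the
backend names returned below are only stand-ins for the Fortran routines
the real wrappers would call:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical templated matrix (illustration only, not part of Lapack++).
template <typename T>
struct LaGenMat {
    std::vector<T> v;
    int m, n;
    LaGenMat(int m_, int n_) : v(m_ * n_, T()), m(m_), n(n_) {}
};

// The Fortran routine to call depends on the element type, so each
// wrapper still needs one explicit specialisation per type. Here the
// "call" is simulated by returning the routine's name.
template <typename T> std::string gemm_backend();
template <> std::string gemm_backend<double>() { return "dgemm_"; }
template <> std::string gemm_backend<float>()  { return "sgemm_"; }

// The generic wrapper itself adds nothing: it only forwards to the
// type-specific backend -- which is the point made above.
template <typename T>
std::string Blas_Mat_Mat_Mult(const LaGenMat<T>&, const LaGenMat<T>&) {
    return gemm_backend<T>();
}
```

So even with templates, the per-type code (one specialisation per Fortran
routine) does not go away; only the class names change.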
> Matrix classes should contain the SVD and Solve functions (keep the old
> syntax for a few versions); i.e.,
>
> LaGenMatDouble A;
> A.SVD(U,D,VT);
>
> This last 'thought' may be more controversial, but why not incorporate
> some of the blas3 functionality into the matrix classes? Such as using
> operator* to call Blas_Mat_Mat_Mult(), etc?
Again the question is: what is the goal of Lapack++? My goal is that
Lapack++ is a C++ wrapper for the Fortran Lapack routines. Both of these
ideas, on the other hand, are specific C++ additions that can be
considered "additional intelligence" in the C++ part. The question is:
does one really gain that much by adding this intelligence to the C++
code? Or does one rather introduce potential performance pitfalls? I
don't know how many textbooks on numerics you have read, but I gathered
from them that it's far from trivial to implement things like operator*
in a really efficient way. It is far too easy to implement it in the
trivial way, allocating lots of temporary copies and so on, but then
performance really goes down the drain. In the current concept, the
whole question of when and where to perform which matrix operation is
left up to the application. The application programmer has to think
about which temporary objects he needs and which he doesn't. For
example, in a member function LaGenMatDouble::SVD there is the question
of what to do with the matrix A. The Fortran function DGESDD destroys
its input matrix. Should LaGenMatDouble::SVD therefore always allocate a
temporary copy? It is impossible to decide this automatically. Therefore
I would rather stick to the Fortran structure and leave it up to the
application programmer whether he runs DGESDD (resp. LaSVD) on a matrix
or on a temporary copy of it.
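The caller-side decision can be sketched like this. The "SVD" below is a
toy stand-in that merely simulates the destructive behaviour of
DGESDD/LaSVD; none of this is actual Lapack++ code:

```cpp
#include <cassert>
#include <vector>

using Matrix = std::vector<double>;  // toy stand-in for LaGenMatDouble

// Stand-in for LaSVD/DGESDD: like the Fortran routine, it destroys
// (here: zeroes out) its input matrix while "computing".
void LaSVD_inplace(Matrix& A) {
    for (double& x : A) x = 0.0;  // simulate the clobbering of A
}

// Pattern 1: the caller no longer needs A -- run in place, no copy:
//     LaSVD_inplace(A);
//
// Pattern 2: the caller still needs A -- the temporary copy is made
// explicitly, so its cost is visible at the call site:
Matrix svd_keeping_input(const Matrix& A) {
    Matrix tmp = A;      // explicit temporary: the caller's decision
    LaSVD_inplace(tmp);  // tmp is clobbered, A survives
    return tmp;
}
```

A member function like LaGenMatDouble::SVD would have to pick one of
these two patterns for every caller, which is exactly the decision that
cannot be made automatically.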
> With respect to the COMPLEX definition, I agree that it should be taken
> out; I find, at the moment, that if I try to compile anything (even just
> something that only uses LaGenMatDouble's), it will not compile.
It doesn't compile? Then maybe the COMPLEX definition should indeed be
taken out. I'm still unsure.
> Meanwhile, I'll just work on finishing up the row-wise imports; I
> haven't had time to get to them, as I've been spending my time at work
> on other algorithms.
Ok. If you have code to commit, just go ahead.
Christian