RE: [Lapackpp-devel] RE: Lapack++ column ordering, derivations, exceptions, lwork size
From: Jacob \(Jack\) G. <jg...@cs...> - 2004-08-11 20:32:31
Just to respond to some previous comments I haven't had a chance to address before.
> About a virtual base class: I guess you mean that there should be one
> common base class for the different element types. However, which
> functions should be found in the base class anyway? There are some where
> this is possible. The most important one, operator(), cannot be
> declared in the base class, because the return type depends on the element
> type.
> This concept, "all these classes have the same function, only with
> different return types", cannot be represented by inheritance. It could
> only be represented by templates. Hm, maybe templates wouldn't be so bad
> after all here... on the other hand, I think having a
> LaGenMat<double> instead of LaGenMatDouble wouldn't be that much different
> from what we currently have. The Blas_* functions would need to
> have different specialisations for each element type, because they need to
> call different fortran functions. In that respect templates
> wouldn't change much -- or what did you have in mind?
I think operator() could be done if the class hierarchy is set up properly.
For example, instead of a matrix class using a specific VectorDouble,
VectorComplex, etc., there would be one base Buffer class and one base
Matrix class. The matrix operator() would call a base-class routine that
translates the 2D coordinates into a 1D index; the lookup itself would then
go through the virtual operator() in the Buffer class.
Another example may be operator<<. All of them are identical except for
COMPLEX; even so, we would only need one operator<< if COMPLEX were made
into a class with its own operator<<.
There are probably a few other operators that could be handled this way.
> Again the question is: What is the goal of lapack++? My goal is that
> lapack++ is a C++ wrapper for the fortran lapack routines. Both these
> ideas on the other hand are specific C++ additions that can be considered
> as an "additional intelligence" in the C++ part. The question
> is: Does one really gain that much by the addition of this intelligence in
> the C++ code? Or do you rather introduce potential performance
> pitfalls? I don't know how much textbooks about numerics you have read,
> but I gathered from there that it's far from trivial how things like the
> operator* can be implemented in a really efficient way. It is far too easy
> to implement it in the trivial way, allocating lots of
> temporary copies and so on, but then performance goes really down the
> drain. In the current concept, the whole question of when and where to
> perform which matrix operation is left up to the application. The
> application programmer has to think about which temporary objects he needs
> and which he doesn't need. For example, in a member function
> LaGenMatDouble::SVD there is the question what to do with the matrix A.
> The fortran function DGESDD destroys the input matrix. Should
> LaGenMatDouble::SVD therefore allocate a temporary copy always? It is
> impossible to decide this automatically. Therefore I rather stick to the
> fortran structure and leave it up to the application programmer whether
> he runs DGESDD resp. LaSVD on a matrix or on a temporary copy of it.
I see your point. Still, one may want the option of having the simplicity
of such operators, as long as users are forewarned that such 'intelligence'
comes at an expense. It should be noted that operator*= can be done easily
while overwriting the original matrix.
Jack
-----Original Message-----
From: lap...@li...
[mailto:lap...@li...] On Behalf Of Christian
Stimming
Sent: August 6, 2004 5:05 AM
To: Jacob (Jack) Gryn
Cc: lap...@li...
Subject: Re: [Lapackpp-devel] RE: Lapack++ column ordering, derivations,
exceptions, lwork size
Dear Jack,
(sorry, this turned out to be long)
Jacob (Jack) Gryn schrieb:
> The LaGenMatDouble class has an instance variable 'v' of type
> LaVectorDouble. I didn't notice inheritance in any of the vector or
> matrix classes.
There are two different vector classes in lapack++ (for each of
{double,COMPLEX,float}), and you're confusing the two here.
The instance variable 'v' in the LaGenMatDouble is of type 'VectorDouble'
(without the 'La'!). That one is defined in vd.h and its class comment says:
"Lightweight vector class. (...) This vector is not intended for mathematical
denotations, but rather used as building block for other LAPACK++ matrix
classes."
On the other hand, the actual vector class that should be used by the
application is LaVectorDouble (with the La), defined in lavd.h, and that one
is actually a derivation of LaGenMatDouble. And it is enforced that only
this class is used and not the other, because all functions that use vectors
(Blas_Mat_Vec_Mult etc) only accept a LaVectorDouble and
*not* a VectorDouble. Therefore we should not touch VectorDouble at all
(except for internal optimizations), but we should only work with the
LaVectorDouble, and that one is already a derivation from the LaGenMatDouble
-- which is what you proposed.
> 'virtual' class Buffer - a virtual class of ANY kind of buffer, to be
> inherited by DoubleBuffer, FloatBuffer, etc..
The above-mentioned class VectorDouble serves as such a buffer --
except that: 1. the mapping of multiple dimensions into the array index is
done in the matrix class itself, not here in the buffer, and 2. there is no
common virtual base class that abstracts from the different value element
types (double, COMPLEX, float, int). Discussion about the virtual base class
below.
> 'virtual' class LaGenMat (what does the 'Gen' stand for anyway?) -
> another virtual class to be inherited by LaGenMatDouble, etc.
Does not exist, because there is no virtual base class. The 'Gen' stands for
'general matrix' as opposed to a symmetric or banded or lower triangle etc
matrix. This is explained in the (old) Lapack++ User's Manual that is on the
original lapack++1.1 website.
> `virtual' class Vector - a subclass of LaGenMat and derive
> VectorDouble, etc. from that (alternatively, since most vector
> operations are matrix operations, scrap this virtual class and just
> derive VectorDouble, etc. directly from the specific Matrix classes)
The vector classes are already directly derived from the Matrix classes, so
this is already implemented according to your idea.
> Alternatively, templates could be used (like TNT), but few people
> truly understand how compilers do things when it concerns templates,
> it would be difficult to code.
I'm not the guy to brag about my skills, but I'd say that I actually
understand templates almost fully (we use them all the time in a different
project). However, the whole point of Lapack++ is that it's
*only* "a C++ wrapper for the Fortran Lapack functions" -- no more, no less.
The template facilities of C++ are very nice, but they would be an extremely
bad mapping of the actual fortran lapack functions. Therefore they are quite
useless in this context.
Alternatively, one could start a whole C++ linear algebra package on its
own, which wouldn't be a wrapper to the fortran lapack anymore. This is what
TNT tries to do. You should give it a try. However, I tried it and found
that the performance was so much worse than the fortran-lapack-based
routines. We are not talking about a factor of two or three here,
but rather a factor of 100 when solving a system of linear equations.
This was a good enough argument for me to stick with the "C++ wrapper for
the fortran lapack" concept.
About a virtual base class: I guess you mean that there should be one common
base class for the different element types. However, which functions should
be found in the base class anyway? There are some where this is possible.
The most important one, operator(), cannot be declared in the base class,
because the return type depends on the element type.
This concept, "all these classes have the same function, only with different
return types", cannot be represented by inheritance. It could only be
represented by templates. Hm, maybe templates wouldn't be so bad after all
here... on the other hand, I think having a LaGenMat<double> instead of
LaGenMatDouble wouldn't be that much different from what we currently have.
The Blas_* functions would need to have different specialisations for each
element type, because they need to call different fortran functions. In that
respect templates wouldn't change much -- or what did you have in mind?
> Matrix classes should contain the SVD and Solve functions (keep the
> old syntax for a few versions); i.e,
>
> LaGenMatDouble A;
> A.SVD(U,D,VT);
>
> This last 'thought' may be more controversial, but why not incorporate
> some of the blas3 functionality into the matrix classes? Such as
> using
> operator* to call Blas_Mat_Mat_Mult(), etc?
Again the question is: What is the goal of lapack++? My goal is that
lapack++ is a C++ wrapper for the fortran lapack routines. Both these
ideas on the other hand are specific C++ additions that can be considered as
an "additional intelligence" in the C++ part. The question
is: Does one really gain that much by adding this intelligence to the C++
code? Or does one rather introduce potential performance pitfalls? I
don't know how many textbooks about numerics you have read, but I gathered
from them that it's far from trivial to implement things like
operator* in a really efficient way. It is far too easy to implement it
in the trivial way, allocating lots of temporary copies and so on, but
then performance goes really down the drain. In the current concept, the
whole question of when and where to perform which matrix operation is left
up to the application. The application programmer has to think about which
temporary objects he needs and which he doesn't. For example, in a member
function LaGenMatDouble::SVD there is the question of what to do with the
matrix A. The fortran function DGESDD destroys the input matrix. Should
LaGenMatDouble::SVD therefore always allocate a temporary copy? It is
impossible to decide this automatically. Therefore I would rather stick to
the fortran structure and leave it up to the application programmer whether
he runs DGESDD resp. LaSVD on a matrix or on a temporary copy of it.
> With respect to the COMPLEX definition, I agree that it should be
> taken out; I find, at the moment, that if I try to compile anything
> (even just something that only uses LaGenMatDouble's), it will not
> compile.
It doesn't? Maybe it should. I'm still unsure.
> Meanwhile, I'll just work on finishing up the row-wise imports; I
> haven't had time to get to them as I've been spending my time at work
> on other algorithms.
Ok. If you have code to commit, just go ahead.
Christian