From: Andras V. <and...@ui...> - 2012-09-01 16:58:24
Hi Michael,

Thanks for your mail. Not only in the past: C++QED still optionally depends on FLENS at present (recently we have even prepared some Ubuntu packages, cf. https://launchpad.net/~raimar-sandner/+archive/cppqed). All the eigenvalue calculations of the framework are done with LAPACK *through* FLENS, and if the FLENS dependency is not satisfied, those parts of the framework get disabled. (Earlier we used our own little C++ interface to LAPACK, but soon gave it up -- you of all people probably understand why.) However, it is not your new FLENS/LAPACK implementation that we use but your old project, namely the 2011/09/08 version from CVS.

We have been aware of your new project for months now, and it looks very promising to us. The reason we think we cannot switch at the moment is that our project is first of all an application-programming framework, in which end users are expected to write and compile their own C++ applications in the problem domain of open quantum systems. This means that when they compile, they also need to compile the FLENS parts that they actually use. Now, as far as I understand, the new FLENS/LAPACK uses C++11 features, which at the moment severely limits the range of suitable compilers. But we are definitely looking forward to being able to switch, and we are glad that the project is alive and well!

Best regards,
András

On Sat, Sep 1, 2012 at 2:26 AM, Michael Lehn <mic...@un...> wrote:
> Hi,
>
> I am a developer of FLENS. Browsing the web I saw that C++QED was using FLENS
> for some optional modules in the past. There was quite a list of things I always
> wanted to improve in FLENS. Unfortunately, other projects were consuming too much
> time. Last year I finally found the time. If you are still interested in using
> LAPACK/BLAS functionality from within a C++ project, some of its features might
> be interesting to you:
>
> 1) FLENS is header only. It comes with a generic BLAS implementation. By "generic"
> I mean template functions. However, for large problem sizes you still have to use
> optimized BLAS implementations like ATLAS, GotoBLAS, ...
>
> 2) FLENS comes with a bunch of generic LAPACK functions. Yes, we actually
> re-implemented quite a number of LAPACK functions. The list of supported LAPACK
> functions includes LU, QR, and Cholesky decomposition, eigenvalue/eigenvector
> computation, Schur factorization, etc. Here is an overview of FLENS-LAPACK:
>
> http://www.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html
>
> These LAPACK ports are more than just a reference implementation. They provide
> the same performance as Netlib's LAPACK. But much more importantly, we carefully
> ensure that they also produce exactly the same results as LAPACK. Note that LAPACK
> does some cool stuff to ensure the stability and high precision of its results
> while providing excellent performance at the same time. Consider the caller graph
> of LAPACK's routine dgeev, which computes the eigenvalues/eigenvectors of a
> general matrix:
>
> http://www.mathematik.uni-ulm.de/~lehn/dgeev.pdf
>
> The routine is pretty sophisticated and, for example, switches back and forth
> between different implementations of the QR method (dlaqr0, dlaqr1, dlaqr2, ...).
> Because of this, the routine converges even in critical cases. We ported all these
> functions one by one and tested them as follows:
> (i) On entry we make copies of all arguments.
> (ii) (a) We call our port.
>      (b) We call the original LAPACK function.
> (iii) We compare the results.
> If a single-threaded BLAS implementation is used, we can even reproduce the same
> roundoff errors (i.e. we do the comparison bit by bit).
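(If I understand this scheme correctly, it amounts in outline to something like
the sketch below; "port" and "reference" are of course only placeholders for one
of your FLENS ports and the corresponding original LAPACK routine, and I assume
both work in place on a raw buffer:

    #include <cstring>
    #include <vector>

    // (i) copy all arguments, (ii) run both implementations on the copies,
    // (iii) compare the results bit by bit.
    template <typename Port, typename Reference>
    bool sameBits(Port port, Reference reference, const std::vector<double>& input)
    {
        std::vector<double> a(input), b(input);  // (i) copies of all arguments
        port(&a[0]);                             // (ii)(a) the FLENS port
        reference(&b[0]);                        // (ii)(b) the original LAPACK routine
        // (iii) bit-by-bit comparison -- meaningful with a single-threaded BLAS backend
        return std::memcmp(&a[0], &b[0], a.size()*sizeof(double)) == 0;
    }

A very reassuring testing strategy indeed.)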
> 3) If you use FLENS-LAPACK with ATLAS or GotoBLAS as the BLAS backend, then you
> are on par with LAPACK from MKL or ACML. Actually, MKL and ACML just use the
> Netlib LAPACK and optimize a few functions (IMHO there's a lot of marketing
> involved).
>
> 4) You can still use an external LAPACK function (if FLENS-LAPACK does not
> provide a needed LAPACK port, you even have to).
>
> 5) Porting LAPACK to FLENS had another big advantage: it was a great test for
> our matrix and vector classes:
> (i) We can now prove that our matrix/vector classes do not introduce any
>     performance penalty. You get the same performance as with plain C or Fortran.
> (ii) Feature completeness of the matrix/vector classes: we at least know that
>      they allow a convenient implementation of all the algorithms in LAPACK.
>      And in our opinion, the FLENS-LAPACK implementation also looks sexy.
>
> I hope that I could tease you a little into having a look at FLENS at
> http://flens.sf.net
>
> FLENS is back,
>
> Michael
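P.S. Judging from the FLENS-LAPACK overview you linked, a dgeev-style eigensolve
in the new interface would look roughly like the sketch below. This is untested
on my side: the header name, the typedefs, and the lapack::ev signature are what
I gathered from your documentation, so please correct me if I misread it.

    #include <flens/flens.cxx>
    #include <iostream>

    using namespace flens;

    int main()
    {
        typedef GeMatrix<FullStorage<double, ColMajor> >  Matrix;
        typedef DenseVector<Array<double> >               Vector;

        Matrix A(3, 3), VL(3, 3), VR(3, 3);
        Vector wr(3), wi(3);

        A = 2, 0, 1,
            0, 3, 0,
            1, 0, 2;

        // Compute the eigenvalues (wr/wi hold their real/imaginary parts)
        // and the left/right eigenvectors -- the equivalent of LAPACK's dgeev.
        lapack::ev(true, true, A, wr, wi, VL, VR);

        std::cout << "wr = " << wr << std::endl;
        std::cout << "wi = " << wi << std::endl;
        return 0;
    }

If it is really that compact, switching our eigenvalue code over one day should
be quite painless.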