From: Marc Durufle <marc.durufle@in...>  2014-08-16 21:08:47

Hi,

I have updated the documentation about eigenvalue solvers: http://www.math.u-bordeaux1.fr/~durufle/seldon/eigenvalue.php

I explain on this page how the example eigenvalue_test.cpp can be compiled with Arpack, Anasazi or Feast (the solvers interfaced in Seldon). I have updated the interface with Anasazi and added the file eigenvalue_test.cpp. They are available in my Git repository and in the archive (which is just a mirror of the files in the Git repository): http://www.math.u-bordeaux1.fr/~durufle/seldon.tar.gz

On my computer, the three eigensolvers work fine on the example file.

Best regards. 
From: Sumit K. Nath <skn123@gm...>  2014-08-13 10:23:01

Marc, I was wondering if you had an example of interfacing Seldon with ARPACK and ANASAZI. I think that was what Google detected when I gave it a search term.

On Wed, Aug 13, 2014 at 1:37 PM, Marc Durufle <marc.durufle@...> wrote:
> Hi,
> I am sending you the files eigenvalue_test.cc (which is a unit test file for
> the computation of eigenvalues of dense matrices) and
> eigenvalue_solver_test.cc (which is dedicated to sparse matrices). Only the
> second file currently uses the functions implemented in
> EigenvalueSolver.hxx; the first file uses the Lapack interface
> (Lapack_Eigenvalue.cxx).
> Best regards.
From: Marc Durufle <marc.durufle@in...>  2014-08-13 08:07:32

From: Vivien Mallet <dev@vi...>  2014-08-13 00:18:30

Hello,

"Sumit K. Nath" <skn123@...> writes:
> I was unable to find this file listed in the following page
> http://seldon.sourceforge.net/doc5.2/eigenvalue.php

The example file "test/program/eigenvalue_test.cpp" is missing. So are "EigenvalueSolver.hxx" and "EigenvalueSolver.cxx". I am not sure what happened in this version (5.2) of Seldon. The documentation is ahead of the source code.

> Is there some other place I should be looking at?

I cannot find the example file "test/program/eigenvalue_test.cpp", but all the functions may be found in the following Git repository: git://gitorious.org/seldon/marcduruflessrc.git In "computation/interfaces/eigenvalue/", you will find "EigenvalueSolver.hxx" and "EigenvalueSolver.cxx". This repository is being merged into the reference repository, so the next stable version should include them.

Marc Duruflé implemented these and probably has the example file "test/program/eigenvalue_test.cpp". Marc, could you send it and also commit it to your repository? Thanks.

Best regards,
Vivien Mallet. 
From: Sumit K. Nath <skn123@gm...>  2014-08-12 15:32:36

Hello,

I was unable to find this file listed in the following page: http://seldon.sourceforge.net/doc5.2/eigenvalue.php

Is there some other place I should be looking at?

Thanks and regards 
From: Marc Durufle <marc.durufle@in...>  2014-06-05 09:35:18

Hi,

Currently Seldon is mainly interfaced with the Arpack eigenvalue solver. I plan to interface it with other eigenvalue solvers using the same approach, so that you could declare a SparseEigenProblem object and select the eigenvalue solver.

Usually I deal with real symmetric mass matrices, and I did not test the interface extensively for other types of mass matrices; it turned out that the interface was not working for every type of mass matrix. I have corrected this misbehaviour, and I have tested all the combinations of matrices (the mass matrix and stiffness matrix can each be real or complex, symmetric or unsymmetric); it works fine now. If you want to test the corrections, you can download my repository: https://www.gitorious.org/seldon/marcduruflessrc/

As you have noticed, SparseEigenProblem is declared with the following set of parameters:

    template<class T, class MatStiff,
             class MatMass = Matrix<double, Symmetric, ArrayRowSymSparse> >

In this declaration, I have set a default value for the parameter MatMass, so that if you instantiate an object by:

    SparseEigenProblem<double, Matrix<double, General, ArrayRowSparse> > var_eig;

it will consider that the mass matrix is real symmetric (which is the usual case). Moreover, if the mass matrix is equal to the identity, you can type:

    var_eig.InitMatrix(K);

where K is the stiffness matrix, and a call to GetEigenvaluesEigenvectors will compute the eigenvalues of the stiffness matrix K (since M = identity). So it seems logical to keep the type of the mass matrix as an "optional" parameter. You can also make it explicit by declaring:

    SparseEigenProblem<double, Matrix<complex<double>, General, ArrayRowSparse>,
                       Matrix<complex<double>, General, ArrayRowSparse> > var_eig;

In that case, both the stiffness and mass matrices are complex unsymmetric matrices. 
The matrices have to be provided with InitMatrix:

    var_eig.InitMatrix(K, M);

and GetEigenvaluesEigenvectors will compute the eigenvalues of M^{-1} K. I would like to recall the computational modes (which are described in the documentation of Arpack):

REGULAR_MODE: in that mode, you can only compute large eigenvalues of M^{-1} K, where M is a real symmetric positive matrix (it is a requirement of the Arpack subroutines).
SHIFTED_MODE: in that mode, you can only compute eigenvalues close to a given value sigma (called the shift); M is again a real symmetric positive matrix.
BUCKLING_MODE: in that mode, you can only compute eigenvalues close to the shift; K and M must be real symmetric positive.
CAYLEY_MODE: in that mode, you can only compute eigenvalues close to the shift; K is real symmetric and M must be real symmetric positive.

When M is diagonal, you can solve a standard eigenvalue problem by considering the matrix M^{-1/2} K M^{-1/2}. In order to achieve that, you just have to write:

    var_eig.SetDiagonalMass();

In that case, the diagonal is extracted from the sparse matrix, its square root is computed, and the Arpack subroutine solving a standard eigenvalue problem is called. The same kind of functionality is available if you want to exploit the Cholesky factorisation of M (the Cholesky factorisation will be computed by calling Cholmod); in that case we consider the matrix L^{-1} K L^{-T}, where M = L L^T. In order to solve a standard eigenvalue problem using the Cholesky factorisation of a real symmetric positive matrix, you have to write:

    var_eig.SetCholeskyFactoForMass();

As you see, in all the Arpack subroutines, M is always considered a symmetric positive matrix (real or hermitian, but in the interface I have dropped the hermitian case, because hermitian sparse matrices are not implemented in Seldon). 
In order to handle other types of mass matrices, I had implemented an additional mode:

INVERT_MODE: in that mode, you can compute small, large or clustered eigenvalues of M^{-1} K for any matrices M and K.

In that mode, I always call an Arpack subroutine for standard unsymmetric eigenvalue problems (since M^{-1} K is unsymmetric even when M and K are symmetric). So in the case where M and K are symmetric, this mode is usually less efficient than the other modes. I just had not tested the invert mode when M is not a real symmetric matrix. Now the code is corrected, so you can use this mode to solve your generalized eigenvalue problem:

    var_eig.SetComputationalMode(var_eig.INVERT_MODE);

Best regards. 
From: James Cook <j.cook@wa...>  2014-05-22 15:50:37

Hello Seldon,

I was wondering if there is any way of calculating the eigenvalues and eigenvectors of the generalized complex non-symmetric eigenvalue problem, where both the stiffness and mass matrices are complex? I see that the class Seldon::SparseEigenProblem<T, MatStiff, MatMass> is defined as:

    template<class T, class MatStiff,
             class MatMass = Matrix<double, Symmetric, ArrayRowSymSparse> >

rather than

    template<class T, class MatStiff, class MatMass>

where class MatMass = Matrix<complex<double>, General, ArrayRowSymSparse>. I suppose that there are reasons why this is the case. Are there plans to make the mass matrix type more general?

Thanks,
James

--
James W S Cook, +44 (0) 24 7657 3874
PS 120, Physical Science Building
CFSA, Dept. of Physics, University of Warwick, UK. 
From: Marc Durufle <marc.durufle@in...>  2014-03-29 16:15:38

Hi,

Actually, SuperLU uses Blas functions, which is attractive because some Blas libraries (for example the MKL) are especially optimized for Blas Level 3 routines. As a result, the solution of the linear system is more efficient. In order to compile with Blas, you need to link with Blas, for instance:

    g++ -I../.. -DSELDON_WITH_SUPERLU direct_test.cpp -I/Users/jonas/Dropbox/cpp/libraries/SuperLU_4.3/SRC -L/usr/local/lib -lsuperlu -Lblas_directory -lblas -Lcblas_directory -lcblas

When I compiled with this command line, it worked for my configuration. Blas and Cblas can be downloaded from netlib.org; they can be compiled with gfortran. If you want to use the Blas/Cblas functions contained in the MKL, you have to type:

    g++ -I../.. -DSELDON_WITH_SUPERLU direct_test.cpp -I/Users/jonas/Dropbox/cpp/libraries/SuperLU_4.3/SRC -L/usr/local/lib -lsuperlu -L/opt/intel/mkl/lib/ia32 -lmkl_gf -lmkl_gnu_thread -lmkl_core -fopenmp

In that case, you use the multithreaded MKL library, so it should run faster if your machine has many cores. If you still have problems, you can report them.

Best regards. 
From: Jonás Arias <jonasarias@gm...>  2014-03-28 16:02:41

Hello,

I am trying to use Seldon to solve a linear system with sparse matrices using SuperLU. I installed SuperLU successfully (following http://www.math.u-bordeaux1.fr/~durufle/seldon/direct.php), but when I tried to compile the example direct_test.cpp from the command line using:

    g++ -I../.. -DSELDON_WITH_SUPERLU direct_test.cpp -I/Users/jonas/Dropbox/cpp/libraries/SuperLU_4.3/SRC -L/usr/local/lib -lsuperlu

I obtain the error described below. I was wondering if there is a solution to this problem.

Best wishes,
Jonas

ERROR:

Undefined symbols for architecture x86_64:
  "_cblas_dnrm2", referenced from:
      double Seldon::Norm2<Seldon::MallocAlloc<double> >(Seldon::Vector<double, Seldon::VectFull, Seldon::MallocAlloc<double> > const&) in ccUKfsup.o
  "_cblas_drotg", referenced from:
      Seldon::GenRot(double&, double&, double&, double&) in ccUKfsup.o
  "_cblas_drotmg", referenced from:
      Seldon::GenModifRot(double&, double&, double&, double const&, double*) in ccUKfsup.o
  "_cblas_dscal", referenced from:
      void Seldon::Mlt<Seldon::MallocAlloc<double> >(double, Seldon::Vector<double, Seldon::VectFull, Seldon::MallocAlloc<double> >&) in ccUKfsup.o
  "_cblas_dznrm2", referenced from:
      double Seldon::Norm2<Seldon::MallocAlloc<std::complex<double> > >(Seldon::Vector<std::complex<double>, Seldon::VectFull, Seldon::MallocAlloc<std::complex<double> > > const&) in ccUKfsup.o
  "_cblas_srotg", referenced from:
      Seldon::GenRot(float&, float&, float&, float&) in ccUKfsup.o
  "_cblas_srotmg", referenced from:
      Seldon::GenModifRot(float&, float&, float&, float const&, float*) in ccUKfsup.o
  "_cblas_zscal", referenced from:
      void Seldon::Mlt<Seldon::MallocAlloc<std::complex<double> > >(std::complex<double>, Seldon::Vector<std::complex<double>, Seldon::VectFull, Seldon::MallocAlloc<std::complex<double> > >&) in ccUKfsup.o
  "_dgemm_", referenced from:
      _dgstrs in libsuperlu.a(dgstrs.o)
  "_dgemv_", referenced from:
      _dcolumn_bmod in libsuperlu.a(dcolumn_bmod.o)
      _dpanel_bmod in libsuperlu.a(dpanel_bmod.o)
      _dsnode_bmod in libsuperlu.a(dsnode_bmod.o)
      _sp_dtrsv in libsuperlu.a(dsp_blas2.o)
  "_dtrsm_", referenced from:
      _dgstrs in libsuperlu.a(dgstrs.o)
  "_dtrsv_", referenced from:
      _dcolumn_bmod in libsuperlu.a(dcolumn_bmod.o)
      _dpanel_bmod in libsuperlu.a(dpanel_bmod.o)
      _dsnode_bmod in libsuperlu.a(dsnode_bmod.o)
      _sp_dtrsv in libsuperlu.a(dsp_blas2.o)
  "_zgemm_", referenced from:
      _zgstrs in libsuperlu.a(zgstrs.o)
  "_zgemv_", referenced from:
      _sp_ztrsv in libsuperlu.a(zsp_blas2.o)
      _zcolumn_bmod in libsuperlu.a(zcolumn_bmod.o)
      _zpanel_bmod in libsuperlu.a(zpanel_bmod.o)
      _zsnode_bmod in libsuperlu.a(zsnode_bmod.o)
  "_ztrsm_", referenced from:
      _zgstrs in libsuperlu.a(zgstrs.o)
  "_ztrsv_", referenced from:
      _sp_ztrsv in libsuperlu.a(zsp_blas2.o)
      _zcolumn_bmod in libsuperlu.a(zcolumn_bmod.o)
      _zpanel_bmod in libsuperlu.a(zpanel_bmod.o)
      _zsnode_bmod in libsuperlu.a(zsnode_bmod.o)
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status 
From: Marc Durufle <marc.durufle@in...>  2014-03-17 21:21:47

Hi,

You can set the stopping criterion in the iteration object. The classical way is to use the constructor:

    Iteration<double> iter(nb_max_iteration, 1e-19);
    BiCgStab(A, x, b, precond, iter);

You can also set this field later if you prefer:

    Iteration<double> iter;
    iter.SetTolerance(1e-19);
    BiCgStab(A, x, b, precond, iter);

The default value is 1e-6, as you noticed. If you are using double precision accuracy, the algorithm should not converge below 1e-14/1e-15, so 1e-19 is rather a stopping criterion appropriate for higher precision, such as quadruple precision.

Concerning the parallelization issue, the BiCgStab algorithm is implemented sequentially; no optimization has been applied to the algorithm, and the code is easily readable in the file computation/solver/iterative/BiCgStab.cxx. However, I have already used this sequential solver in parallel through the MPI interface. This can be achieved by providing distributed vectors (for A, x and b) and distributed matrices instead of sequential vectors/matrices. When I provide distributed vectors and matrices, the functions Mlt (for the matrix-vector product), DotProd, DotProdConj and Norm2 (for scalar products) are overloaded for these structures, so that BiCgStab runs successfully in parallel.

I don't know why you obtain good performance with the implementation of BiCgStab provided in Seldon. It might be explained by the use of MKL functions to compute matrix-vector products, sums of vectors and scalar products. As you can notice, the algorithm is written using the functions Add, DotProd, Mlt, etc. For dense matrices and vectors, these functions call BLAS subroutines, and when these routines are provided by the MKL (a multithreaded library), BiCgStab will benefit from the multithreaded implementation of the MKL. In my opinion, you should check the state of the threads when launching your own solver and the Seldon solver, to see how they exploit the cores of the machine. It is also possible that your own solver launches threads on the same core, thus drastically decreasing the efficiency instead of increasing it.

Good luck, best regards. 
From: Samar Vafai <samar.vafai@st...>  2014-03-10 11:57:56

Dear Sir/Madam,

I use the Seldon BiCgStab iterative solver in my code, and I get very good performance compared to the same iterative solver coded by myself. I use the OpenMP API and SSE registers to speed up my solver, but your solver is still much faster than mine. I would like to ask how you perform these computations to get such high performance. Have you done any parallelization or vectorization?

In addition, in your case the solver converges for a tolerance of 1e-6, but in my case, to obtain convergence, I need to set the tolerance equal to 1e-19, and I don't know where the difference comes from when both solvers do the same computations. I couldn't find any information related to this issue on your site.

I would be very grateful if you could help me with this. Thank you in advance; I look forward to hearing from you.

Samar 
From: Marc Durufle <marc.durufle@in...>  2014-02-24 08:53:47

Hi,

The syntax for user-defined preconditioning is shown in the file computation/solver/iterative/Iterative.hxx:

    //! Base class for preconditioners
    class Preconditioner_Base
    {
    public :
      Preconditioner_Base();

      // solving M z = r
      template<class Matrix1, class Vector1>
      void Solve(const Matrix1& A, const Vector1& r, Vector1& z);

      // solving M^t z = r
      template<class Matrix1, class Vector1>
      void TransSolve(const Matrix1& A, const Vector1& r, Vector1& z);
    };

Here the class Preconditioner_Base is the identity preconditioning; it merely does z = r in both functions. For your own preconditioning, you can write for example:

    class MyPreconditioning
    {
    public :
      // solving M z = r
      template<class Matrix1, class Vector1>
      void Solve(const Matrix1& A, const Vector1& r, Vector1& z)
      {
        // apply the preconditioning to vector r; the result is stored in z
        // the matrix A is given in case you need it to apply the preconditioning
      }

      // solving M^t z = r
      template<class Matrix1, class Vector1>
      void TransSolve(const Matrix1& A, const Vector1& r, Vector1& z)
      {
        // apply the transpose of the preconditioning to vector r; the result is stored in z
      }
    };

The transpose preconditioning is needed for algorithms involving the transpose matrix (e.g. BiCg, Qmr). Other algorithms do not need the transpose matrix, and consequently do not need the transpose preconditioning (e.g. Gmres, TfQmr). Once your preconditioning is implemented, it is used as follows:

    Iteration<double> iter; // class storing the stopping criteria
    Matrix<double> A;       // linear system to solve
    Vector<double> x, b;    // solution and right-hand side
    MyPreconditioning prec;

    // to see the residual at each iteration:
    iter.ShowFullHistory();

    // then you launch an iterative algorithm, for example the conjugate gradient
    Cg(A, x, b, prec, iter); // x should contain the solution
From: Yaniel <yaniel@ie...>  2014-02-21 08:15:12

Dear Seldon group,

I am a PhD student at the University of Hong Kong. I appreciate the Seldon C++ library for linear algebra, which I'm using in my research. I want to ask you about preconditioners for the iterative solvers. An SOR preconditioner is currently available in this package; however, a user-defined preconditioner is also in great demand. I tried to add my preconditioner in the same way as the SOR preconditioner, but it is not so clear to me how to do it. Could you tell me how to achieve this?

BRs,
Yanlin Li 
From: Marc Durufle <marc.durufle@in...>  2014-01-29 10:13:34

Hi,

1. By default, the iterative solvers set x to zero before starting the iterative algorithm. If you want to provide an initial guess, you can do the following:

    int nb_max_iteration = 100;
    double epsilon = 1e-6;
    Iteration<double> iter(nb_max_iteration, epsilon);

    // this is the most important line here:
    // with this line, iterative solvers won't set the initial guess to 0;
    // they will keep it as provided by the user
    iter.SetInitGuess(false);

    // then you can call your iterative solver, e.g. BiCgStab
    BiCgStab(A, x, b, prec, iter);

2. For the segmentation fault appearing in ILUT, I suggest you recompile in debug mode (with -DSELDON_DEBUG_LEVEL_4 on the compilation line) and run the code with gdb, so that you can see where the problem occurs and how to fix it.

Best regards. 
From: Vivien Mallet <dev@vi...>  2014-01-29 01:53:19

Hello,

Wolfram Ruehaak <w.ruehaak@...> writes:
> just a quick question: does Seldon support parallel computations, i.e.
> using OpenMP, MPI or simple multithreading?

Indirectly, as Seldon relies on third-party libraries for parallel computing (Pastix, MUMPS, PETSc).

Best regards,
Vivien Mallet. 
From: Vivien Mallet <Vivien.Mallet@in...>  2014-01-29 01:48:30

Hello,

Dragos Anastasiu <dragosanastasiu@...> writes:
> I am looking to build a Data Mining/Machine Learning library and, for now, use
> a pre-existing C++ Linear Algebra library. I've been scouring the Net for a
> while and have come across Seldon, and a few of its competitors, notably
> Flens, Armadillo, and Eigen. I was wondering if you are aware of any recent
> benchmarks using data of diverse sizes between Seldon and any of these other
> libraries.

Yes, there is this study that was conducted for the data assimilation library Verdandi [1]: http://verdandi.gforge.inria.fr/doc/linear_algebra_libraries.pdf

Best regards,
Vivien Mallet.

[1] http://verdandi.gforge.inria.fr/ 
From: Dragos Anastasiu <dragosanastasiu@ya...>  2014-01-28 15:35:18

Hi,

I am looking to build a Data Mining/Machine Learning library and, for now, use a pre-existing C++ Linear Algebra library. I've been scouring the Net for a while and have come across Seldon, and a few of its competitors, notably Flens, Armadillo, and Eigen. I was wondering if you are aware of any recent benchmarks using data of diverse sizes between Seldon and any of these other libraries.

Thanks,
David 
From: Wolfram Ruehaak <w.ruehaak@gm...>  2014-01-17 22:32:59

Dear all,

Just a quick question: does Seldon support parallel computations, i.e. using OpenMP, MPI or simple multithreading?

Kind regards,
Wolfram 
From: Mohammad Elmi <elmi.moh@gm...>  2014-01-12 13:14:26

Hello,

I am using Seldon with BiCgStab. I have some questions:

1. How can I set an initial guess? Whatever I set the x vector to, Seldon solves the system in a constant number of iterations.
2. When I set an ILUT preconditioner for the solver, I get a segmentation fault.

Thank you,
Elmi 
From: Vivien Mallet <Vivien.Mallet@in...>  2013-02-18 21:37:49

Hello,

Diego Gallardo <gallardo.d.e@...> writes:
> I came across Seldon only very recently and decided to give it a go. I'm
> trying to compile one of your examples, "iterative_test.cpp", in MSVC Express
> 2010.
>
> Unfortunately the compiler spits out a lot of error messages, all of them
> related to templates. I will focus on one of them, for the sake of argument.
> The error message happens in "gmres.cxx", and says:
>
> 1>d:\seldon\seldon-5.1.2\computation\solver\iterative\gmres.cxx(174): error
> C2784: 'void Seldon::Solve(Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator>
> &,Seldon::Vector<T,Seldon::VectFull,Allocator1> &)' : could not deduce
> template argument for 'Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator> &'
> from 'Seldon::Matrix<T,Prop,Storage>'
> 1>   with
> 1>   [
> 1>     T=Complexe,
> 1>     Prop=Seldon::General,
> 1>     Storage=Seldon::ColUpTriang
> 1>   ]
> 1>   d:\seldon\seldon-5.1.2\computation\interfaces\direct\sparsesolver.hxx(39) :
> see declaration of 'Seldon::Solve'
> 1>   c:\users\dg1\documents\visual studio
> 2010\projects\bemgen\bemgen\main.cpp(202) : see reference to function template
> instantiation 'int
> Seldon::Gmres<double,BlackBoxMatrix<T>,Seldon::DVect,Seldon::Preconditioner_Base>
> (MatrixSparse &,Vector1 &,const Vector1 &,Preconditioner
> &,Seldon::Iteration<Titer> &)' being compiled
> 1>   with
> 1>   [
> 1>     T=double,
> 1>     MatrixSparse=BlackBoxMatrix<double>,
> 1>     Vector1=Seldon::DVect,
> 1>     Preconditioner=Seldon::Preconditioner_Base,
> 1>     Titer=double
> 1>   ]
>
> The line in question is:
>
> Solve(H, s);
>
> Now, H is defined as:
>
> Matrix<Complexe, General, ColUpTriang> H(m+1,m+1);
>
> But for some reason the compiler seems to be trying to use
> Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator>
> instead of
> Seldon::Matrix<T,Prop,Seldon::ColUpTriang,Allocator>
>
> I cannot make it work. The rest of the error messages are similar, involving a
> confusion between RowSparse/ColSparse and ColUpTriang. Any suggestions?

I cannot reproduce the problem. You refer to "iterative_test.cpp", but the errors seem to refer to another file ("bemgen\main.cpp") and to a line that is not in "iterative_test.cpp" ("Solve(H, s)"). Could you send a minimal program that does not compile?

Thank you,
Vivien Mallet. 
From: Diego Gallardo <gallardo.e@gm...>  2013-02-12 10:12:54

Dear Seldon development team,

I'm not sure if there's anybody supporting this library anymore, but I have decided to drop you an email just in case. I came across Seldon only very recently and decided to give it a go. I'm trying to compile one of your examples, "iterative_test.cpp", in MSVC Express 2010.

Unfortunately the compiler spits out a lot of error messages, all of them related to templates. I will focus on one of them, for the sake of argument. The error message happens in "gmres.cxx", and says:

1>d:\seldon\seldon-5.1.2\computation\solver\iterative\gmres.cxx(174): error C2784: 'void Seldon::Solve(Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator> &,Seldon::Vector<T,Seldon::VectFull,Allocator1> &)' : could not deduce template argument for 'Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator> &' from 'Seldon::Matrix<T,Prop,Storage>'
1>   with
1>   [
1>     T=Complexe,
1>     Prop=Seldon::General,
1>     Storage=Seldon::ColUpTriang
1>   ]
1>   d:\seldon\seldon-5.1.2\computation\interfaces\direct\sparsesolver.hxx(39) : see declaration of 'Seldon::Solve'
1>   c:\users\dg1\documents\visual studio 2010\projects\bemgen\bemgen\main.cpp(202) : see reference to function template instantiation 'int Seldon::Gmres<double,BlackBoxMatrix<T>,Seldon::DVect,Seldon::Preconditioner_Base>(MatrixSparse &,Vector1 &,const Vector1 &,Preconditioner &,Seldon::Iteration<Titer> &)' being compiled
1>   with
1>   [
1>     T=double,
1>     MatrixSparse=BlackBoxMatrix<double>,
1>     Vector1=Seldon::DVect,
1>     Preconditioner=Seldon::Preconditioner_Base,
1>     Titer=double
1>   ]

The line in question is:

    Solve(H, s);

Now, H is defined as:

    Matrix<Complexe, General, ColUpTriang> H(m+1,m+1);

But for some reason the compiler seems to be trying to use

    Seldon::Matrix<T,Prop,Seldon::RowSparse,Allocator>

instead of

    Seldon::Matrix<T,Prop,Seldon::ColUpTriang,Allocator>

I cannot make it work. The rest of the error messages are similar, involving a confusion between RowSparse/ColSparse and ColUpTriang. Any suggestions?

Regards,
Diego 
From: Vivien Mallet <dev@vi...>  2013-02-04 20:46:04

Hello,

Sebastien Gilles <sebastien.gilles@...> writes:
> I didn't manage to compile test/program (and I still don't, but that's not my
> point at the moment). The issue was that the libraries I give in the
> SConstruct were not recognized (OpenMPI, Mumps, etc.)
>
> The problem was indeed in share/SConstruct: both include_path and library_path
> were added to CPPPATH. So replacing line 127 by:
>
> env.Append(LIBPATH = to_string_list("library_path"))
>
> solves the problem.
>
> This problem is present in both the 5.1.2 release and the development version.

Right. This is now fixed in my Git repository on Gitorious, and it will be part of version 5.2 (to be released soon). Thank you for reporting this bug.

Best regards,
Vivien Mallet. 
From: Sebastien Gilles <sebastien.gilles@in...>  2013-01-29 09:49:44

Hi,

I didn't manage to compile test/program (and I still don't, but that's not my point at the moment). The issue was that the libraries I give in the SConstruct were not recognized (OpenMPI, Mumps, etc.)

The problem was indeed in share/SConstruct: both include_path and library_path were added to CPPPATH. So replacing line 127 by:

    env.Append(LIBPATH = to_string_list("library_path"))

solves the problem. This problem is present in both the 5.1.2 release and the development version.

Best regards,
Sebastien Gilles
Inria / M3DISIM Team 