From: Arvind A. <arv...@gm...> - 2008-07-10 10:22:59
|
Dear Developers and Users,

Firstly, thank you for this piece of software! I have been using the Debian package for a month now and have been impressed with its capabilities. However, since I need to enable VTK and SLEPc, I have been trying to compile libmesh from source. I need SLEPc since I am trying to solve some quantum physics problems. This results in the errors I list below.

Some details regarding my configuration (the computer is a Pentium D system):
(a) gcc-4.2 and g++-4.2 for compilation.
(b) mpich 1.2.7 installed from the Debian repo.
(c) Compiled Petsc-2.3.3-p13 from source with the following config:
config/configure.py --with-cc=gcc --with-cxx=g++ --with-fc=g77 --with-mpi-dir=/usr/lib/mpich/ --with-debugging=1 --with-shared
(d) Compiled Slepc-2.3.3 (including latest patches) from source.
(e) Libmesh config:
./configure --prefix=/usr/local/libmesh --enable-slepc --enable-vtk --with-vtk-include=/usr/include/vtk-5.0
=================================================================================
With libmesh-0.6.2, this results in the following error:

Compiling C++ (in debug mode) src/numerics/slepc_eigen_solver.C...
src/numerics/slepc_eigen_solver.C: In member function 'void SlepcEigenSolver<T>::init()':
src/numerics/slepc_eigen_solver.C:73: error: 'EPSOrthogonalizationRefinementType' was not declared in this scope
src/numerics/slepc_eigen_solver.C:73: error: expected `;' before 'refinement'
src/numerics/slepc_eigen_solver.C:75: error: 'refinement' was not declared in this scope
src/numerics/slepc_eigen_solver.C:75: error: there are no arguments to 'EPSGetOrthogonalization' that depend on a template parameter, so a declaration of 'EPSGetOrthogonalization' must be available
src/numerics/slepc_eigen_solver.C:75: error: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
src/numerics/slepc_eigen_solver.C:76: error: 'EPS_MGS_ORTH' was not declared in this scope
src/numerics/slepc_eigen_solver.C:76: error: there are no arguments to 'EPSSetOrthogonalization' that depend on a template parameter, so a declaration of 'EPSSetOrthogonalization' must be available
src/numerics/slepc_eigen_solver.C: In member function 'void SlepcEigenSolver<T>::init() [with T = double]':
src/numerics/slepc_eigen_solver.C:492: instantiated from here
src/numerics/slepc_eigen_solver.C:75: error: 'EPSGetOrthogonalization' was not declared in this scope
src/numerics/slepc_eigen_solver.C:75: error: 'EPSSetOrthogonalization' was not declared in this scope
make: *** [src/numerics/slepc_eigen_solver.i686-pc-linux-gnu.dbg.o] Error 1

If I disable slepc, I am able to compile the libmesh library. However, I get the following error when I run an example (say ex3):

$ ./ex3-dbg
0 - <NO ERROR MESSAGE> : Could not convert index -1246757120 into a pointer
The index may be an incorrect argument.
Possible sources of this problem are a missing "include 'mpif.h'",
a misspelled MPI object (e.g., MPI_COM_WORLD instead of MPI_COMM_WORLD)
or a misspelled user variable for an MPI object (e.g., com instead of comm).
[0] Aborting program !
[0] Aborting program!
p0_9932: p4_error: : 9039
=============================================================================
With libmesh from svn (revision 2924), I get the following errors:

Compiling C++ (in debug mode) src/solvers/equation_systems.C...
src/solvers/equation_systems.C: In member function 'void EquationSystems::reinit()':
src/solvers/equation_systems.C:134: error: 'n_sys' was not declared in this scope
src/solvers/equation_systems.C:141: error: 'n_sys' was not declared in this scope
make: *** [src/solvers/equation_systems.i686-pc-linux-gnu.dbg.o] Error 1

In fact, the above error with r2924 is reported even when I disable slepc.

Looking at the archives of libmesh-devel and libmesh-users, I understand that there are issues regarding incompatibilities arising between petsc minor releases. This could possibly explain why libmesh-0.6.2 generates errors against petsc-2.3.3-p13; however, I thought that the latest svn code of libmesh would compile against the latest petsc and slepc versions. Any ideas as to how I can resolve these issues? Also, please let me know which versions of Petsc / Slepc work successfully against libmesh-0.6.2 and the latest svn version. I look forward to your replies.

Thanking you,
With regards,
Arvind Ajoy
PhD Student, Department of Electrical Engineering
Indian Institute of Technology Madras, India
|
From: Roy S. <ro...@st...> - 2008-07-10 13:12:25
|
On Thu, 10 Jul 2008, Arvind Ajoy wrote:

> (d) Compiled Slepc-2.3.3 (including latest patches) from source.
>
> With libmesh-0.6.2, this results in the following error:
>
> src/numerics/slepc_eigen_solver.C:73: error:
> 'EPSOrthogonalizationRefinementType' was not declared in this scope

It looks like there's been a change in the SLEPc API since libMesh 0.6.2 was released; in the libMesh SVN head we've got any references to "EPS Orthogonalization" symbols wrapped in the "#if SLEPC_VERSION_LESS_THAN(2,3,3)" case of preprocessor directives. You'll need to either switch to the SVN version of libMesh or switch to an older version of SLEPc.
---
Roy
|
From: Arvind A. <arv...@gm...> - 2008-07-10 13:19:30
|
Dear Roy,

Thank you for your quick reply! Which versions of slepc and petsc do you use with the svn head? I will use the same!

I have tried using the libmesh SVN head too. As I mentioned in the previous post, I get the following error using slepc-2.3.3 and libmesh r2924 (the latest svn head):

Compiling C++ (in debug mode) src/solvers/equation_systems.C...
src/solvers/equation_systems.C: In member function 'void EquationSystems::reinit()':
src/solvers/equation_systems.C:134: error: 'n_sys' was not declared in this scope
src/solvers/equation_systems.C:141: error: 'n_sys' was not declared in this scope
make: *** [src/solvers/equation_systems.i686-pc-linux-gnu.dbg.o] Error 1

In fact, the above error with r2924 is reported even when I disable slepc. I guess you missed this in the previous post. My apologies... it did become quite long.

Thanks,
Arvind

On Thu, Jul 10, 2008 at 6:42 PM, Roy Stogner <ro...@st...> wrote:
>
> On Thu, 10 Jul 2008, Arvind Ajoy wrote:
>
>> (d) Compiled Slepc-2.3.3 (including latest patches) from source.
>>
>> With libmesh-0.6.2, this results in the following error:
>>
>> src/numerics/slepc_eigen_solver.C:73: error:
>> 'EPSOrthogonalizationRefinementType' was not declared in this scope
>
> It looks like there's been a change in the SLEPc API since libMesh
> 0.6.2 was released; in the libMesh SVN head we've got any references
> to "EPS Orthogonalization" symbols wrapped in the "#if
> SLEPC_VERSION_LESS_THAN(2,3,3)" case of preprocessor directives.
> You'll need to either switch to the SVN version of libMesh or switch
> to an older version of SLEPc.
> ---
> Roy
|
From: Roy S. <ro...@st...> - 2008-07-10 14:19:11
|
On Thu, 10 Jul 2008, Arvind Ajoy wrote:

> Compiling C++ (in debug mode) src/solvers/equation_systems.C...
> src/solvers/equation_systems.C: In member function 'void
> EquationSystems::reinit()':
> src/solvers/equation_systems.C:134: error: 'n_sys' was not declared in this
> scope

Try another svn update. It looks like John accidentally introduced a regression when he was trying to clean up some "unused variables" warnings for debug-mode-only variables, but he's got it fixed now.
---
Roy
|
From: John P. <jwp...@gm...> - 2008-07-10 13:31:21
|
On Thu, Jul 10, 2008 at 8:23 AM, John Peterson <jwp...@gm...> wrote:
> Hi Arvind,
>
> There were several interesting errors you came across!
>
> On Thu, Jul 10, 2008 at 5:22 AM, Arvind Ajoy <arv...@gm...> wrote:
>>
>> If I disable slepc, I am able to compile the libmesh library. However, I get
>> the following error when I run the example (say ex3)
>> $./ex3-dbg
>> 0 - <NO ERROR MESSAGE> : Could not convert index -1246757120 into a pointer
>> The index may be an incorrect argument.
>> Possible sources of this problem are a missing "include 'mpif.h'",
>> a misspelled MPI object (e.g., MPI_COM_WORLD instead of MPI_COMM_WORLD)
>> or a misspelled user variable for an MPI object (e.g.,
>> com instead of comm).
>
> I've seen this error arise when mpich1 is built with a Fortran
> compiler. There may be a previous posting in the mailing list on this
> topic. I believe it only affects 64-bit machines, where a pointer is
> stored as a 4-byte int for some reason. (So far this only happened to
> me on a Mac.) The fix was to compile PETSc only using C compilers and
> C-blas, IIRC.

Here's the relevant part of that email:

* Note: when I attempted to compile PETSc with Fortran compilers, none of the resulting executables would actually run. The runtime error message was

| 0 - <NO ERROR MESSAGE> : Could not convert index 12079072 into a pointer
| The index may be an incorrect argument.
| Possible sources of this problem are a missing "include 'mpif.h'",
| a misspelled MPI object (e.g., MPI_COM_WORLD instead of MPI_COMM_WORLD)
| or a misspelled user variable for an MPI object (e.g.,
| com instead of comm).
| [0] Aborting program !
| [0] Aborting program!
| p0_84805: p4_error: : 9039

This apparently has something to do with the MPI Fortran interface storing pointers as type INTEGER, which works on 32-bit machines but not on 64-bit ones. This website appears to have more info: http://www.pgroup.com/userforum/viewtopic.php?start=0&t=560

>> With libmesh from svn (release 2924), I get the following errors
>>
>> Compiling C++ (in debug mode) src/solvers/equation_systems.C...
>> src/solvers/equation_systems.C: In member function 'void
>> EquationSystems::reinit()':
>> src/solvers/equation_systems.C:134: error: 'n_sys' was not declared in this
>> scope
>> src/solvers/equation_systems.C:141: error: 'n_sys' was not declared in this
>> scope
>> make: *** [src/solvers/equation_systems.i686-pc-linux-gnu.dbg.o] Error 1
>
> This one is probably my fault. I recently removed some unused
> variables and may have gone too far! I'm looking into it now.

Yep, I never re-compiled in debug mode, but I'm doing it now and fixing the errors.

-- John
|
From: Mathias N. <mne...@tu...> - 2008-07-10 13:45:00
|
Hello,

I'm using PETSc Version 2.3.3 with the following configuration:

./config/configure.py --download-f-blas-lapack=1 --with-mpi --download-mpich=1 --with-superlu=1 --download-superlu=1 --with-superlu_dist=1 --download-superlu_dist=1 --with-debugging=0 --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-mpi-compilers=0

Everything is fine (no compilation errors for PETSc and libmesh) until I run a program using libmesh in the following manner:

./programm -mat_type superlu -pc_type lu

The default solver GMRES still runs correctly. After a few tests I figured out that the problem doesn't exist if I compile PETSc in DEBUG mode. Does anyone have the same problem - is this a bug in PETSc (optimized mode)? To keep computational time low it would be nice to use SuperLU also in optimized mode.

Thanks in advance,
Mathias N.
|
From: John P. <jwp...@gm...> - 2008-07-10 13:49:12
|
On Thu, Jul 10, 2008 at 8:44 AM, Mathias Nenning <mne...@tu...> wrote:
> Hello,
>
> I'm using PETSc Version 2.3.3 with the following configuration:
>
> ./config/configure.py --download-f-blas-lapack=1 --with-mpi
> --download-mpich=1 --with-superlu=1 --download-superlu=1
> --with-superlu_dist=1 --download-superlu_dist=1 --with-debugging=0
> --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-mpi-compilers=0
>
> Everything is fine (no compilation errors for PETSc and libmesh) until I
> run a program using libmesh in the following manner:
>
> ./programm -mat_type superlu -pc_type lu

And what was the error message?

> The default solver GMRES still runs correctly.
> After a few tests I figured out that the problem doesn't exist if I
> compile PETSc in DEBUG mode.

Interesting.

> Does anyone have the same problem - is this a bug in PETSc (optimized mode)?
> To keep computational time low it would be nice to use SuperLU also in
> optimized mode.

Off the top of my head I'd have to guess it's the -mat_type argument, since that option is relatively untested as far as I know.

-- John
|
From: Mathias N. <mne...@tu...> - 2008-07-10 14:02:29
|
John Peterson wrote:
> On Thu, Jul 10, 2008 at 8:44 AM, Mathias Nenning <mne...@tu...> wrote:
>>
>> ./programm -mat_type superlu -pc_type lu
>
> And what was the error message?

No error message - an infinite loop, I think - I had to kill the program...

>> The default solver GMRES still runs correctly.
>> After a few tests I figured out that the problem doesn't exist if I
>> compile PETSc in DEBUG mode.
>
> Interesting.
>
>> Does anyone have the same problem - is this a bug in PETSc (optimized mode)?
>
> Off the top of my head I'd have to guess it's the -mat_type argument,
> since that option is relatively untested as far as I know.

This means I can't use SuperLU in optimized mode? Now I tried to run the program just using

./programm -pc_type lu

and it works... But now I'm not sure what kind of LU solver is used.

Mathias N.
|
From: John P. <jwp...@gm...> - 2008-07-10 14:36:12
|
On Thu, Jul 10, 2008 at 9:02 AM, Mathias Nenning <mne...@tu...> wrote:
> John Peterson wrote:
>> Off the top of my head I'd have to guess it's the -mat_type argument,
>> since that option is relatively untested as far as I know.
>
> This means I can't use SuperLU in optimized mode?

Not until we figure out the bug, anyway...

Taking a look at petsc_matrix.C, I see that we do call MatSetFromOptions() after MatCreateSeqAIJ and MatCreateMPIAIJ, which I'm assuming is the correct syntax. So it should be respecting your command line argument, but I'm not sure why it works in debug petsc and not in optimized petsc. Maybe a petsc bug? Anyway, it will be hard to diagnose without debugging symbols.

> Now I tried to run the program just using
>
> ./programm -pc_type lu
>
> and it works...
> But now I'm not sure what kind of LU solver is used.

I believe it's a serial LU solver only.

-- John
|
From: Mathias N. <mne...@tu...> - 2008-07-10 14:39:23
|
John Peterson wrote:
> On Thu, Jul 10, 2008 at 9:02 AM, Mathias Nenning <mne...@tu...> wrote:
>> This means I can't use SuperLU in optimized mode?
>
> Not until we figure out the bug, anyway...
>
> Taking a look at petsc_matrix.C, I see that we do call
> MatSetFromOptions() after MatCreateSeqAIJ and MatCreateMPIAIJ, which
> I'm assuming is the correct syntax. So it should be respecting your
> command line argument, but I'm not sure why it works in debug petsc
> and not in optimized petsc. Maybe a petsc bug?

That's my guess, too.

> Anyway, it will be hard to diagnose without debugging symbols.
>
>> But now I'm not sure what kind of LU solver is used.
>
> I believe it's a serial LU solver only.

Thanks anyway, I'm fine with that for the moment!
|
From: Mathias N. <mne...@tu...> - 2008-07-11 07:15:28
|
Hello,

I'm using the class NewmarkSystem. For solving the equation system I need to use a direct solver. For this reason I start my program like this:

./programm -pc_type lu

The only drawback is that the solver now computes the LU decomposition at every time step, as far as I know. But this is only needed in the first time step; for further time steps only the forward and backward substitution is needed. So, is there an easy way to handle this problem?

Thanks in advance,
Mathias N.
|
From: John P. <jwp...@gm...> - 2008-07-11 14:12:49
|
On Fri, Jul 11, 2008 at 2:15 AM, Mathias Nenning <mne...@tu...> wrote:
>
> So, is there an easy way to handle this problem?

You may need to look around the petsc documentation. I believe you can also use LU as a *preconditioner*, in which case the iterative solver converges in one iteration. You may be able to avoid recomputing the preconditioner if the matrix is not changing, and continue solving in one iteration...

-- John
|
From: John P. <jwp...@gm...> - 2008-07-11 14:14:27
|
On Fri, Jul 11, 2008 at 9:12 AM, John Peterson <jwp...@gm...> wrote:
> On Fri, Jul 11, 2008 at 2:15 AM, Mathias Nenning <mne...@tu...> wrote:
>>
>> So, is there an easy way to handle this problem?
>
> You may need to look around the petsc documentation. I believe you
> can also use LU as a *preconditioner*, in which case the iterative

Oops, of course, this is what you are already doing. Sorry, haven't had coffee yet :-)

But I still think there may be a way to tell PETSc to avoid recomputing the preconditioner at every timestep.

-- John
|
From: David K. <dav...@gm...> - 2008-07-11 14:29:29
|
> Oops, of course, this is what you are already doing. Sorry, haven't
> had coffee yet :-)
>
> But I still think there may be a way to tell PETSc to avoid
> recomputing the preconditioner at every timestep.

From the petsc manual:

"When solving multiple linear systems of the same size with the same method, several options are available. To solve successive linear systems having the same preconditioner matrix (i.e., the same data structure with exactly the same matrix elements) but different right-hand-side vectors, the user should simply call KSPSolve() multiple times. The preconditioner setup operations (e.g., factorization for ILU) will be done during the first call to KSPSolve() only; such operations will not be repeated for successive solves. To solve successive linear systems that have different preconditioner matrices (i.e., the matrix elements and/or the matrix data structure change), the user must call KSPSetOperators() and KSPSolve() for each solve. See Section 4.1 for a description of various flags for KSPSetOperators() that can save work for such cases."

Also, it says that in order to use LU, you should use the option "-ksp_type preonly -pc_type lu".

- Dave
|
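To make the manual excerpt above concrete, here is a rough call-sequence sketch against the PETSc 2.3.3-era API. It is not compilable on its own: it assumes an already-created KSP object ksp, a system matrix A, vectors b and x, and a step count n_steps, and all error checking is omitted.

```cpp
// SAME_PRECONDITIONER tells PETSc the preconditioner matrix (here A
// itself) is unchanged between solves, so the factorization is kept.
KSPSetOperators(ksp, A, A, SAME_PRECONDITIONER);
KSPSetFromOptions(ksp);  // honors -ksp_type preonly -pc_type lu

// The first solve pays for the LU factorization ...
KSPSolve(ksp, b, x);

// ... subsequent solves with new right-hand sides reuse it; only the
// forward/backward substitution is performed.
for (int step = 1; step < n_steps; ++step)
{
  // update b for this time step, then:
  KSPSolve(ksp, b, x);
}
```

If the matrix does change between time steps, KSPSetOperators() has to be called again with a flag such as SAME_NONZERO_PATTERN or DIFFERENT_NONZERO_PATTERN, as the manual excerpt notes.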
From: Mathias N. <mne...@tu...> - 2008-07-14 12:12:33
|
When I use the command line option

-ksp_type preonly -pc_type lu

I have to set

ierr = KSPSetInitialGuessNonzero (_ksp, PETSC_FALSE);

(line 225 in "petsc_linear_solver.C"); otherwise I get the error message

[0]PETSC ERROR: Running KSP of preonly doesn't make sense with nonzero initial guess you probably want a KSP type of Richardson!

... but still PETSC is doing the LU decomposition at every time step... For this reason I set the option:

ierr = KSPSetOperators(_ksp, matrix->mat(), precond->mat(), SAME_PRECONDITIONER);

(line 384 in "petsc_linear_solver.C")

Now everything is fine... But I think it's pretty weird that PETSC gets the option "-pc_type lu" but not "-ksp_type preonly"...

Also it would be nice to have some direct access to the "ksp" object to change default settings... This is possible in class "PetscLinearSolver" but not from its base class "LinearSolver", which is the object given in the NewmarkSystem... For now I'm fine with doing the changes directly in the source code...

Thanks - Mathias

David Knezevic wrote:
>> Oops, of course, this is what you are already doing. Sorry, haven't
>> had coffee yet :-)
>>
>> But I still think there may be a way to tell PETSc to avoid
>> recomputing the preconditioner at every timestep.
>
> From the petsc manual:
> "When solving multiple linear systems of the same size with the same
> method, several options are available. To solve successive linear
> systems having the same preconditioner matrix (i.e., the same data
> structure with exactly the same matrix elements) but different
> right-hand-side vectors, the user should simply call KSPSolve()
> multiple times. The preconditioner setup operations (e.g.,
> factorization for ILU) will be done during the first call to
> KSPSolve() only; such operations will not be repeated for successive
> solves. To solve successive linear systems that have different
> preconditioner matrices (i.e., the matrix elements and/or the matrix
> data structure change), the user must call KSPSetOperators() and
> KSPSolve() for each solve. See Section 4.1 for a description of
> various flags for KSPSetOperators() that can save work for such
> cases."
>
> Also, it says that in order to use LU, you should use the option
> "-ksp_type preonly -pc_type lu"
>
> - Dave

--
DDI Mathias Nenning
Institute of Applied Mechanics    Tel: +43/(0)316/873-7143
Graz University of Technology     Fax: +43/(0)316/873-7641
Technikerstr. 4/II                e-mail: mnenning@TUGraz.at
8010 Graz                         http://www.mech.TUGraz.at
|
From: John P. <jwp...@gm...> - 2008-07-14 13:19:55
|
On Mon, Jul 14, 2008 at 7:12 AM, Mathias Nenning <mne...@tu...> wrote:
>
> Also it would be nice to have some direct access to the "ksp" object to
> change default settings... This is possible in class "PetscLinearSolver"
> but not from its base class "LinearSolver", which is the object given in
> the NewmarkSystem...

Since we don't have an abstraction of a "ksp" object for all linear solvers, we can't return any object like that from the base class... However, you are free to dynamically cast a LinearSolver pointer or reference to a PetscLinearSolver and pull the ksp object from there.

-- John
|