From: Jens L. E. <jle...@gm...> - 2012-04-15 20:11:24
Dear all,

I want to use the MUMPS direct sparse solvers. I've (seemingly) successfully configured and compiled PETSc with MUMPS, and then configured and compiled libMesh. Compiling programs works fine, but when I try to run my program with

    ./program -pc_factor_mat_solver_package mumps

I get the runtime error

    symbol lookup error: /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/lib/libpetsc.so: undefined symbol: mpi_bcast_

when my program attempts the solve. Any ideas?

Best,
Jens Lohne Eftang
From: John P. <jwp...@gm...> - 2012-04-16 14:03:35
On Sun, Apr 15, 2012 at 2:11 PM, Jens Lohne Eftang <jle...@gm...> wrote:
> Compiling programs works fine, but when I try to run my program with
>
>     ./program -pc_factor_mat_solver_package mumps
>
> I get the runtime error
>
>     symbol lookup error: /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/lib/libpetsc.so: undefined symbol: mpi_bcast_
>
> when my program attempts the solve.

It almost seems like the location of your MPI libraries is not in LD_LIBRARY_PATH and it has not been set with linker options, e.g.

    -Wl,-rpath,/opt/packages/mpich2/mpich2-1.3.2/gnu-opt/lib

Could you check the libmesh_LIBS output of 'make echo' for any references to MPI?

libMesh usually gets its MPI flags from PETSc, specifically the petscconf/petscvariables file, so you should check there as well to see if those are correct...

--
John
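For example, something like this (the paths are guesses based on the directories in your error message, and the conf/ location assumes petsc-3.2's usual layout):

    # In the libMesh tree: print the link line and look for MPI entries
    make echo | grep -i mpi

    # Which MPI shared library does libpetsc.so actually resolve against?
    ldd /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/lib/libpetsc.so | grep -i mpi

    # What MPI settings did PETSc record at configure time?
    grep -i mpi /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/conf/petscvariables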
From: Dmitry K. <ka...@mc...> - 2012-04-16 14:13:50
This seems to be a PETSc configuration problem. Have your MPI libraries moved by any chance? From the form of the missing symbol, I suspect the Fortran name mangling may be the culprit. Can you send PETSc's configure.log so I can take a look?

Thanks.
Dmitry.

On Mon, Apr 16, 2012 at 9:03 AM, John Peterson <jwp...@gm...> wrote:
> It almost seems like the location of your MPI libraries is not in
> LD_LIBRARY_PATH and it has not been set with linker options, e.g.
>
> -Wl,-rpath,/opt/packages/mpich2/mpich2-1.3.2/gnu-opt/lib
>
> Could you check the libmesh_LIBS output of 'make echo' for any
> references to MPI?
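In the meantime, one thing worth checking is which Fortran-mangled form libpetsc.so itself expects (the path below is taken from the error message):

    # 'U' entries are symbols libpetsc.so needs but does not define itself
    nm -D /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/lib/libpetsc.so | grep -i mpi_bcast

If that shows mpi_bcast_ as undefined while the MPI library found at run time only exports mpi_bcast__ (or vice versa), a mangling mismatch would explain the error.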
From: Jens L. E. <jle...@gm...> - 2012-04-16 23:23:28
Thanks for your reply.

The libmesh_LIBS output has references to MPI: -lmpich and -lmpichf90. Would it help to post the whole output?

Jens

On 04/16/2012 10:03 AM, John Peterson wrote:
> Could you check the libmesh_LIBS output of 'make echo' for any
> references to MPI?
From: John P. <jwp...@gm...> - 2012-04-16 23:32:10
On Mon, Apr 16, 2012 at 5:23 PM, Jens Lohne Eftang <jle...@gm...> wrote:
> The libmesh_LIBS output has references to MPI: -lmpich and -lmpichf90. Would
> it help to post the whole output?

Are they preceded by something like -Wl,-rpath, in the libmesh_LIBS output? Perhaps something like:

    -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib

?

What is the output if you run 'nm' on the MPI shared libraries of your system, and grep for mpi_bcast_ ?

--
John
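For example (the library names are an assumption; depending on how MPICH2 was built, the Fortran bindings may live in libmpich.so, libfmpich.so, or libmpichf90.so):

    # list the dynamic symbols the MPICH2 shared libraries export
    nm -D /home/eftang/fem_software/mpich2-install/lib/libmpich.so | grep -i mpi_bcast
    nm -D /home/eftang/fem_software/mpich2-install/lib/libfmpich.so | grep -i mpi_bcast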
From: Jens L. E. <jle...@gm...> - 2012-04-16 23:45:36
On 04/16/2012 07:31 PM, John Peterson wrote:
> Are they preceded by something like -Wl,-rpath, in the libmesh_LIBS output?
>
> Perhaps something like:
>
> -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib

Yes, for example

    ... -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
    -L/home/eftang/fem_software/mpich2-install/lib
    -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.6 -L/usr/lib/gcc/x86_64-linux-gnu/4.4.6
    -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu
    -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu
    -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -lmpichf90 -lgfortran ...

it's a rather long output though...

> What is the output if you run 'nm' on the MPI shared libraries of your
> system, and grep for mpi_bcast_ ?

nm * | grep mpi_bcast_ in the mpich2-install/lib folder returns

    0000000000000000 T mpi_bcast_
    0000000000000000 W mpi_bcast__
    00000000000164f0 T mpi_bcast_
    00000000000164f0 W mpi_bcast__
    00000000000164f0 T mpi_bcast_
    00000000000164f0 W mpi_bcast__
    00000000000164f0 T mpi_bcast_
    00000000000164f0 W mpi_bcast__
    0000000000003611 T mpi_bcast_
    0000000000000000 W mpi_bcast_
    0000000000000000 W mpi_bcast__
    0000000000000000 T pmpi_bcast_
    0000000000000000 W pmpi_bcast__
    0000000000081f50 W mpi_bcast_
    0000000000081f50 W mpi_bcast__
    0000000000081f50 T pmpi_bcast_
    0000000000081f50 W pmpi_bcast__
    0000000000081f50 W mpi_bcast_
    0000000000081f50 W mpi_bcast__
    0000000000081f50 T pmpi_bcast_
    0000000000081f50 W pmpi_bcast__
    0000000000081f50 W mpi_bcast_
    0000000000081f50 W mpi_bcast__
    0000000000081f50 T pmpi_bcast_
    0000000000081f50 W pmpi_bcast__
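In case it helps, I can also check which MPI library the dynamic linker actually picks up when the program runs, e.g.:

    # which shared libraries the executable will load at run time
    ldd ./program | grep -i mpi

    # more verbose: trace the loader's library resolution during startup
    LD_DEBUG=libs ./program -pc_factor_mat_solver_package mumps 2>&1 | grep -i mpi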
From: John P. <jwp...@gm...> - 2012-04-17 14:47:51
On Mon, Apr 16, 2012 at 5:45 PM, Jens Lohne Eftang <jle...@gm...> wrote:
> nm * | grep mpi_bcast_ in the mpich2-install/lib folder returns
>
> 0000000000000000 T mpi_bcast_
> 0000000000000000 W mpi_bcast__
> 00000000000164f0 T mpi_bcast_
> 00000000000164f0 W mpi_bcast__
> 00000000000164f0 T mpi_bcast_

Hmm... unfortunately I don't see anything that's obviously wrong yet.

Is there any chance you have changed/upgraded compilers between the time you built mpich/petsc and the time you tried to build libmesh?

One other thing you might try: have PETSc download mpich along with everything else instead of using your existing mpich install...

--
John
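Something along these lines, for example (the exact option set is a guess; MUMPS support also needs ScaLAPACK/BLACS, and possibly ParMETIS, which PETSc can download as well):

    cd /home/eftang/fem_software/petsc-3.2-p5
    ./configure PETSC_ARCH=arch-linux2-c-opt \
        --download-mpich \
        --download-mumps --download-scalapack --download-blacs \
        --with-debugging=0
    make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-opt all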
From: Dmitry K. <ka...@mc...> - 2012-04-18 03:23:49
The PETSc configuration seems to be fine. Are you able to run the PETSc tests?

    cd /home/eftang/fem_software/petsc-3.2-p5
    make PETSC_DIR=/home/eftang/fem_software/petsc-3.2-p5 PETSC_ARCH=arch-linux2-c-opt test

The compiler that gets configured by PETSc is a wrapper C compiler inherited from mpich. Check to see what shared linker paths it really includes:

    /home/eftang/fem_software/mpich2-install/bin/mpicc -show

It's possible that libMesh overrides compilers, though. Since libMesh needs a C++ compiler and in your case PETSc doesn't configure one, I'm not sure what libMesh ends up using to compile its C++ code. If that's the problem, you might want to reconfigure PETSc with --with-clanguage=C++.

Dmitry.

On Tue, Apr 17, 2012 at 9:47 AM, John Peterson <jwp...@gm...> wrote:
> Is there any chance you have changed/upgraded compilers between the
> time you built mpich/petsc and the time you tried to build libmesh?
>
> One other thing you might try: have PETSc download mpich along with
> everything else instead of using your existing mpich install...
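A quick way to see which compilers PETSc actually recorded (this assumes petsc-3.2's usual $PETSC_ARCH/conf layout):

    grep -E '^(CC|CXX|FC) ' /home/eftang/fem_software/petsc-3.2-p5/arch-linux2-c-opt/conf/petscvariables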
From: Jens L. E. <jle...@gm...> - 2012-04-18 16:48:01
PETSc's make test runs ex19 with 1 and 2 MPI processes and ex5f with 1 MPI process successfully.

mpicc -show returns

    gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath,/usr/local/lib -lmpich -lopa -lmpl -lrt -lpthread

Thanks again!

Jens

On 04/17/2012 11:23 PM, Dmitry Karpeev wrote:
> The compiler that gets configured by PETSc is a wrapper C compiler inherited from mpich.
> Check to see what shared linker paths it really includes:
>
> /home/eftang/fem_software/mpich2-install/bin/mpicc -show
>
> If that's the problem, you might want to reconfigure PETSc with --with-clanguage=C++.
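For completeness, here is how I could check whether that mpicc is the one PETSc was built with (whether mpicxx exists in that bin/ directory is an assumption):

    # which MPI wrappers come first in the PATH
    which mpicc mpicxx

    # compare with the wrappers from the mpich2 install PETSc was configured against
    /home/eftang/fem_software/mpich2-install/bin/mpicc -show
    /home/eftang/fem_software/mpich2-install/bin/mpicxx -show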
From: Dmitry K. <ka...@mc...> - 2012-04-18 18:05:46
On Wed, Apr 18, 2012 at 11:47 AM, Jens Lohne Eftang <jle...@gm...> wrote:
> PETSc's make test runs ex19 with 1 and 2 MPI processes and ex5f with 1 MPI
> process successfully.

I'm guessing the problem is with the way libMesh uses PETSc's compilers. I'm not sure exactly how libMesh deals with it when PETSc doesn't define a C++ compiler. Perhaps then an mpicxx from another MPI install ends up being used? Maybe John can answer that.

Without digging deep into libMesh I would recommend using a different PETSc configuration with --with-clanguage=C++ to ensure that PETSc configures a C++ compiler.

> mpicc -show returns

In light of what I said above this may be irrelevant (since we need to figure out which C++ (not C) compiler libMesh uses), but still: which mpicc is this? The fact that it links executables against a different mpich than the one you built makes me suspect that this isn't the right mpicc (i.e., not the one PETSc was built with).

Thanks.
Dmitry.
From: John P. <jwp...@gm...> - 2012-04-18 18:33:29
On Wed, Apr 18, 2012 at 12:05 PM, Dmitry Karpeev <ka...@mc...> wrote:
> I'm guessing the problem is with the way libMesh uses PETSc's compilers.
> I'm not sure exactly how libMesh deals with it when PETSc doesn't define a
> C++ compiler.
> Perhaps then an mpicxx from another MPI install ends up being used?
> Maybe John can answer that.

I suppose it's possible if he has multiple MPI installations on his system?

libMesh uses the first mpicxx it finds in your path unless you specify

    CXX=/path/to/some/mpi/mpicxx ./configure ...

--
John
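Concretely, something like this when configuring libMesh (whether configure also honors CC, and the mpich2-install path, are assumptions based on this thread):

    cd /path/to/libmesh
    CXX=/home/eftang/fem_software/mpich2-install/bin/mpicxx \
    CC=/home/eftang/fem_software/mpich2-install/bin/mpicc \
    ./configure
    make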
From: Dmitry K. <ka...@mc...> - 2012-04-18 18:39:37
On Wed, Apr 18, 2012 at 1:32 PM, John Peterson <jwp...@gm...> wrote:
> I suppose it's possible if he has multiple MPI installations on his system?
>
> libMesh uses the first mpicxx it finds in your path unless you specify

I was under the impression that libMesh tried to use the compilers that PETSc configured (provided libMesh was itself configured with PETSc).

> CXX=/path/to/some/mpi/mpicxx ./configure ...

Okay, then I think the problem is that PETSc is linked against the MPI from /home/eftang/fem_software/mpich2-install, while the mpicxx that libMesh finds links the executable against a different mpich.

I would recommend reconfiguring PETSc using --with-clanguage=C++ and making sure that libMesh uses the same mpicxx as PETSc.

Dmitry.
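A sketch of that reconfiguration (only --with-clanguage=C++ and the MPI directory come from this thread; the MUMPS download flags and the new arch name are assumptions):

    cd /home/eftang/fem_software/petsc-3.2-p5
    ./configure PETSC_ARCH=arch-linux2-cxx-opt \
        --with-clanguage=C++ \
        --with-mpi-dir=/home/eftang/fem_software/mpich2-install \
        --download-mumps --download-scalapack --download-blacs \
        --with-debugging=0
    make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-cxx-opt all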
From: Jens L. E. <jle...@gm...> - 2012-04-18 19:24:47
I did have two MPIs on my system, which seems to have been the cause of the problem ('apt-get install paraview' had installed openmpi ....). Explicitly pointing the libMesh configuration to the mpich2 binaries made MUMPS work!

Thanks a lot!

Jens

On 04/18/2012 02:39 PM, Dmitry Karpeev wrote:
> Okay, then I think the problem is that PETSc is linked against the MPI from
> /home/eftang/fem_software/mpich2-install, while the mpicxx that libMesh finds
> links the executable against a different mpich.
>
> I would recommend reconfiguring PETSc using --with-clanguage=C++ and making sure
> that libMesh uses the same mpicxx as PETSc.