On Wed, Apr 18, 2012 at 11:47 AM, Jens Lohne Eftang <jleftang@...> wrote:
> PETSc's make test runs ex19 with 1 and 2 MPI processes and ex5f with 1 MPI
> process successfully.
I'm guessing the problem is with the way libMesh uses PETSc's compilers.
I'm not sure exactly how libMesh deals with it when PETSc doesn't define a C++ compiler.
Perhaps then an mpicxx from another mpi install ends up being used?
Maybe John can answer that.
Without digging deep into libMesh, I would recommend using a different PETSc build,
configured with --with-clanguage=C++ to ensure that PETSc configures a C++ compiler.
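A sketch of such a reconfigure, using the PETSc directory and MPICH install path quoted in this thread (the PETSC_ARCH name here is made up; pick whatever you like):

```shell
# Reconfigure PETSc to set up a C++ compiler, pointing it at the
# existing MPICH install from this thread. Arch name is hypothetical.
cd /home/eftang/fem_software/petsc-3.2-p5
./configure --with-clanguage=C++ \
  --with-mpi-dir=/home/eftang/fem_software/mpich2-install \
  PETSC_ARCH=arch-linux2-cxx-opt
make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-cxx-opt all test
```

With --with-clanguage=C++, PETSc's configured compiler wrapper is a C++ one, so libMesh should inherit something sensible.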
> mpicc -show returns
In light of what I said above this may be irrelevant (since we need to
figure out which C++ (not C) compiler libMesh uses),
but still: which mpicc is this? The fact that it links executables
against a different mpich than the one you built
makes me suspect that this isn't the right mpicc (i.e., not the one PETSc
was built with).
> gcc -I/usr/local/include -L/usr/local/lib -Wl,-rpath,/usr/local/lib
> -lmpich -lopa -lmpl -lrt -lpthread
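One quick way to check whether the bare `mpicc` above is the same one PETSc was built with (these are diagnostic commands to run on the machine in question; output is environment-specific):

```shell
# Which mpicc is first on PATH, and where does it live?
command -v mpicc

# The mpicc PETSc was configured against (path from this thread):
/home/eftang/fem_software/mpich2-install/bin/mpicc -show

# Compare with the bare wrapper's link line; the -L and -Wl,-rpath
# directories should point at the same mpich install in both outputs.
mpicc -show
```

If the two `-show` outputs name different install prefixes, the wrong wrapper is on the PATH.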
> Thanks again!
> On 04/17/2012 11:23 PM, Dmitry Karpeev wrote:
> The PETSc configuration seems to be fine.
> Are you able to run PETSc tests?
> cd /home/eftang/fem_software/petsc-3.2-p5
> make PETSC_DIR=/home/eftang/fem_software/petsc-3.2-p5
> PETSC_ARCH=arch-linux2-c-opt test
> The compiler that gets configured by PETSc is a wrapper C compiler
> inherited from mpich
> Check to see what shared linker paths it really includes:
> /home/eftang/fem_software/mpich2-install/bin/mpicc -show
> It's possible that libMesh overrides compilers, though.
> Since libMesh needs a C++ compiler and in your case PETSc doesn't
> configure one,
> I'm not sure what libMesh ends up using to compile its C++ code.
> If that's the problem, you might want to reconfigure PETSc with --with-clanguage=C++.
>> On Tue, Apr 17, 2012 at 9:47 AM, John Peterson <jwpeterson@...> wrote:
>> > On Mon, Apr 16, 2012 at 5:45 PM, Jens Lohne Eftang <jleftang@...> wrote:
>> > On 04/16/2012 07:31 PM, John Peterson wrote:
>> >> On Mon, Apr 16, 2012 at 5:23 PM, Jens Lohne Eftang <jleftang@...>
>> >> wrote:
>> >>> Thanks for your reply.
>> >>> The libmesh_LIBS output has references to mpi, -lmpich and -lmpichf90.
>> >>> Would it help to post the whole output?
>> >> Are they preceded by something like -Wl,-rpath, in the libmesh_LIBS
>> >> output?
>> >> Perhaps something like:
>> >> -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
>> > Yes, for example ...
>> > -Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib
>> > -L/home/eftang/fem_software/mpich2-install/lib
>> > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.6
>> > -L/usr/lib/gcc/x86_64-linux-gnu/4.4.6
>> > -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu
>> > -L/lib/x86_64-linux-gnu -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s
>> > -lmpichf90 -lgfortran ...
>> > it's a rather long output though...
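For a long link line like this, a tiny standalone helper (not part of libMesh or PETSc) can list the -Wl,-rpath, -L, and -l tokens so it's easier to spot whether more than one MPI install is being pulled in; the sample link line below is abridged from the output quoted above:

```python
# Split a link line (e.g. the libmesh_LIBS value) into rpath dirs,
# library search dirs, and libraries, to spot mixed MPI installs.
def link_dirs(flags):
    rpaths, libdirs, libs = [], [], []
    for tok in flags.split():
        if tok.startswith("-Wl,-rpath,"):
            rpaths.append(tok[len("-Wl,-rpath,"):])
        elif tok.startswith("-L"):
            libdirs.append(tok[2:])
        elif tok.startswith("-l"):
            libs.append(tok[2:])
    return rpaths, libdirs, libs

line = ("-Wl,-rpath,/home/eftang/fem_software/mpich2-install/lib "
        "-L/home/eftang/fem_software/mpich2-install/lib "
        "-L/usr/local/lib -lmpich -lopa -lmpl")
rpaths, libdirs, libs = link_dirs(line)
# Any directory mentioning mpi/mpich; more than one prefix here would
# suggest two different MPI installs on the same link line.
mpi_dirs = {d for d in rpaths + libdirs if "mpi" in d}
print(mpi_dirs)
```

If this set contains both /usr/local/lib-style system paths and the mpich2-install prefix, the link line is mixing two MPICH builds.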
>> >> What is the output if you run 'nm' on the MPI shared libraries of your
>> >> system, and grep for mpi_bcast_ ?
>> > nm * | grep mpi_bcast_ in the mpich2-install/lib folder returns
>> > 0000000000000000 T mpi_bcast_
>> > 0000000000000000 W mpi_bcast__
>> > 00000000000164f0 T mpi_bcast_
>> > 00000000000164f0 W mpi_bcast__
>> > 00000000000164f0 T mpi_bcast_
>> Hmm... unfortunately I don't see anything that's obviously wrong yet.
>> Is there any chance you have changed/upgraded compilers between the
>> time you built mpich/petsc and the time you tried to build libmesh?
>> One other thing you might try: have petsc download mpich along with
>> everything else instead of using your existing mpich install...
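That suggestion could look something like the following (--download-mpich is a real PETSc configure option; the PETSC_ARCH name is made up):

```shell
# Sketch: let PETSc download and build its own MPICH, instead of
# pointing it at the existing install. Hypothetical arch name.
cd /home/eftang/fem_software/petsc-3.2-p5
./configure --with-clanguage=C++ --download-mpich \
  PETSC_ARCH=arch-linux2-cxx-dl
make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-cxx-dl all test
```

That guarantees the MPI wrappers PETSc records are the ones it was actually built with, removing one variable from the debugging.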
>> Libmesh-users mailing list