Forgot to reply-all...
> Hi Ben,
>> What about
>> libMesh::init (argc, argv, MPI_COMM_SELF);
>> Check out src/base/libmesh.C...
> Thanks! That works! However, it means that an independent PETSc
> instance is created on each processor. It turns out that what I need is
> a global PETSc instantiation (because I use this to partition the
> PETSc dense matrix that I'm using to store the solution data).
> After a bit of trial and error I've got this working. I've
> used a --force-sequential flag to initialize libMesh's
> MPI_Communicator (on line 146 of libmesh.C) to MPI_COMM_SELF, but then
> set PETSc's communicator to COMM_WORLD_IN on line 159...
> The only downside is that I get the performance data printed out
> once for each processor, because within libMesh each processor
> thinks that it is processor 0. How do you turn off the performance
> data printout?
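
For the archives, the communicator split being described is roughly the
following (just a sketch in plain MPI/PETSc terms, not the actual
--force-sequential patch to libmesh.C; the libMesh::init overload taking a
communicator is the one quoted above, and reassigning PETSC_COMM_WORLD
before PetscInitialize is the standard PETSc way to choose its
communicator):

    // Sketch only -- illustrates the idea, not the actual libmesh.C patch.
    #include <mpi.h>
    #include <petsc.h>

    int main (int argc, char** argv)
    {
      MPI_Init (&argc, &argv);

      // PETSc lives on the global communicator, so the dense matrix
      // holding the solution data can be partitioned across all processors.
      PETSC_COMM_WORLD = MPI_COMM_WORLD;   // must happen before PetscInitialize
      PetscInitialize (&argc, &argv, NULL, NULL);

      // libMesh, by contrast, would be handed MPI_COMM_SELF so that each
      // processor behaves as if it were a serial job -- this is what the
      // --force-sequential patch arranges inside libmesh.C:
      //   libMesh::init (argc, argv, MPI_COMM_SELF);

      // ... per-processor libMesh work here, global PETSc objects on
      //     PETSC_COMM_WORLD ...

      PetscFinalize ();
      MPI_Finalize ();
      return 0;
    }
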
>> On 8/30/07 4:59 PM, "David Knezevic" wrote:
>>>> Ah, good point. We want MPI to be running, but with a different
>>>> communicator on each process, is that right? I've actually had
>>>> that happen by accident before when OpenMPI and MPICH had conflicting
>>>> binaries on the same machine, but I have no idea how you would go
>>>> about doing that on purpose... ;-)
>>> All that is needed is to initialize PETSc on each processor with
>>> PETSC_COMM_SELF, rather than PETSC_COMM_WORLD. This is exactly what
>>> is done when we use PETSc on a single processor, which is why I'm
>>> trying to fool libMesh into thinking it's running on a single
>>> processor.
>>> - dave
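
And for completeness, the per-processor initialization Dave describes
corresponds, roughly, to restricting PETSc's world communicator to the
local process before initializing it -- again just a sketch, assuming MPI
is initialized by hand first:

    #include <mpi.h>
    #include <petsc.h>

    int main (int argc, char** argv)
    {
      MPI_Init (&argc, &argv);

      // Each processor gets its own independent, "serial" PETSc by using
      // a communicator containing only itself.
      PETSC_COMM_WORLD = MPI_COMM_SELF;
      PetscInitialize (&argc, &argv, NULL, NULL);

      // ... independent PETSc work on every processor ...

      PetscFinalize ();
      MPI_Finalize ();
      return 0;
    }
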