Makes sense! I was just making sure you weren't doing unnecessary work!
In that case - my favorite way to do the partitioning manually is creating a
Partitioner yourself... then attaching it to the mesh like so:
mesh.partitioner() = AutoPtr<Partitioner>(new
Then that partitioner will automatically be used when prepare_for_use() is called.
Not sure if that helps you or not....
On Tue, Nov 13, 2012 at 12:57 PM, Andrew E Slaughter <
> Currently, I am working on writing a new VTKIO class for writing, in
> parallel, the vtk file format. I was having trouble with the libMesh VTKIO
> class, so I am writing my own (I wrote code for another project that does
> this, so I am just adapting it for libMesh; it will write the file
> directly, w/o VTK which may be nice, one less dependency to worry about).
> Anyway, I started looking at calling partitioning functions when I was
> running mpiexec and was not getting the results I expected with the
> processor ids in the Mesh class. So, I thought I must need to partition the
> mesh. But, I must have a problem with MPI, which I am trying to figure out
> at the moment by having PETSc compile its own MPI and then make sure
> libMesh uses it. Right now I am using mpich2 installed with my package manager.
> On Tue, Nov 13, 2012 at 1:19 PM, Derek Gaston <friedmud@...> wrote:
>> You shouldn't need to use a partitioner directly... unless you're doing
>> something funky. Partitioning just happens "automagically" with libMesh
>> for most use cases. For instance you should be able to run any of the
>> examples in parallel (using mpiexec) and you'll see the mesh is
>> automatically partitioned.
>> What are you trying to do that would necessitate interacting directly
>> with a partitioner?
>>> On Tue, Nov 13, 2012 at 9:46 AM, Roy Stogner <roystgnr@...> wrote:
>>> On Tue, 13 Nov 2012, Andrew E Slaughter wrote:
>>> > Thanks for fixing my stupid mistake (3 weeks of classdef's in MATLAB
>>> > has messed with me). The program now runs, but when I run mpiexec -n
>>> > 2 I just get the following, which shows the mesh is not
>>> > parallel.
>>> Your previous code instantiated a Mesh, which (unless you configured
>>> with --enable-parmesh) defaults to SerialMesh, in which case you'd
>>> expect is_serial() to be true.
>>> But you'd still expect it to be partitioned into
>>> libMesh::n_processors() parts. If you "mpiexec -n 2" and still see
>>> libMesh::n_processors()==1 then there's something wrong with your MPI
>>> configuration. Not sure what, though. Could you have linked with one
>>> version of MPI but then run with mpiexec from a different version?
>>> You might want to use libMesh::out and libMesh::err instead of printf
>>> to stdout; libMesh uses C++ iostreams internally and sets
>>> sync_with_stdio(false), which gives a performance gain on some systems
>>> but which might scramble mixed stdio/iostreams output on others. If
>>> you can't avoid mixing the two, run your libMesh programs with
>>> --sync-with-stdio on the command line.
>>> Libmesh-users mailing list
> Andrew E. Slaughter, PhD
> Materials Process Design and Control Laboratory
> Sibley School of Mechanical and Aerospace Engineering
> 169 Frank H. T. Rhodes Hall
> Cornell University
> Ithaca, NY 14853-3801
> (607) 229-1829