From: Roy Stogner <roystgnr@ic...> - 2013-10-30 20:58:18
On Wed, 30 Oct 2013, ernestol wrote:
> Dr Oden suggested that I check with you
Normally writing to libmesh-users is a superset of checking with me;
sorry I've been too swamped to jump in this week.
> The interesting part is that when I run the code (for Nelx, Nely and Nelz > 150)
> on 1, 2, or 3 processors the code runs to completion. But on 4 or more I have some
> memory issues.
Yup - any serialized data structures end up being replicated on each
MPI rank you use; this is a problem when your MPI ranks are splitting
up a single pool of memory on a node. That was one of the motivations
for adding threading support to libMesh, IIRC.
> I saw the parallel_mesh but the code didn't work even with it. Not sure if I
> can use it because of the warning "Don't use this class unless you're developing or
> debugging it."
ParallelMesh is actually pretty stable now, but what's killing you is
that the meshing support utilities aren't all distributed-memory
parallelized yet. In particular, I believe build_cube still builds
and partitions a serialized mesh first and then distributes that;
you're dying before getting to the distribution stage.
One thing you might try to get a ParallelMesh working is to start by
pulling a coarser mesh out of build_cube(), then doing a few uniform
refinements on that to get your fine mesh. Only the coarsest mesh
ever exists in a serialized state in that case.
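A minimal sketch of that approach (assuming the MeshTools::Generation::build_cube and MeshRefinement APIs from libMesh of that era; the element counts are illustrative, chosen so that three uniform refinements of 19 elements per side give 152 > 150):

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/parallel_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_refinement.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // Distributed-memory mesh; only the coarse mesh below
  // ever exists in a serialized state.
  ParallelMesh mesh (init.comm());

  // Build a coarse 19x19x19 cube...
  MeshTools::Generation::build_cube (mesh, 19, 19, 19,
                                     0., 1., 0., 1., 0., 1.,
                                     HEX8);

  // ...then refine uniformly three times: 19 * 2^3 = 152
  // elements per side, without ever serializing the fine mesh.
  MeshRefinement mesh_refinement (mesh);
  mesh_refinement.uniformly_refine (3);

  return 0;
}
```

This is a sketch, not a drop-in replacement; the refinement level and coarse dimensions would need tuning to match the resolution your problem actually needs.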
> I wonder what I can do with my code to solve bigger problems. I sent the same
> question to the libmesh users forum but it seems to be a problem in the partitioning.
Yup; you're fitting the unpartitioned mesh on one processor but then
the extra overhead of (Par)Metis and our interface to them pushes you
over the top. It looks like people are getting better results
switching to the space-filling-curve partitioner, though; have you
tried that yet?
mesh.partitioner() = AutoPtr<Partitioner>(new SFCPartitioner());
before the build_cube() call.
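In context, that would look something like the fragment below (a sketch; it assumes the sfc_partitioner.h header and the AutoPtr smart pointer used by libMesh at the time):

```cpp
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/sfc_partitioner.h"

using namespace libMesh;

// ... inside main(), after constructing LibMeshInit init(argc, argv):
Mesh mesh (init.comm());

// Swap in the space-filling-curve partitioner *before* the mesh
// is built, so build_cube() partitions with it instead of the
// default (Par)Metis partitioner and its extra memory overhead.
mesh.partitioner() = AutoPtr<Partitioner>(new SFCPartitioner());

MeshTools::Generation::build_cube (mesh, 150, 150, 150,
                                   0., 1., 0., 1., 0., 1.,
                                   HEX8);
```

The only essential point is the ordering: the partitioner must be assigned before build_cube() runs, since that call partitions the mesh as part of preparation.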