On Thu, 24 Jan 2008, Wolfgang Bangerth wrote:
> Now let's see what our friends from the competition have to say ;-)
Competition? Does this mean you've added triangles, tets, prisms and
pyramids while we weren't looking? ;-)
I think you've basically covered it - the biggest memory expenditure
is typically the system matrix, which when running on clusters should
be the same for either library. The only things I can think of to add:
libMesh does have hooks into matrix-free solvers to avoid that memory
usage, but they're fairly new; if you want to see an example using
them you'll have to check the mailing list as I don't think Ben's
committed it to the SVN head yet.
libMesh now also has a distributed ParallelMesh, but it too is new,
SVN-head only, and hasn't been tested on huge problems yet.
Still, if you're solving on a 256-CPU cluster, it will be a nice
option to have one copy of the mesh (plus a copy or three of each
"ghost" element and node) stored instead of 256 copies.
libMesh just hooks into PETSc and LASPACK for sparse linear algebra,
whereas deal.II has its own multithreaded linear solvers (which IIRC
were more efficient than PETSc?) for shared memory systems. If you're
running on a four-core workstation, for example, I think deal.II only
needs to store one copy of the mesh, whereas with libMesh you either
use ParallelMesh or end up storing four copies of a SerialMesh, one
per process. Can deal.II mix multithreading with PETSc? If so then
they've got the same advantage on clusters.
As John pointed out on the libMesh list (what, no cross-post to the
deal.II list? you can't start a fun flamewar on just one mailing
list...), your memory usage depends not just on how many DoFs you have
but on how they're connected. That's complicated, though - using tets
instead of hexes generally reduces your sparse matrix bandwidth
slightly, and reduces the number of pointers stored per element, but I
think the increase in the number of elements stored probably makes the
memory usage worse overall.