MPI parallelism

Support for MPI-based parallelism has been added to the SVN version of the code.

The domain is decomposed by recursive bisection, so only 2^k processors are supported efficiently. Ghost layers are used to mirror particles from neighbouring processors, and particle data is exchanged only once per force evaluation, i.e. computations on ghost particles are duplicated. Communication is not scheduled explicitly (e.g. via graph coloring and the like); instead, MPI's asynchronous communication primitives are used.

This is not the most efficient approach: there is a massive amount of literature on domain decomposition, communication scheduling and other algorithms that are far more efficient than the somewhat naive scheme used here. This implementation is, however, just a first proof of concept and should not be treated as anything more.
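For readers unfamiliar with the pattern, the asynchronous-exchange idea described above boils down to something like the following pseudocode-level sketch (illustrative only, not mdcore's actual code; `part_t`, the buffers and the counts are placeholders): post all the non-blocking receives for incoming ghost particles, fire off the matching sends, and let MPI overlap the transfers, so no explicit schedule is needed.

```c
#include <mpi.h>

/* Placeholder particle type for illustration. */
typedef struct { double x[3], v[3]; } part_t;

/* Swap ghost-layer particles with each neighbouring rank.  No ordering
   or coloring is imposed: all transfers are posted up front and MPI is
   left to complete them in any order. */
void exchange_ghosts(part_t **send_buf, int *send_count,
                     part_t **recv_buf, int *recv_count,
                     int *neigh, int nr_neigh, MPI_Comm comm) {
    MPI_Request reqs[2 * nr_neigh];
    for (int k = 0; k < nr_neigh; k++)      /* post all receives first */
        MPI_Irecv(recv_buf[k], recv_count[k] * sizeof(part_t), MPI_BYTE,
                  neigh[k], 0, comm, &reqs[k]);
    for (int k = 0; k < nr_neigh; k++)      /* then all sends */
        MPI_Isend(send_buf[k], send_count[k] * sizeof(part_t), MPI_BYTE,
                  neigh[k], 0, comm, &reqs[nr_neigh + k]);
    MPI_Waitall(2 * nr_neigh, reqs, MPI_STATUSES_IGNORE);
}
```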

An example (hybrid) has been added to show how to set up (essentially just calling engine_split) and run (using engine_exchange) a distributed-memory parallel simulation with mdcore.
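Based on the description above, the overall shape of such a simulation loop is roughly the following (the argument lists and the integration call are assumptions on my part; the actual example is the authoritative reference):

```c
/* Hypothetical outline -- signatures are guesses, not mdcore's API. */
engine_split( &e );                /* decompose the domain over the ranks */
for ( step = 0 ; step < nr_steps ; step++ ) {
    engine_exchange( &e );         /* swap ghost-layer particles, once
                                      per force evaluation */
    /* ... evaluate forces and integrate as usual ... */
}
```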

Posted by Pedro Gonnet 2011-06-22
