[Pympi-users] pyMPI Newbie question
From: Pat M. <pat...@ll...> - 2004-12-10 16:12:06
First of all, thanks for using pyMPI -- I truly love
Python and parallel programming, so it is nice to
be able to work on both at the same time.
-- Pat
********************************************************
Greetings all... I hadn't noticed that anyone was actually
signed into the pympi-users list, and I had not actually
subscribed myself way back when the list was created [oops!].
I only became [re]aware of its existence when a large message
bounced into the moderator's [me] box.
So, anyway, now at least I'm lurking on the list and can
help with any problems.
********************************
Now, on to actually giving some useful advice...
There are two easy ways to spread information in pyMPI.
The first is bcast(). This sends the same information
to all processors. This typically looks like:
if mpi.rank == 0:
    xyz = << do some work here >>
    info = mpi.bcast(xyz)
else:
    info = mpi.bcast()
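Spelled out as a complete little script (a minimal sketch; the
dictionary contents are just made-up example data, the mpi calls
are the ones shown above):

import mpi

if mpi.rank == 0:
    # Rank 0 builds the data and broadcasts it to everyone
    params = {"dt": 0.01, "steps": 100}
    info = mpi.bcast(params)
else:
    # Every other rank calls bcast() with no argument and receives the data
    info = mpi.bcast()

# Every rank now holds its own copy of the same dictionary
print "rank %d got %s" % (mpi.rank, info)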
The other typical way is to use scatter(). This is more appropriate
for arrays and lists....
if mpi.rank == 0:
    original_A = << some work >>
    A = mpi.scatter(original_A)
else:
    A = mpi.scatter()
In the above, each rank gets a [nearly] equal piece of the array.
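For example (again just a sketch; range(100) is stand-in data):

import mpi

if mpi.rank == 0:
    # Rank 0 owns the full list and scatters pieces to all ranks
    original_A = range(100)
    A = mpi.scatter(original_A)
else:
    # The other ranks just call scatter() and receive their piece
    A = mpi.scatter()

# With 4 ranks, each piece is 25 elements long
print "rank %d holds %d elements" % (mpi.rank, len(A))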
---- %< -----------------------------------
The flip side of spreading stuff is to bring it back together.
One way is to use reduce (or allreduce). Say you want the
average of a value spread over the ranks...
avg = mpi.allreduce(x, mpi.SUM) / mpi.size
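For instance (a sketch where x is just the rank number, so the
answer is easy to check by hand):

import mpi

# Each rank contributes one value; allreduce sums them on every rank
x = float(mpi.rank)
avg = mpi.allreduce(x, mpi.SUM) / mpi.size

# On 4 ranks: (0 + 1 + 2 + 3) / 4 = 1.5 everywhere
print "rank %d computed avg = %g" % (mpi.rank, avg)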
If you want to concatenate a bunch of lists spread over the ranks,
use gather (or allgather).
big_list = mpi.gather(small_lists) # Note: big_list is None on rank 1, 2, 3, ...
If you want one item from each rank, do
all_items = mpi.gather([item])
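Putting that together (a sketch; the item is just a tuple built
from the rank so you can see where each piece came from):

import mpi

# Each rank contributes a one-element list; gather concatenates them
item = ("rank", mpi.rank)
all_items = mpi.gather([item])

if mpi.rank == 0:
    # Only rank 0 sees the combined list; use allgather() if every rank needs it
    print "gathered %d items: %s" % (len(all_items), all_items)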
Pat
--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
The most dangerous of all falsehoods is a slightly distorted truth.
-- G.C. Lichtenberg (1742-1799)