Julian writes:
> The major issue that occurs to me regarding pyMPI is that most parallel
> scripts are by nature long running processes. This conflicts with the nature
> of a server, which is supposed to handle multiple clients and multiple tasks
> simultaneously. This suggests that the server itself would have to either
> queue requests or manage the slave processes. That seems to add a lot of
> complexity.
> Anyway let me know if you think it has any utility for you.
I have been very interested in using pyMPI to explore parallel mechanisms
other than the prominent SPMD style that MPI favors. There are some old
stubs (incomplete) for a SIMD style (sort of a *Python, for you old
Connection Machine buffs), a parallel version of the map() function that
works across a distributed array, a working prototype of the "remote
cooperating objects" (rco) model, I'm working on smoothing MPMD startup
for some folks at Argonne, etc....
Here it sounds like you want a distributed replacement for what is
typically done with threads. E.g. rank 0 takes requests and passes them
off to other workers for completion. In pyMPI the workers would be
distributed processes instead of local threads.
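As a rough local stand-in for that pattern (not pyMPI itself), here is how the same master/worker dispatch looks with Python's standard concurrent.futures, where a pool of threads plays the role of the non-zero worker ranks; the server/client names mirror the pyMPI sketch below but are purely illustrative:

```python
# Illustrative sketch only: threads stand in for pyMPI worker ranks.
from concurrent.futures import ThreadPoolExecutor

def client(a, b, c):
    # Plays the role of client() running on a worker rank.
    return a + b + c

def server(requests, workers=2):
    # Plays the role of server() on rank 0: farm each request out
    # to whichever worker is free, collect the results in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda req: client(*req), requests))

results = server([(i, i * 2, i * 3) for i in range(10)])
print(results)
```

The real win of doing this over MPI instead of threads is that the workers can live on other nodes, at the cost of the startup/queueing complexity Julian mentions.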
Suppose I introduced the concept of a pyMPI server class. Then you could
write something like:
import mpi

class MyServer(mpi.Server):
    def __init__(self, root=0):
        ....
    # define what you want to do in the main body
    def server(self):
        .....
        self.spawn(a,b,c,d) # This gets sent to some worker rank
        .....
        return
    # define what you want to do for each client
    def client(self, a, b, c, d):
        print 'Servicing request on rank', mpi.rank, a, b, c, d
        return

# Start the server. On rank 0 it is the master; the others are slaves
S = MyServer()
S.start()
Here's an example (full code attached):
class MyServer(mpi.Server):
    def server(self):
        for i in range(10):
            self.spawn(i*1, i*2, i*3)
        return
    def client(self, a, b, c):
        print 'on', mpi.rank, 'process', a, b, c
        return

S = MyServer()
S.start()
output:
on 1 process 0 0 0
on 1 process 2 4 6
on 2 process 1 2 3
on 2 process 3 6 9
on 2 process 5 10 15
on 1 process 4 8 12
on 2 process 7 14 21
on 1 process 6 12 18
on 2 process 9 18 27
on 1 process 8 16 24
A more complete version would need to allow the clients themselves to be
parallel (on a sub-communicator), so you could both process requests in
parallel and process each individual request in parallel.
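One way to picture that (again just a sketch, in the spirit of an MPI communicator split rather than any actual pyMPI API) is to reserve rank 0 as the master and carve the remaining ranks into fixed-size sub-groups, each of which would service one request collectively:

```python
# Illustrative sketch: partition worker ranks into sub-groups that
# could each become a sub-communicator servicing one request.
def split_ranks(size, group_size):
    # Rank 0 stays the master; ranks 1..size-1 form sub-groups.
    workers = list(range(1, size))
    return [workers[i:i + group_size]
            for i in range(0, len(workers), group_size)]

groups = split_ranks(9, 4)   # 9 ranks total, sub-groups of 4 workers
print(groups)                # rank 0 would spawn() onto whole groups
```

Each spawn() would then target a group rather than a single rank, and the client code would run SPMD within its sub-communicator.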
Cheers,
Pat
--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
Patriotism is supporting your country all the time and the government when
it deserves it. -- Mark Twain, author and humorist (1835-1910)