pypar-developers Mailing List for pypar - parallel programming with Python (Page 3)
|
From: Justus S. <jus...@gm...> - 2008-04-11 00:13:24
|
Hi there. In my programs a pypar.receive statement is CPU-blocking. Is there a way to stop it consuming my CPU while it waits for a message to come in, or is that inherent to MPI? Yours, Justus

On Apr 7, 2008, at 8:23 PM, Ole Nielsen wrote:
> Ah, I see. Your idea is great. The only question is what system to base it on.
> Let me know how it goes.
> Cheers
> Ole
>
> On Mon, Mar 31, 2008 at 2:25 AM, Justus Schwabedal <jus...@gm...> wrote:
>> As far as I know you can just help SETI by letting them use your computer,
>> but you cannot commit jobs yourself, which is what I was thinking of.
>>
>> On Mar 30, 2008, at 4:39 AM, Ole Nielsen wrote:
>> Hi again Justus
>> I think what you are saying is possible, although I suspect other frameworks
>> might be better suited for loosely coupled parallel jobs such as SETI@home-type
>> frameworks. MPI (and pypar) is specifically geared towards tightly coupled
>> parallel jobs on dedicated clusters. I guess I am suggesting that you scope
>> the options before settling on pypar. If you go ahead, though, I'll be very
>> interested in following the progress and commenting on designs.
>> Best regards
>> Ole
>>
>> On Fri, Mar 21, 2008 at 1:33 PM, Justus Schwabedal <jus...@gm...> wrote:
>>> Good morning Ole!
>>> I'd be glad to contribute a little, although I'm more on the novice than on
>>> the expert side concerning Python. I'm pretty new to MPI, but I started to
>>> work with it actively recently and I'm fairly interested.
>>> You can probably guess the potential lying in code distribution. Do you
>>> think something like a community parallelization would be possible with
>>> Python? I'm thinking of something similar to the SETI project, but such that
>>> everybody can commit jobs and they are distributed intelligently. What are
>>> your thoughts? I'm not quite sure about the technical problems occurring.
>>> After all, it would be entirely in keeping with GNU/Linux ideas. One would
>>> have to control CPU and memory usage of the "external processes" and the
>>> rights they are given on the machine. Well, I guess that's beyond the scope
>>> of pypar, but also beyond the possible work one could get done in a year or so.
>>> Yours, Justus
>>>
>>> On Mar 20, 2008, at 2:10 PM, Ole Nielsen wrote:
>>> Hi Justus
>>> Thank you very much for your mail and your interest in pypar.
>>> Pypar has steadily grown and served us and several others over the past
>>> 6-7 years. Many people have made contributions over the years and the whole
>>> idea is that pypar should be maintained by a community.
>>> I think your ideas are great and it wouldn't hurt to throw them out there
>>> for some feedback. If you are keen, I could make an account for you in the
>>> Subversion repository where pypar lives and your pypar-tools could evolve
>>> from there. That'd be great, actually.
>>> As for the ad hoc implementation, that'll be fine initially, with the
>>> understanding that we'd like to see the code base grow into robust, general
>>> and flexible tools in the future. One thing that I have found crucial is the
>>> use of a good unit test suite from the word go, as well as version control.
>>> I can assist with either.
>>> The idea of distributing the code as (executable, I assume) strings is
>>> definitely novel and interesting. I assume this would be useful for
>>> distributed systems that don't have a common NFS-mounted filesystem where
>>> source code is accessible by all compute nodes. Is that correct?
>>> Distributing code objects could be done, but the underlying implementation
>>> would serialise the objects as strings using the underlying MPI calls.
>>> Distributing strings is straightforward and probably more efficient.
>>> I am not aware of anyone else doing this, but it'd be worth a quick search
>>> or post to find out.
>>> Looking forward to hearing more
>>> Cheers
>>> Ole Nielsen
>>> Canberra, Australia
>>>
>>> On Mon, Mar 17, 2008 at 1:28 AM, Justus Schwabedal <jus...@gm...> wrote:
>>>> Cheers,
>>>> I found out about pypar a couple of weeks ago. I instantly started to
>>>> develop a distributed system which distributes source code in the form
>>>> of strings. I would like to contribute this to the project, maybe by
>>>> opening something like pypar-tools. My implementation is kind of ad hoc
>>>> and I would like to ask for some tips on this. Can I distribute code
>>>> objects instead of strings? Would that be more efficient? Questions like
>>>> that are my concern. Is somebody doing something similar?
>>>> Yours, Justus |
|
From: Justus S. <jus...@gm...> - 2008-03-16 14:28:56
|
Cheers, I found out about pypar a couple of weeks ago. I instantly started to develop a distributed system which distributes source code in the form of strings. I would like to contribute this to the project, maybe by opening something like pypar-tools. My implementation is kind of ad hoc and I would like to ask for some tips on this. Can I distribute code objects instead of strings? Would that be more efficient? Questions like that are my concern. Is somebody doing something similar? Yours, Justus |
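As an aside for readers of this thread, below is a minimal sketch of what shipping source code as strings over pypar could look like. It uses only the send/receive calls seen elsewhere on this page; the job format and the run_job name are invented purely for illustration and this is not Justus's actual implementation.

# Minimal sketch of shipping source code as strings over pypar
# (illustrative only; the run_job name and job format are hypothetical).
import pypar

rank = pypar.rank()

if rank == 0:
    # Master: the "job" is just Python source defining a function run_job().
    job_source = """
def run_job(x):
    return x * x
"""
    for worker in range(1, pypar.size()):
        pypar.send(job_source, destination=worker)   # strings are ordinary Python objects to pypar
        pypar.send(worker * 10, destination=worker)   # some input data for the job
    for worker in range(1, pypar.size()):
        result = pypar.receive(worker)
        print 'Result from worker %d: %s' % (worker, str(result))
else:
    # Worker: receive the source, exec it into a private namespace, run it.
    source = pypar.receive(0)
    namespace = {}
    exec source in namespace                          # Python 2 exec statement
    data = pypar.receive(0)
    pypar.send(namespace['run_job'](data), destination=0)

pypar.finalize()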
|
From: Ole N. <ole...@gm...> - 2008-02-09 11:35:31
|
Hi again James

I believe the standard way of ensuring that control info is separated from transmitted data is through the tagging mechanism. One may tag different types of messages differently; this is what we used in the Mandelbrot example I mentioned. However, I am totally open to other suggestions. If you have a simple example that shows the problem, I suggest you post it to the list and we can all look at it and see if it indeed reveals a bug or whether it is just the way the underlying MPI standard is supposed to work.

Best regards
Ole

On Feb 4, 2008 6:50 AM, James Philbin <phi...@gm...> wrote:
> Hi Ole,
>
> I believe the problem occurs because of the control messages that are
> sent before any data. Imagine two senders are blocking on a .send().
> The receiver blocks using any_source, the control message is received,
> but there is nothing in pypar (that I could see) that restricts the
> data to come from the same sender as the control message. I hope this
> is somewhat clear - I can elaborate further if necessary. I think what
> is needed is for the receiver to ensure the control message and data
> come from the same source - this is not currently the behaviour with
> any_source.
>
> James
>
> On Feb 2, 2008 10:46 PM, Ole Nielsen <ole...@gm...> wrote:
> > Hi James
> >
> > Thanks for your mail. There is a demo that uses any_source in a very
> > simple master-slave code for computing the Mandelbrot set. See
> > http://pypar.svn.sourceforge.net/viewvc/pypar/demos/mandelbrot_example/mandel_parallel_dynamic.py
> > Would you be able to verify if this one runs on your system?
> >
> > However, there is currently no unit test for that functionality - only the
> > demo, which is working correctly.
> >
> > If you have discovered a bug, the best thing would be to write the smallest
> > possible example that reveals the bug - i.e. an example that demonstrates
> > where it goes wrong. We can then turn that into a unit test and then address
> > the problem.
> >
> > Bear in mind that Pypar is just a wrapper around a C implementation of the
> > MPI standard, so it relies on that to be correct.
> >
> > Best regards
> > Ole Nielsen
> >
> > On Feb 2, 2008 1:13 AM, James Philbin <phi...@gm...> wrote:
> > > Hi,
> > >
> > > I think I've been hit by a bug in pypar relating to
> > > receive(pypar.any_source). The control message is received from
> > > any_source, which is correct, but then the data should be received from
> > > the sender of the control message, not any source.
> > >
> > > James |
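To make the tagging suggestion concrete, here is a small sketch (not code from the pypar repository; the tag values are chosen arbitrarily) of how a master might keep control and data messages apart with distinct tags and, as James suggests, pin the data receive to the source reported by the control message's status:

# Illustrative sketch: separate control and data by tag, and receive the data
# from the same rank that sent the control message (not from any_source again).
import pypar

CONTROL_TAG = 10   # tag values chosen for illustration
DATA_TAG = 20

rank = pypar.rank()

if rank == 0:
    for _ in range(pypar.size() - 1):
        # Accept a control message from whichever worker is ready ...
        ready, status = pypar.receive(pypar.any_source, tag=CONTROL_TAG,
                                      return_status=True)
        # ... but insist that the data comes from that same worker.
        data = pypar.receive(status.source, tag=DATA_TAG)
        print 'Got %s from worker %d' % (str(data), status.source)
else:
    pypar.send('ready', destination=0, tag=CONTROL_TAG)
    pypar.send(rank * rank, destination=0, tag=DATA_TAG)

pypar.finalize()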
|
From: Felix R. <fe...@ph...> - 2008-02-04 16:41:13
|
Hi,
attached is an updated demo3.py to work with Pypar 2.0.2_alpha.
Tested on OpenSUSE 10.2 i586 with OpenMPI 1.2.1.
Felix
#!/usr/bin/env python
"""
Master/Slave Parallel decomposition sample

Run as
   python demo3.py
or
   mpirun -np 2 demo3.py
(perhaps try number of processors more than 2)

OMN, GPC FEB 2002
"""

import sys

try:
    import numpy
except:
    raise Exception, 'Module numpy must be present to run pypar'

try:
    import pypar
except:
    raise Exception, 'Module pypar must be present to run parallel'

print 'Modules numpy, pypar imported OK'

WORKTAG = 1
DIETAG = 2


def master():
    numCompleted = 0

    print '[MASTER]: I am processor %d of %d on node %s'\
          %(MPI_myid, MPI_numproc, MPI_node)

    # start slaves distributing the first work slot
    for i in range(1, min(MPI_numproc, numWorks)):
        work = workList[i]
        pypar.send(work, destination=i, tag=WORKTAG)
        print '[MASTER]: sent work "%s" to node %d' %(work, i)

    # dispatch the remaining work slots on dynamic load-balancing policy
    # the quicker to do the job, the more jobs it takes
    for work in workList[MPI_numproc:]:
        result, status = pypar.receive(source=pypar.any_source, tag=WORKTAG,
                                       return_status=True)
        print '[MASTER]: received result "%s" from node %d'\
              %(result, status.source)

        numCompleted += 1
        pypar.send(work, destination=status.source, tag=WORKTAG)
        print '[MASTER]: sent work "%s" to node %d' %(work, status.source)

    # all works have been dispatched out
    print '[MASTER]: toDo : %d' %numWorks
    print '[MASTER]: done : %d' %numCompleted

    # I've still to take into the remaining completions
    while (numCompleted < numWorks):
        result, status = pypar.receive(source=pypar.any_source, tag=WORKTAG,
                                       return_status=True)
        print '[MASTER]: received (final) result "%s" from node %d'\
              %(result, status.source)
        numCompleted += 1
        print '[MASTER]: %d completed' %numCompleted

    print '[MASTER]: about to terminate slaves'

    # Tell slaves to stop working
    for i in range(1, MPI_numproc):
        pypar.send('#', destination=i, tag=DIETAG)
        print '[MASTER]: sent termination signal to node %d' %(i, )

    return


def slave():
    print '[SLAVE %d]: I am processor %d of %d on node %s'\
          %(MPI_myid, MPI_myid, MPI_numproc, MPI_node)

    while True:
        result, status = pypar.receive(source=0, tag=pypar.any_tag,
                                       return_status=True)
        print '[SLAVE %d]: received work "%s" with tag %d from node %d'\
              %(MPI_myid, result, status.tag, status.source)

        if (status.tag == DIETAG):
            print '[SLAVE %d]: received termination from node %d'\
                  %(MPI_myid, 0)
            return
        else:
            result = 'X' + result
            pypar.send(result, destination=0, tag=WORKTAG)
            print '[SLAVE %d]: sent result "%s" to node %d'\
                  %(MPI_myid, result, 0)


if __name__ == '__main__':
    MPI_myid = pypar.rank()
    MPI_numproc = pypar.size()
    MPI_node = pypar.get_processor_name()

    workList = ('_dummy_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j')
    numWorks = len(workList) - 1

    # FIXME, better control here
    if MPI_numproc > numWorks or MPI_numproc < 2:
        pypar.finalize()
        if MPI_myid == 0:
            print 'ERROR: Number of processors must be in the interval [2, %d].' %numWorks
        sys.exit(-1)

    if MPI_myid == 0:
        master()
    else:
        slave()

    pypar.finalize()
    print 'MPI environment finalized.'
|
|
From: Felix R. <fe...@ph...> - 2008-02-04 13:59:26
|
Hello, first of all: thanks for Pypar! It was very useful for the numerical calculations in my diploma thesis to be able to do simple parallel computing with Python and SciPy. And I still use it, now aiming for a PhD degree :-) I thought it might be interesting for you to know that I currently provide recent OpenSUSE packages with the help of the OpenSUSE Build Service at http://download.opensuse.org/repositories/home:/ferichter/ and will move them to the semi-official scientific software repository http://download.opensuse.org/repositories/science/ (where the older version 1.9.3 can still be found) if they prove stable. Best regards, Felix |
|
From: Ole N. <ole...@gm...> - 2008-02-02 22:46:17
|
Hi James

Thanks for your mail. There is a demo that uses any_source in a very simple master-slave code for computing the Mandelbrot set. See http://pypar.svn.sourceforge.net/viewvc/pypar/demos/mandelbrot_example/mandel_parallel_dynamic.py
Would you be able to verify if this one runs on your system?

However, there is currently no unit test for that functionality - only the demo, which is working correctly.

If you have discovered a bug, the best thing would be to write the smallest possible example that reveals the bug - i.e. an example that demonstrates where it goes wrong. We can then turn that into a unit test and then address the problem.

Bear in mind that Pypar is just a wrapper around a C implementation of the MPI standard, so it relies on that to be correct.

Best regards
Ole Nielsen

On Feb 2, 2008 1:13 AM, James Philbin <phi...@gm...> wrote:
> Hi,
>
> I think I've been hit by a bug in pypar relating to
> receive(pypar.any_source). The control message is received from
> any_source, which is correct, but then the data should be received from
> the sender of the control message, not any source.
>
> James |
|
From: Ole N. <ole...@gm...> - 2008-01-25 12:42:33
|
Good pickup - I have fixed it on the web site. Cheers Ole

On Jan 18, 2008 12:34 AM, Ilmar Wilbers <il...@si...> wrote:
> Hi,
>
> Just a short tip: the code at the end of the page located at
> http://datamining.anu.edu.au/~ole/pypar/ contains an error:
> pypar.finalize() should not be part of the else block.
>
> Sincerely, Ilmar |
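For anyone reading along, the point of Ilmar's correction is simply where pypar.finalize() sits. A minimal sketch of the corrected structure (paraphrasing the idea, not quoting the web-site example; the master/slave bodies here are trivial placeholders):

# finalize() must run on every rank, so it belongs outside the if/else.
import pypar

def master():
    print 'master on rank 0 of %d' % pypar.size()

def slave():
    print 'worker on rank %d' % pypar.rank()

if pypar.rank() == 0:
    master()
else:
    slave()

pypar.finalize()   # called by all ranks, not only by the slaves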
|
From: Ole N. <ole...@gm...> - 2008-01-25 12:35:37
|
Hello Bill

Thanks for your mail. First up, the raw_forms have been deprecated and replaced by the keyword argument 'buffer' (see the DOC file under "Programming for efficiency": http://pypar.svn.sourceforge.net/viewvc/pypar/documentation/DOC?view=markup). This is the same thing and is useful when you know that you are transmitting Numeric (numpy) arrays.

Does the unit test suite (test_pypar.py) run for you? I suspect demo3 just hasn't been updated. If you can update it, I'd appreciate it very much.

Documentation - uh oh. I wrote the DOC file, which gives a brief description of each function and what it returns. I also made some attempts at proper LaTeX documentation. My feeble attempts are available at http://pypar.svn.sourceforge.net/viewvc/pypar/documentation/manuals/ The reason they don't compile is that I never got around to finishing them. Sorry.

Non-blocking versions should be very easy to do if you follow the general skeleton of the rest of pypar. Are you sure you really need non-blocking sends and receives? Most implementations of MPI actually buffer the message and allow the program to move on. In addition, scalability rarely relies on this feature. But if you wish to implement them, I'd be more than happy for you to get access to the repository and incorporate the functions into pypar. I would love to devote some time to evolving pypar, but it has enough functionality for our uses and most of my time goes towards another FOSS project, ANUGA.

Let me know if this brief mail was helpful.

All the best
Ole Nielsen

On Jan 24, 2008 6:22 PM, Bill McKie <wil...@na...> wrote:
> Hi Ole
>
> I recently picked up the latest pypar distribution tarballs:
>
> pypar-2.0.2_alpha_36.tgz
> pypar_demos-2.0.2_alpha_36.tgz
> pypar_documentation-2.0.2_alpha_36.tgz
>
> and have been exploring pypar under FC6 & FC8 Linux with openmpi-1.1.4.
>
> Pypar from pypar-2.0.2_alpha_36.tgz installed well into the expected
> /usr/lib/python*/site-packages/ area.
>
> From pypar_demos-2.0.2_alpha_36.tgz, the demo programs ring_example.py
> and demo2.py ran OK with various numbers of MPI processes.
>
> But demo3.py appears to call some pypar functions that do not exist,
> e.g. pypar.Get_processor_name() and pypar.raw_receive().
>
> Starting an interactive python session, importing pypar, and looking at
> dir(pypar), I see that pypar.get_processor_name (lower case g) is there,
> but no pypar.raw_receive.
>
> Could the demo files in pypar_demos-2.0.2_alpha_36.tgz be out of sync
> with the corresponding pypar-2.0.2_alpha_36.tgz distribution? Or did my
> pypar install not include all the expected functions?
>
> I encountered fatal errors when I tried to process the .tex files under
> pypar_documentation-2.0.2_alpha_36.tgz with latex.
>
> Also, I'm wondering if there is a way to use non-blocking MPI send and
> receive with pypar?
>
> Is there documentation that shows what each pypar function returns?
>
> Thanks, Ole. I really appreciate the design of pypar to not require a
> special version of the python interpreter.
>
> Bill McKie
> NASA Ames |
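To illustrate the 'buffer' keyword Ole mentions, here is a sketch based only on the description above (it has not been checked against a particular pypar release, so the exact keyword behaviour should be confirmed in the DOC file): the receiving side pre-allocates a numpy array of matching size and type so the transfer can avoid generic pickling.

# Sketch of the buffered (in-place) receive described in the DOC file.
# Assumption: receive() fills and returns the supplied buffer.
import numpy
import pypar

N = 1000
myid = pypar.rank()

if myid == 0:
    x = numpy.arange(N, dtype='d')
    pypar.send(x, destination=1)
elif myid == 1:
    buf = numpy.zeros(N, dtype='d')       # pre-allocated receive buffer
    buf = pypar.receive(0, buffer=buf)    # data goes straight into buf
    print 'Received array with sum %f' % buf.sum()

pypar.finalize()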
|
From: Constantinos M. <cma...@gm...> - 2007-09-11 14:25:26
|
Ole Nielsen wrote:
> Hi all

Hello everyone! I hope you had a nice summer. I am answering with quite a delay, but it seems that I wasn't the only one on holiday :-).

> I was wondering if we, in your opinion, can take the current alpha
> release to beta now that most installation issues have been ironed out.
> The package itself seems to work fine with numpy on the platforms I
> have tested it on. If any of you have had any problems that haven't
> been resolved, please speak now or hold your peace forever :-)
> I know Constantinos has implemented a bsend functionality which hasn't
> yet been released. I propose making a beta release of what is in
> pypar_2.0.2alpha now and then subsequently roll Constantinos' code
> into the following release.

That's fine with me. That way I can test the integration of Bsend() with the current version of Pypar a little more and add some examples showing how to use Bsend().

> Please let me know what you think
> Cheers and thanks
> Ole

Cheers,
--
Constantinos |
|
From: Prabhu R. <pr...@ae...> - 2007-08-03 18:40:09
|
>>>>> "Ole" == Ole Nielsen <ole...@gm...> writes:
Ole> Hi all I was wondering if we, in your opinion, can take the
Ole> current alpha release to beta now that most installation
Ole> issues have been ironed out. The package itself seems to
Ole> work fine with numpy on the platforms I have tested it on. If
Ole> any of you have had any problems that haven't been resolved
Ole> please speak now or hold your peace forever :-)
I've been swamped with other things so haven't had a chance. I'd say
go for it. :-)
cheers,
prabhu
|
|
From: Ole N. <ole...@gm...> - 2007-08-02 12:24:25
|
Hi all I was wondering if we, in your opinion, can take the current alpha release to beta now that most installation issues have been ironed out. The package itself seems to work fine with numpy on the platforms I have tested it on. If any of you have had any problems that haven't been resolved, please speak now or hold your peace forever :-) I know Constantinos has implemented a bsend functionality which hasn't yet been released. I propose making a beta release of what is in pypar_2.0.2alpha now and then subsequently roll Constantinos' code into the following release. Please let me know what you think Cheers and thanks Ole |
|
From: Ole N. <ole...@gm...> - 2007-07-19 06:01:48
|
Vinu
Can you confirm that the original demo3 runs correctly on your system with
tags being passed around the way they should?
Cheers
Ole
On 7/14/07, Vinu Vikram <vv...@gm...> wrote:
>
> Hi Nielsen
> I have started using pypar for doing cluster computing. Since I am
> very new to this field, I have been learning things and trying to understand
> the demos from the pypar website. I have modified demo3.py and tried to run it.
> It is working except in the case of tags: the slave is not correctly
> getting the tag which the master sends to it. The slaves always show
> tag=1. I am attaching the program (test3.py) and the output (661.out) from
> it. Could you please help to sort out the problem?
> Thanks
> Vinu V.
|
|
From: Vinu V. <vv...@gm...> - 2007-07-13 17:14:25
|
Hi Nielsen
I have started using pypar for doing cluster computing. Since I am
very new to this field, I have been learning things and trying to understand
the demos from the pypar website. I have modified demo3.py and tried to run it.
It is working except in the case of tags: the slave is not correctly
getting the tag which the master sends to it. The slaves always show
tag=1. I am attaching the program (test3.py) and the output (661.out) from
it. Could you please help to sort out the problem?
Thanks
Vinu V.
#!/data/home/vvinu/software/local/bin/python

import numarray as n
import sys
import Numeric
import pypar

WORKTAG = 1
DIETAG = 2


def master():
    numCompleted = 0

    print "[MASTER]: I am processor %d of %d on node %s\n" %(MPI_myid, MPI_numproc, MPI_node)

    # start slaves distributing the first work slot
    for i in range(1, min(MPI_numproc, numWorks)):
        work = workList[i]
        pypar.send(work, i)
        print "[MASTER]: sent work '%f' to node '%d'\n" %(work, i)

    # dispatch the remaining work slots on dynamic load-balancing policy
    # the quicker to do the job, the more jobs it takes
    for work in workList[MPI_numproc:]:
        R, status = pypar.receive(pypar.any_source, return_status=True)
        print "[MASTER]: received result '%f' from node '%d'\n" %(R, status.source)
        numCompleted += 1
        pypar.send(work, status.source)
        print "[MASTER]: sent work '%f' to node '%d'\n" %(work, status.source)

    # all works have been dispatched out
    print "[MASTER]: toDo : %d\n" %numWorks
    print "[MASTER]: done : %d\n" %numCompleted

    # still need to collect the remaining completions
    while (numCompleted < numWorks):
        R, status = pypar.receive(pypar.any_source, return_status=True)
        print "[MASTER]: received (final) result '%f' from node '%d'\n" %(R, status.source)
        numCompleted += 1
        print "[MASTER]: %d completed\n" %numCompleted

    print "[MASTER]: about to terminate slaves\n"

    # tell slaves to stop working
    for i in range(1, MPI_numproc):
        pypar.send(0, i)
        print "[MASTER]: sent (final) work '%f' to node '%d'\n" %(0, i)

    return


def slave():
    print "[SLAVE %d]: I am processor %d of %d on node %s\n" %(MPI_myid, MPI_myid, MPI_numproc, MPI_node)

    while 1:
        R, status = pypar.receive(pypar.any_source, pypar.any_tag, return_status=True)
        print "[SLAVE %d]: received work '%f' with tag '%d' from node '%d'\n"\
              %(MPI_myid, R, status.tag, status.source)

        if (R == 0):
            print "[SLAVE %d]: received termination from node '%d'\n" %(MPI_myid, 0)
            return
        else:
            A = R * R
            pypar.send(A, 0)
            print "[SLAVE %d]: sent result '%f' to node '%d'\n" %(MPI_myid, A, 0)


if __name__ == '__main__':
    MPI_myid = pypar.rank()
    MPI_numproc = pypar.size()
    MPI_node = pypar.Get_processor_name()

    # workList = ('_dummy_', 'a', 'b', 'c')
    workList = n.array([1,2,3,4,5,6,8])
    numWorks = len(workList) - 1

    # FIXME, better control here
    if MPI_numproc > numWorks or MPI_numproc < 2:
        pypar.Finalize()
        if MPI_myid == 0:
            print "ERROR: Number of processors must be in the interval [2,%d].\n" %numWorks
        sys.exit(-1)

    if MPI_myid == 0:
        master()
    else:
        slave()

    pypar.Finalize()
    print "MPI environment finalized.\n"
Pypar (version 2.0alpha) initialised MPI OK with 3 processors
[SLAVE 1]: I am processor 1 of 3 on node n1
[MASTER]: I am processor 0 of 3 on node n1
[SLAVE 2]: I am processor 2 of 3 on node n1
[MASTER]: sent work '2.000000' to node '1'
[MASTER]: sent work '3.000000' to node '2'
[SLAVE 1]: received work '2.000000' with tag '1' from node '0'
[SLAVE 2]: received work '3.000000' with tag '1' from node '0'
[SLAVE 1]: sent result '4.000000' to node '0'
[SLAVE 2]: sent result '9.000000' to node '0'
[MASTER]: received result '4.000000' from node '1'
[MASTER]: sent work '4.000000' to node '1'
[SLAVE 1]: received work '4.000000' with tag '1' from node '0'
[SLAVE 1]: sent result '16.000000' to node '0'
[MASTER]: received result '9.000000' from node '2'
[MASTER]: sent work '5.000000' to node '2'
[SLAVE 2]: received work '5.000000' with tag '1' from node '0'
[SLAVE 2]: sent result '25.000000' to node '0'
[MASTER]: received result '16.000000' from node '1'
[MASTER]: sent work '6.000000' to node '1'
[SLAVE 1]: received work '6.000000' with tag '1' from node '0'
[SLAVE 1]: sent result '36.000000' to node '0'
[MASTER]: received result '25.000000' from node '2'
[MASTER]: sent work '8.000000' to node '2'
[SLAVE 2]: received work '8.000000' with tag '1' from node '0'
[MASTER]: toDo : 6
[MASTER]: done : 4
[SLAVE 2]: sent result '64.000000' to node '0'
[MASTER]: received (final) result '36.000000' from node '1'
[MASTER]: 5 completed
[MASTER]: received (final) result '64.000000' from node '2'
[MASTER]: 6 completed
[MASTER]: about to terminate slaves
[MASTER]: sent (final) work '0.000000' to node '1'
[SLAVE 1]: received work '0.000000' with tag '1' from node '0'
[SLAVE 1]: received termination from node '0'
[MASTER]: sent (final) work '0.000000' to node '2'
[SLAVE 2]: received work '0.000000' with tag '1' from node '0'
[SLAVE 2]: received termination from node '0'
MPI environment finalized.
MPI environment finalized.
MPI environment finalized.
--
VINU VIKRAM
http://iucaa.ernet.in/~vvinuv/
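The tag=1 that the slaves report is consistent with no tag ever being passed in the sends above, so pypar's default tag is used throughout; the updated demo3.py earlier on this page passes tag=WORKTAG and tag=DIETAG explicitly. Below is a small self-contained sketch of the difference, written with the lowercase function names from that updated demo (illustrative only, not a tested patch to test3.py):

# The receiver's status.tag is only meaningful if the sender sets a tag explicitly.
import pypar

WORKTAG = 1
DIETAG = 2

if pypar.rank() == 0:
    for i in range(1, pypar.size()):
        pypar.send('work item', destination=i, tag=WORKTAG)
        pypar.send('#', destination=i, tag=DIETAG)    # termination carries DIETAG
else:
    while True:
        msg, status = pypar.receive(0, tag=pypar.any_tag, return_status=True)
        print '[%d] got tag %d' % (pypar.rank(), status.tag)
        if status.tag == DIETAG:                      # terminate on the tag, not a sentinel value
            break

pypar.finalize()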
|
|
From: Ole N. <ole...@gm...> - 2007-07-06 13:50:58
|
pypar 2.0alpha released on SourceForge.

Pypar is a simple and efficient MPI binding for Python
Copyright (C) 2001-2007 Ole M. Nielsen
-------------------------------------------------------

This is to announce an upgrade of pypar, ver 2.0 alpha - a simple and efficient Python binding to MPI for parallel programming using Python. Version 1.0 was announced on this mailing list on the 7th of February 2002: http://mail.python.org/pipermail/python-announce-list/2002-February/001228.html. Pypar has been used in many projects over the years, but it became clear that relying on Numeric was becoming a liability, and many developers requested an upgrade to numpy.

The update to version 2.0alpha signifies
1: Porting pypar to numpy instead of the discontinued Numeric module
2: Moving pypar to SourceForge: http://sourceforge.net/projects/pypar/
3: Numerous improvements and optimisations added over the past years

Version 2.0alpha has been tested on a few platforms, but I haven't been able to verify that it installs everywhere. The purpose of this post is to encourage existing and new users of pypar to try the new release and to get back to me with questions, feedback and patches that will allow pypar to run on as many platforms as possible.

I am looking forward to hearing from you
Ole M. Nielsen
Canberra, Australia
Ole...@gm...

Background:
-----------
The use of multi-processor computers is becoming increasingly common and they appear in many forms: desktop computers with more than one processor sharing memory, clusters of PCs connected with fast networks known as Beowulf clusters, and high-end supercomputers all make use of parallelism. Even PlayStations have been connected to form computational networks (http://arrakis.ncsa.uiuc.edu/ps2/cluster.php). To efficiently use these machines in a portable way, one must be able to control communication among programs running in parallel. One such standard is the Message Passing Interface (MPI) for inter-processor communication.

Python and MPI:
---------------
There are a number of other Python bindings to MPI that are more comprehensive than pypar (PyMPI, Scientific Python). However, pypar stands out by not requiring the Python interpreter to be modified, by being very easy to install, and by shielding the user from many details involving data types and MPI parameters without sacrificing the full bandwidth provided by the underlying MPI implementation.

Download:
---------
Pypar can be downloaded from http://sourceforge.net/projects/pypar

Credentials:
------------
Pypar was developed by Ole Nielsen at the Australian National University in 2001 for use in the APAC Data Mining Expertise Program and has been published under the GNU General Public License (http://www.gnu.org/licenses/gpl.txt)
Contact: Ole...@gm... <Ole...@an...>

<P><A HREF="http://sourceforge.net/projects/pypar" <http://datamining.anu.edu.au/pypar>>Pypar 2.0alpha</A> - A simple and efficient MPI binding for Python. (07-July-07) |
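For readers new to the package, here is a minimal sketch of what a pypar program looks like, based on the rank/size/send/receive/finalize calls used in the demos on this page (it is not part of the announcement itself):

# Tiny pypar example: rank 0 sends a greeting to every other rank.
import pypar

myid = pypar.rank()                    # id of this process
numproc = pypar.size()                 # number of processes
node = pypar.get_processor_name()

print 'I am process %d of %d on node %s' % (myid, numproc, node)

if myid == 0:
    for i in range(1, numproc):
        pypar.send('Hello, process %d' % i, destination=i)
else:
    msg = pypar.receive(0)
    print 'Process %d received: %s' % (myid, msg)

pypar.finalize()

Saved as, say, hello.py, this would typically be launched with something like "mpirun -np 4 python hello.py", mirroring the way the demos are run.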
|
From: Mandus <ma...@gm...> - 2007-07-06 12:56:41
|
Hi, just want to confirm that the mailing list seems to work. Hopefully I'll get around to testing the new pypar on OSX one of these days as well :)

Åsmund

On 7/6/07, Ole Nielsen <ole...@gm...> wrote:
> Dear all
>
> This is to test the pypar-developers mailing list and is mostly a copy of the
> mail I sent out to selected recipients on the 3rd of July 2007.
>
> As you are aware, Pypar (parallel programming with Python) has been
> migrated from the now obsolete Numeric package to the new numpy module.
> I have verified that the new distribution passes all the tests in
> test_pypar.py. However, I have been advised by Jon that it didn't install on
> AMD64 until he modified setup.py with the attached patch. Also there may be
> issues with installing pypar on Windows.
>
> I would be grateful if you guys could check if this version works for you
> and for any feedback you may have. Once we have made sure that pypar
> installs on most major platforms, we can go for the beta release!
>
> The new version has been named pypar_2.0alpha_28 on SourceForge
> (http://sourceforge.net/projects/pypar).
> The number 2.0alpha is the major release number; the number 28 refers to the
> Subversion revision (pypar was moved to SourceForge's svn repository last
> month. Previous CVS log files are archived in the new repository).
> The specific download of pypar is available at
> http://downloads.sourceforge.net/pypar/pypar-2.0alpha_28.tgz
> and that will provide all you need to get going, including testing the pypar
> package.
>
> There are two more packages on SourceForge. They are
> http://downloads.sourceforge.net/pypar/pypar_demos-2.0alpha_28.tgz (demos)
> and
> http://downloads.sourceforge.net/pypar/pypar_documentation-2.0alpha_28.tgz (DOC,
> FAQ, and LaTeX drafts).
>
> I tested that all the demos work except for the mandelbrot example, which
> relies on PIL and hasn't got the parallel versions anyway. I will fix that
> as soon as we have made sure that the new distribution works. The
> documentation is (typically) very much work in progress.
>
> Thank you very much for your contributions
> Ole Nielsen
> Geoscience Australia

--
Mandus
Take heed unto thyself, and unto the doctrine |
|
From: Ole N. <ole...@gm...> - 2007-07-05 12:33:43
|
Hi all
Here's a patch from Jon for installing pypar on AMD64.
My hope is that we can iron out any installation problems collectively, so
that pypar 2.0 with Numpy support can go beta soon.
Cheers and thanks everyone
Ole
---------- Forwarded message ----------
From: Jon Nilsen <j.k...@us...>
Date: Jul 4, 2007 7:16 AM
Subject: Re: Pypar 2.0alpha (migrated to numpy) released
To: Ole Nielsen <ole...@gm...>
Ole Nielsen wrote:
> Hello everyone
>
> As most of you are aware, Pypar (parallel programming with Python) has
> been migrated from the now obsolete Numeric package
> to the new numpy module. I have verified that the distribution installs
> using setup.py (thanks Prabhu) and passes the tests in test_pypar.py.
> However, I would be grateful if you guys could check if this version
> works for you and for any feedback you may have. If it all works, we can
> go for the beta release!
>
It almost works very well. I ran into some problems when building pypar
with python-2.5 compiled with gcc on an x86_64 linux box. The setup was
in this case hard coded for pgcc. In addition the extra_compile_flag
variable should be a list, not a string.
Otherwise this version works perfectly for me :) Nice work with getting
pypar into sourceforge!
I've attached a patch to pypar-2.0alpha_28.tgz, checking which compiler
is used and setting the flags accordingly. Should I upload the changes
to svn myself or should I hold it until more feedback is sent?
Cheers,
Jon
--
Jon Kristian Nilsen
Position: PhD Student, particle physics
Office: Department of Physics, University of Oslo
P.b. 1048 Blindern
N-0316 Oslo, Norway
Phone: +4722856434 / +4740203659
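As a rough illustration of the kind of change Jon describes (the actual patch is attached to his mail and not reproduced here), a distutils setup.py can pick compile flags based on the compiler in use and must pass extra_compile_args as a list rather than a string. The module names and flags below are assumptions for illustration only, not pypar's real setup.py:

# Hypothetical sketch of compiler-dependent flags in a setup.py
# (not the actual pypar patch; extension name, sources and flags are illustrative).
import os
from distutils.core import setup, Extension
from distutils import sysconfig

cc = os.environ.get('CC', sysconfig.get_config_var('CC') or '')

if 'pgcc' in cc:
    extra_compile_args = ['-fastsse']        # PGI-style flag (illustrative)
else:
    extra_compile_args = ['-O3', '-fPIC']    # gcc-style flags (illustrative)

# extra_compile_args must be a list of strings, not a single string
setup(name='mpiext_demo',
      ext_modules=[Extension('mpiext_demo',
                             sources=['mpiext_demo.c'],
                             extra_compile_args=extra_compile_args)])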
|
|
From: Ole N. <ole...@gm...> - 2007-07-03 12:17:43
|
Hello everyone

As most of you are aware, Pypar (parallel programming with Python) has been migrated from the now obsolete Numeric package to the new numpy module. I have verified that the distribution installs using setup.py (thanks Prabhu) and passes the tests in test_pypar.py. However, I would be grateful if you guys could check if this version works for you and for any feedback you may have. If it all works, we can go for the beta release!

The new version has been named pypar_2.0alpha_28 on SourceForge (http://sourceforge.net/projects/pypar). The number 2.0alpha is the major release number; the number 28 refers to the Subversion revision (pypar was moved to SourceForge's svn repository last month. Previous CVS log files are archived in the new repository). The specific download of pypar is available at http://downloads.sourceforge.net/pypar/pypar-2.0alpha_28.tgz and that will provide all you need to get going, including testing the pypar package.

There are two more packages on SourceForge. They are http://downloads.sourceforge.net/pypar/pypar_demos-2.0alpha_28.tgz (demos) and http://downloads.sourceforge.net/pypar/pypar_documentation-2.0alpha_28.tgz (DOC, FAQ, and LaTeX drafts).

I tested that all the demos work except for the mandelbrot example, which relies on PIL and hasn't got the parallel versions anyway. I will fix that as soon as we have made sure that the new distribution works. The documentation is (typically) very much work in progress.

Thank you very much for your contributions
Ole Nielsen
Geoscience Australia |
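Since the point of this release is numpy support, the following is a sketch of the sort of numpy round trip one might use to check an installation beyond test_pypar.py; it is not one of the official demos, and the array size and dtype are arbitrary:

# Quick sanity check that numpy arrays travel through pypar intact.
import numpy
import pypar

myid = pypar.rank()

if pypar.size() < 2:
    print 'Run with at least 2 processes, e.g. mpirun -np 2 python check.py'
elif myid == 0:
    a = numpy.arange(10, dtype='d')
    pypar.send(a, destination=1)           # ship the array to rank 1
    b = pypar.receive(1)                   # get the doubled array back
    print 'Round trip OK:', numpy.allclose(a, b * 0.5)
elif myid == 1:
    a = pypar.receive(0)
    pypar.send(a * 2.0, destination=0)     # return a transformed copy

pypar.finalize()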