pympi-users Mailing List for MPI Python (Page 4)
Status: Alpha
Brought to you by:
patmiller
Archive activity by month: 2003: Mar (1); 2004: May (1), Oct (2), Nov (3), Dec (8); 2005: Jan (9), Feb (4), Mar (3), Apr (1), Jun (2), Jul (16), Aug (11), Sep (10); 2006: Feb (4), Jun (5), Jul (7), Aug (2), Nov (7), Dec (4); 2007: Mar (1), Apr (4), May (4), Jun (3); 2008: Oct (2); 2009: Feb (2)
From: Julian C. <rjc...@cs...> - 2005-07-22 15:30:41
|
Pat,

Another question re something I can't seem to get right: if CancelArray is a local list (each of length 5 here), I would expect to be able to gather them onto the root using:

>>> GCancelArray = mpi.gather(CancelArray)

However I always get errors. I looked through the docs and can't see anything wrong. See below.

tks again
Julian Cook

>>> mpi.synchronizedWrite(mpi.rank,CancelArray,"\n")
0 [-789.73324828817942, -464.6985882238028, 16.309153244928655, -266.37351222857797, 18.729736717962286]
1 [22.883099073570527, -13.504610959914903, -29.992427874816066, 66.603246745603798, 17.72761679717194]
..
..
8 [-447.53648850008159, -1918.500086573147, 17.697240048083117, -30.830132520298719, 19.180049636673306]
9 [-713.58277359050896, -15.962444427288318, -305.60046463383247, -207.97351943615666, -132.35710231581811]
>>> GCancelArray = mpi.gather(CancelArray)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
EOFError
>>> GCancelArray = mpi.gather([CancelArray])
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
EOFError
|
From: Julian C. <rjc...@cs...> - 2005-07-22 15:07:31
|
Where is the best place to look up the complete list of pyMPI routines and arguments? I realised that I've been missing some like mpi.synchronizedWrite, which may not be in the C version of MPI. Do I have to look in the source code, or is there somewhere else I can look?

Tks
Julian Cook

The majority of pyMPI routines are listed here, without args: mpi.allgather, mpi.allreduce, mpi.barrier, mpi.bcast, mpi.cancel, mpi.cart_create, mpi.comm_create, mpi.comm_dup, mpi.comm_rank, mpi.comm_size, mpi.communicator, mpi.deltaT, mpi.finalize, mpi.finalized, mpi.free, mpi.gather, mpi.initialized, mpi.irecv, mpi.isend, mpi.map, mpi.mapserver, mpi.mapstats, mpi.name, mpi.procs, mpi.rank, mpi.recv, mpi.reduce, mpi.scan, mpi.scatter, mpi.send, mpi.sendrecv, mpi.size, mpi.synchronizeQueuedOutput, mpi.synchronizedWrite, mpi.test, mpi.test_cancelled, mpi.tick, mpi.version, mpi.wait, mpi.waitall, mpi.waitany, mpi.wtick, mpi.wtime, mpi.WORLD, mpi.COMM_WORLD, mpi.COMM_NULL, mpi.BAND, etc.
|
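[Editor's sketch] Since the mpi module is an ordinary Python module, its own introspection is one way to answer the question above when the manual lags behind. A minimal sketch, run inside an interactive pyMPI session; nothing is assumed beyond the attribute names listed in the post:

import mpi

# All public names exported by the module (routines, constants such as mpi.SUM,
# the communicator type, ...)
names = [n for n in dir(mpi) if not n.startswith("_")]
print names

# Per-routine documentation, where the C extension supplies a docstring
help(mpi.gather)
help(mpi.synchronizedWrite)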
From: Julian C. <rjc...@cs...> - 2005-07-22 01:15:53
|
This is just a post to document something Pat emailed to me.

1. You call a parallel function and get back a double called FinalVal, e.g.

>>> FinalVal = McTermIncrAccCall(dictInputs)

2. FinalVal is a local variable on each CPU. You can view them all with:

>>> mpi.synchronizedWrite( mpi.rank, FinalVal, "\n" )
0 -291.326756623
1 12.493514467
2 -184.882043854
3 -508.240980905
4 -300.15308712
5 -240.614966934
6 -71.3924387702
7 -113.294540179
8 -462.743023119
9 -269.701236158

3. What you really want to do is concatenate them into a list on the root processor. To do this you use:

>>> allValues = mpi.gather([FinalVal])

allValues is now a list containing all the local values of FinalVal. "allValues" only exists on the machine whose mpi.rank == 0. Don't ask me why you need to address FinalVal as [FinalVal], possibly a one-element list?, but it definitely works.

Pat Miller: "this is equivalent to mpi.gather([finalVal],0) and there is an allgather equivalent too."

Julian Cook
|
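[Editor's sketch] A minimal, runnable version of the pattern documented above, using only the calls shown in this thread; the per-rank value is a made-up stand-in since McTermIncrAccCall is not shown here:

import mpi

# Hypothetical stand-in for the per-rank result of a parallel computation
local_value = float(mpi.rank) * 1.5

# Show every rank's local value in rank order
mpi.synchronizedWrite(mpi.rank, local_value, "\n")

# Collective call: every rank must call gather; wrapping the scalar in a
# one-element list makes the gathered result a flat list of scalars.
all_values = mpi.gather([local_value])

if mpi.rank == 0:
    # Per the post, only the root holds the concatenated list
    print "gathered:", all_values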
From: Julian C. <rjc...@cs...> - 2005-07-21 19:49:03
|
Tks Pat

I tried it the way you suggested below. It runs, but seems to run way too quickly, even over 10 processors. Maybe I'm just a pessimist:

>>> FinalVal, CancelArray = McTermIncrAccCall(dictInputs)
>>> mpi.synchronizedWrite(mpi.rank,FinalVal,"\n")
0 -291.326756623
1 12.493514467
2 -184.882043854
3 -508.240980905
4 -300.15308712
5 -240.614966934
6 -71.3924387702
7 -113.294540179
8 -462.743023119
9 -269.701236158

P.S. I FINALLY noticed that mpi.synchronizedWrite and mpi.synchronizeQueuedOutput('/dev/null') exist. I had always struggled with too much console output. I guess I never bothered to look at the entire API....

Is there any way to gather FinalVal into a list on the root?

Julian

-----Original Message-----
From: pym...@li... [mailto:pym...@li...] On Behalf Of Pat Miller
Sent: Thursday, July 21, 2005 3:00 PM
Cc: pym...@li...
Subject: Re: [Pympi-users] Mpi and mpi.allreduce inside a function

I tend to write things the second way

def localComputation(...):
    ...
localValue = localComputation()
globalValue = mpi.allreduce(localValue, mpi.SUM)

........[snip]
|
From: Pat M. <pat...@ll...> - 2005-07-21 18:59:57
|
I tend to write things the second way:

def localComputation(...):
    ...

localValue = localComputation()
globalValue = mpi.allreduce(localValue)

I do this because it is clearer what is done locally and it simplifies the collection. But, it is better encapsulation to do something like:

try:
    import mpi
except ImportError:
    mpi = None
...
def Computation(...):
    value = ...
    if mpi is not None:
        value = mpi.allreduce(value)
    return value

That way, you can call Computation even if pyMPI isn't running.

Pat

Julian wrote:
> Would it have to be something like:
>
> GSum = mpi.allreduce(CalculateMcPrice(nSims,nSteps, Spot, Strike, Drift , vSqrdt ),mpi.SUM)
> GPrice = GSum/nSims
>
> or simply?
>
> Local_price = CalculateMcPrice(nSims,nSteps, Spot, Strike, Drift , vSqrdt )
> GSum = mpi.allreduce(Local_price, mpi.SUM)
> GPrice = GSum/nSims

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
Access to power must be confined to those who are not in love with it. -- Plato
|
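[Editor's sketch] A self-contained version of the encapsulation pattern Pat describes above. Only the try/except import and the conditional allreduce come from the post; the function name and sample data are illustrative:

# Falls back to serial execution when pyMPI is not running.
try:
    import mpi
except ImportError:
    mpi = None

def total_payoff(local_payoffs):
    """Sum locally computed payoffs; reduce across ranks only if running under MPI."""
    value = sum(local_payoffs)                   # purely local work
    if mpi is not None:
        value = mpi.allreduce(value, mpi.SUM)    # global sum, same answer on every rank
    return value

# Works both as "python script.py" (serial) and "mpirun -np N pyMPI script.py"
print total_payoff([1.0, 2.5, 3.25])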
From: Julian C. <rjc...@cs...> - 2005-07-21 18:13:44
|
I'm confused about the semantics of using mpi inside a python function. The inline [non-function] code fragment might look like this (ignoring random number initialisation):

------------------------------------------------------------------
# start simulation on 10 cpu's, where each cpu only does 1/10 of simulations
MCprice = 0.0
StPayoff = 0.0
Sum = 0.0
for i in range(1,nSims/mpi.size):
    St = Spot
    for j in range(1,nSteps):
        MCprice = ltqnorm(rmpi.random())
        St = St * math.exp(Drift + vSqrdt * MCprice)
    StPayoff = CalcCallPayoff(St,Strike)
    Sum = Sum + StPayoff
GSum = mpi.allreduce(Sum,mpi.SUM)
if mpi.rank == 0:
    price = GSum/nSims
    pv = math.exp(-(rf*q/yr2))
    price = pv * price
------------------------------------------------------------------

The next obvious thing is to put it into a function, e.g.

def CalculateMcPrice(nSims,nSteps, Spot, Strike, Drift , vSqrdt ):
    # inside the function, same code exists, except at the end you have the return statement
    ...
    return price

but in mpi where is the mpi.allreduce statement placed? Would it have to be something like:

GSum = mpi.allreduce(CalculateMcPrice(nSims,nSteps, Spot, Strike, Drift , vSqrdt ),mpi.SUM)
GPrice = GSum/nSims

or simply?

Local_price = CalculateMcPrice(nSims,nSteps, Spot, Strike, Drift , vSqrdt )
GSum = mpi.allreduce(Local_price, mpi.SUM)
GPrice = GSum/nSims

tks
Julian Cook
|
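[Editor's sketch] One possible shape for the function version, in the spirit of Pat's answer above: keep the collective call outside the function and have the function return the local payoff sum, so the reduction and discounting happen once at the caller. The numeric parameters are illustrative, random.gauss stands in for the poster's ltqnorm(rmpi.random()), and max(St - Strike, 0.0) stands in for CalcCallPayoff:

import math
import random
import mpi

def local_payoff_sum(nSims, nSteps, Spot, Strike, Drift, vSqrdt):
    """Sum of simulated call payoffs for this rank's share of the paths (no MPI inside)."""
    total = 0.0
    for i in range(1, nSims / mpi.size):
        St = Spot
        for j in range(1, nSteps):
            # stand-in for ltqnorm(rmpi.random()) from the post above
            St = St * math.exp(Drift + vSqrdt * random.gauss(0.0, 1.0))
        total = total + max(St - Strike, 0.0)    # call payoff, in place of CalcCallPayoff
    return total

# The collective call stays at the top level and is made by every rank
nSims, nSteps = 100000, 250
Sum = local_payoff_sum(nSims, nSteps, 100.0, 100.0, 0.0002, 0.01)
GSum = mpi.allreduce(Sum, mpi.SUM)
if mpi.rank == 0:
    print "average (undiscounted) payoff:", GSum / nSims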
From: Julian C. <rjc...@cs...> - 2005-07-12 16:42:29
|
FYI I ran a 10-way Monte Carlo simulation (option pricing) on a Sunfire 1280. I also compared it to the one-node case. If you are interested, I used mpich-1.2.6 with pyMPI 2.1b4 & Python-2.4 on Solaris. I used the following command lines:

time mpirun -np 10 pyMPI StartupMPI.py  =  148.73u 0.73s 2:31.52 98.6%
time mpirun -np 1 pyMPI StartupMPI.py   =  15.41u 1.25s 0:23.02 72.3%

When I looked at the times, the actual CPU speedup looks like an impressive 9.65x (148.73 / 15.41) over the single-cpu run. On the other hand the elapsed time 2:31.52 / 0:23.02 is only a 6.5x speedup. I would probably have to use LAM to remove the startup lag (if that's the difference), but I only have MPICH installed at the moment. Anyway, it's a lot faster. If anyone is interested in seeing the python code let me know.

Julian Cook
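[Editor's sketch] For timing comparisons like this, the wall-clock time of just the compute section can be measured inside the script with mpi.wtime and mpi.barrier (both appear in the API summary earlier on this page), which separates startup lag from the simulation itself. The loop below is only a stand-in workload for whatever StartupMPI.py actually runs:

import mpi

mpi.barrier()                 # line all ranks up before starting the clock
start = mpi.wtime()

total = 0.0                   # stand-in workload for the Monte Carlo kernel
for i in range(1000000):
    total += i * 1e-6

mpi.barrier()                 # wait for the slowest rank before stopping the clock
elapsed = mpi.wtime() - start
if mpi.rank == 0:
    print "compute-only elapsed seconds:", elapsed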
From: Pat M. <pat...@ll...> - 2005-06-10 17:49:35
|
I've answered build questions below, but thought I would add a bit on pyMPI in the classroom first...

I've used pyMPI in a classroom environment.... I think it's a great way to learn about MPI without the hassle of doing it with C or Fortran. I was able to concentrate on the parallelism instead of burying the students in the opaque API requirements [but Prof Miller, what's a communicator? what's a tag? why do I have to specify the type?]

This is particularly spiffy since pyMPI hides a lot of spurious arguments, so you can introduce point-to-point with something as simple as:

% mpirun -np 2 pyMPI
Python 2.4 (pyMPI 2.4b1) on linux2
Type "help", "copyright", "credits" or "license" for more information.
Python 2.4 (pyMPI 2.4b1) on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from mpi import *
>>> if rank == 0:
...    send("hello",1)
... else:
...    print recv()
...
('hello', <MPI_Status Source="0" Tag="0" Error="0" MPI: no errors>)
>>>

The good news is that students (after using only pyMPI) were able to transfer all their knowledge back to C and FORTRAN parallel programming.

Cheers!
Pat

--------- %< --------------------------------------------------

The basic requirements for pyMPI are (not surprisingly ;-) ) Python and an MPI C compiler (or appropriate includes and libraries). I think there is a debian package for LAM MPI (see http://packages.debian.org/testing/source/lam ). This installs several things, but the only ones you likely need to know about are lamboot, mpirun, and mpicc.

Now make sure the Python was installed with development options. You can run this command to find out python's install root...

% python -c "from distutils.sysconfig import parse_makefile, get_makefile_filename; print parse_makefile(get_makefile_filename())['prefix']"
/usr/local

Now look to see if the install things are there... (the below assumes python2.4, the path is similar for 2.1, 2.2, 2.3).

% ls /usr/local/include/python2.4/Python.h
/usr/local/include/python2.4/Python.h
% ls /usr/local/lib/python2.4/config
Makefile       Setup.config   config.c       install-sh*    makesetup*
Setup          Setup.local    config.c.in    libpython2.4.a python.o

If you don't see those files, then you may need to build Python from scratch.

Now to build a version of pyMPI that uses those tools.... First, use which to make sure you are using the compiler and Python you think you are:

% which mpicc
/usr/local/bin/mpicc
% which python
/usr/local/bin/python
% cd pyMPI-2.4b1
% env CC=/usr/local/bin/mpicc ./configure --prefix=/where/ever --with-python=/usr/local/bin/python
% make
% make install
% make check

Using the env CC=xyz ensures that the configure will pick the parallel mpi compiler you want [similarly the --with-python]. If you don't pick an installation prefix, it is supposed to default to the same place python is installed. If you download the latest and greatest directly from CVS, you need to do

% ./boot

before you do the

% ./configure

[this bootstraps the autoconf environment].

---------%<----------------------------------

To run, you should do

% lamboot
% mpirun -np 2 pyMPI myscript.py

If you want to run across a cluster, you set up a machines file in bhost format (do a % man bhost for info):

% cat machines
tux132
tux147

You also need to either be running rsh or set up ssh keys for passwordless login.

% ssh-keygen -t dsa
<blah blah blah>
% cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

If you have any problems, let me know.

Cheers,
Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
In order that people may be happy in their work, these three things are needed: they must be fit for it; they must not do too much of it; and they must have a sense of success in it. -- John Ruskin
|
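[Editor's sketch] A script version of the classroom point-to-point example from Pat's interactive session above, for non-interactive use; it uses only mpi.rank, mpi.send and mpi.recv, and the filename is illustrative:

# hello_p2p.py -- run with: mpirun -np 2 pyMPI hello_p2p.py
import mpi

if mpi.rank == 0:
    # Rank 0 sends a greeting to rank 1
    mpi.send("hello", 1)
elif mpi.rank == 1:
    # recv() returns (message, status), as shown in the interactive session above
    msg, status = mpi.recv()
    print "rank", mpi.rank, "received:", msg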
From: Josef B. <jos...@wa...> - 2005-06-10 16:06:33
|
I have a cluster of machines running debian. I am not the system admin but he'll help with whatever I want to do. I want to run pympi for a course in September. My question is: "What needs to be installed and in what order?". I assume we need to install some kind of MPI system first, then pympi. Any help would be greatly appreciated.

Josef M Breutzmann

* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
All the people I used to know are an illusion to me now,
some are mathematicians, some are carpenter's wives,
don't know how it all got started,
don't know what they're doin' with their lives -- Dylan
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Dr. Josef M. Breutzmann * Wartburg College CompSci/Math
jos...@wa... * http://mcsp.wartburg.edu/breutzma
* * * * * * * * * * * * * * * * * * * * * * * * * * * *
|
From: Mike S. <st...@gm...> - 2005-04-05 21:00:26
|
Hello All,

I'm wondering if pyMPI has been successfully tested running with other parallel executables. Currently I've been trying to run a compiled fortran parallel program concurrently with a python program and have been unable to get both programs to initialize. When I start multiple python executables everything appears to run fine. However, if I run python with the fortran program I only see startup and initialization info from the fortran program. I'm probably missing some setup step. Has anyone got pyMPI working in a multi-executable mpi system? (i.e. mpi can start multiple executables, each with its own pool of processors).

Thanks for any info,
-Mike
|
From: Pat M. <pat...@ll...> - 2005-03-07 16:11:32
|
The softload stuff is not built by default because it is hard to make it work uniformly across OS's and MPI architectures. So, if you want it, you have to build it in a separate step. I don't consider this feature "working" as yet. I need to update the pyMPI_linker to fix some compiler issues.

In theory, it should work like this.

% python softload_setup.py install --install-lib=.
make all-am
building 'mpi' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -I/home/pjmiller/pub/Linux/include/python2.4 -c pyMPI_softload.c -o build/temp.linux-i686-2.4/pyMPI_softload.o
gcc -pthread -shared build/temp.linux-i686-2.4/pyMPI_softload.o -L/home/pjmiller/sourceforge/pyMPI -lpyMPI -o build/lib.linux-i686-2.4/mpi.so
running install_lib
copying build/lib.linux-i686-2.4/mpi.so -> .
%
% # Invoke normal, sequential python
% python
Python 2.4 (#1, Dec 1 2004, 08:43:03)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mpi
>>> mpi.size
1
>>>
% mpirun -np 3 python -c 'import mpi; print mpi.rank'
0
1
2
%

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
Access to power must be confined to those who are not in love with it. -- Plato
|
From: Julian C. <rjc...@cs...> - 2005-03-05 16:51:24
|
Can someone explain what this file (softload_setup.py) does? It looks like it's related to the soft load feature, i.e. starting python instead of pyMPI. But then there is also pyMPI_softload.c. Can someone explain how this feature works and how to build it, since it did not get created during the build process?

tks
Julian Cook
|
From: Julian C. <rjc...@cs...> - 2005-03-05 14:05:10
|
The unwanted extra new line is now gone. I was unable to get all the latest cvs updates (we don't have cvs installed on the Solaris machine). However, hand-patching all the new pympi files from sourceforge and recompiling solved the problem. The fix appeared to be in pympi_macros.h, according to the comments.
|
From: Julian C. <rjc...@cs...> - 2005-02-19 14:28:10
|
Has anyone traced the cause or otherwise found a fix for the unwanted New-Line (or linefeed) at the >>> interactive prompt? When you are in interactive mode, for lengthy periods it eats up screen at 2x the rate, because of the extra line feed. regards Julian |
From: Mike S. <st...@gm...> - 2005-02-03 05:48:17
|
> Alas, more underdocumented features of pyMPI come to light.
>
> That functionality exists in the mpi module as the "communicator"
> type object. This is the same object used to create and
> expose the internal MPI communicators.

Well, it's a good feature even if it is undocumented. It's a lot nicer to use something in pyMPI built for that specific purpose than whatever unholy hack I was going to try next.

I guess documentation for pyMPI needs some work. Is there source for that pyMPI pdf? We could try to start renovating it a bit. Just wondering if there is a kernel of docs to try to add to.

Thanks a million Pat!
~Mike
|
From: Pat M. <pat...@ll...> - 2005-02-03 00:25:30
|
> However, the values that are supposed to be communicators show up as
> python integers. While these may be completely usable for passing to
> extension codes they are worthless for actually doing anything in
> python.

Alas, more underdocumented features of pyMPI come to light.

That functionality exists in the mpi module as the "communicator" type object. This is the same object used to create and expose the internal MPI communicators.

>>> h = myfortran.getHandle()
>>> print h
141259752
>>> c = mpi.communicator(h)
<communicator object at 0xb7532500>
>>> c.rank
14

Give it a bogus handle and you may segfault in LAM or get an "Invalid communicator" exception in MPICH.

* * * * *

For a bit more info, see the help file [below]. The idea of persistence of communicators is just to help Python decide whether to delete the MPI communicator when the Python communicator is freed. The default is to NOT delete the communicator (that is, do NOT call MPI_Comm_free() on it).

>>> import mpi
>>> help(mpi.communicator)
Help on class communicator in module __builtin__:

class communicator(object)
 |  Create communicator object from communicator or handle
 |
 |  communicator(communicator=COMM_NULL, # Communicator to wrap
 |               persistent=1)           # If false, release MPI comm
 |  --> <communicator instance>
 |
 |  Build instance of a communicator interface. The persistent flag (by
 |  default on) means that Python WILL NOT release the MPI communicator on
 |  delete.
 |
 |  >>> null = communicator()        # returns a (not the) NULL communicator
 |  >>> c = communicator(WORLD)      # a new interface to WORLD communicator
 |  >>> my_world = communicator(handle,0)  # Python version of handle
 |                                         # MPI_Comm_free() will be called
 ...

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
What hunger is in relation to food, zest is in relation to life. -- Bertrand Russell, philosopher, mathematician, and author (1872-1970)
|
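[Editor's sketch] The wrapping step Pat describes, written out as a script fragment. The f2py-wrapped module and its getHandle routine come from Pat's interactive session above and stand in for whatever wrapped code actually returns the integer handle; only mpi.communicator(handle) and the persistent flag come from the post and its help text:

import mpi
import myfortran   # hypothetical f2py-wrapped module returning an integer communicator handle

handle = myfortran.getHandle()     # arrives as a plain Python int

# Wrap the integer handle; the default persistent=1 means pyMPI will NOT
# call MPI_Comm_free() on it when the Python object goes away.
comm = mpi.communicator(handle)

print "rank in wrapped communicator:", comm.rank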
From: Mike S. <st...@gm...> - 2005-02-02 22:32:06
|
Hi All,

I think this is a question that Pat will have to answer, but perhaps some of you have seen it or have the same question:

I have a fortran module wrapped using f2py. The module in this case is MPH, the Multi Program Handshaking utility. It is mostly used to help set up the MPI environment for multiple mpi executables. I need to call some of these library routines, some of which are supposed to return communicators. However, the values that are supposed to be communicators show up as python integers. While these may be completely usable for passing to extension codes, they are worthless for actually doing anything in python. I've noticed that pyMPI is capable of handling the opposite, though: if I pass a pyMPI communicator object to wrapped code expecting a fortran communicator, the wrapped code still works.

Is there any way for me to take a communicator from fortran and "promote" it to a python communicator so that I can get information like the size of the communicator? If there is no pyMPI way to do it at the moment, how else could I get this information? Write wrappers around a few MPI functions like MPI_Comm_rank and MPI_Comm_size in fortran or c? Then just call those with the fortran communicator to get the info I need?

Thanks in advance,
~Mike
|
From: Mike S. <st...@gm...> - 2005-01-29 09:05:52
|
Great posts Julian! Wish I could have replied to your questions before you did. :-)

I have to second your ideas for future pyMPI plans. A stable release should be first priority. Documentation would also be great. In fact, we've got the beginnings of a simple webpage now, and it would be excellent to begin posting additional pyMPI examples there. If you, or anyone else, has code that they would like to submit as examples of pyMPI usage, just e-mail them to me and I will process and post them on the site.

~Mike

On Fri, 28 Jan 2005 20:18:05 -0800, pym...@li... <pym...@li...> wrote:
> Today's Topics:
>
>   1. pyMPI tested with python 2.4 on Sun (Julian Cook)
>   2. What I like about pyMPI is (Julian Cook)
>   3. Correct syntax for mpi.gather to concatenate items into a global list? (Julian Cook)
>   4. Fixed gather syntax where pyMPI hangs (Julian Cook)
>
> ........[snip]
|
From: Julian C. <rjc...@cs...> - 2005-01-28 22:50:29
|
I fixed the previous post by removing the "if mpi.rank == 0:"; I was thinking that the global (gathering) code should only be executed by the master. This code works (using 2 instances of pyMPI on one server):

>>> import mpi,os
>>> machine = os.uname()
>>> machine_name = []
>>> machine_name.append(machine[1])
# now gather
>>> global_name = mpi.gather(machine_name)
['ale']
>>> global_name
[ 'ale' , 'ale' ]

The only odd part is that you get output from ">>> global_name = mpi.gather(machine_name)", which is not really correct..

Julian Cook
|
From: Julian C. <rjc...@cs...> - 2005-01-28 22:26:20
|
Can someone tell me what is wrong with the following short script. I'm trying to gather an item from a local list in each node into the global list.

>>> import mpi,os
>>> machine = os.uname()
>>> machines[1],mpi.rank
('ale', 0)
('ale', 1)
('ale', 2)
('ale', 3)
# machine[1] is the node name
>>> if mpi.rank ==0:
...     gr = mpi.gather(machines[1])  # just get the node name

# at this point pyMPI hangs. I've tried declaring gr as gr = [] everywhere, just in the master, and also tried not declaring it. pyMPI always hangs. Usually you cannot assign items to a list, you have to append them, but possibly gather does that under the hood anyway..
|
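[Editor's sketch] The fix Julian posts in the follow-up message above (removing the "if mpi.rank == 0:" guard) works because gather is a collective call: every rank has to make it, and only the result is root-only. A small sketch of that shape, using only the calls from these two posts; the node name is whatever os.uname() reports:

import mpi, os

machine_name = [os.uname()[1]]     # one-element list holding this node's name

# Every rank calls the collective. Putting it inside "if mpi.rank == 0:" means
# the other ranks never enter the call, so the root hangs waiting for them.
global_names = mpi.gather(machine_name)

if mpi.rank == 0:
    # Only the root's result is meaningful, per the posts above
    print global_names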
From: Julian C. <rjc...@cs...> - 2005-01-28 20:18:31
|
[Pat Miller's] current priority list is:

1) Direct [non-pickled] communication for Numarray and Numeric arrays
2) A "remote-communicating-objects" layer [one-sided, interrupting remote method calls]
3) SIMD support
---------------

I already sent this reply to Pat at the admin list. For the record, what I like about pyMPI is:

1. The close correlation between the MPI calls and the python equivalents. It's not particularly OO or pythonic, but it makes life simple. It means you don't have to re-learn the API when switching from C MPI to pyMPI. This would be a big selling point to a current C MPI user. You just have to point out how much faster they can prototype with pyMPI.
2. The interactive ability is great. You never get the code right first time, so being able to see, investigate and correct the intermediate functions as you go is very productive. Also, research is exploratory by nature. You get an answer, think about it, then try something else.
3. General ease of use. I think if people saw how easy it is to go parallel in pyMPI, it would be a big hit.

In terms of priorities, I don't disagree with the above (Pat Miller's) priority list, since those items will probably increase throughput substantially. What I think would help in the short term is:

1. A release that can be considered the current stable release. This is important, because that's the one that new users should get pointed to. This might be difficult since it involves maintaining 2 branches of the code. (He did say that a stable release is a good idea, to match python 2.4.)
2. A documentation effort over the next 3-6 months. We could all help here by coming up with examples from our own fields. I actually work in Finance, so the examples I come up with would likely be very different from those of other current users. I noticed that mocapy exists. That should be highlighted too.
3. Get everyone to write in and talk about what they are doing. Not only is it interesting, but a vibrant user community attracts new users like a magnet.

I actually had pyMPI running at home, but I recently moved house, so I need to set up my network again. I was on version 2.1b4. I'm getting pyMPI properly installed at work (see other post), where we have 100+ sparcstations and a bunch of servers. We also use [in-house developed] parallel processing at work in our product for speeding up scenario analysis of derivative trades.

regards

Julian Cook
|
From: Julian C. <rjc...@cs...> - 2005-01-28 18:25:26
|
Long post regarding building pyMPI on Solaris. Last week Pat Miller (via email) suggested aiming for a stable release of pyMPI to coincide with python 2.4. This would allow new users to install a version known to work with 2.4 across a variety of platforms. To this end I decided to test pyMPI against 2.4 on Solaris.

Here's the summary in case anyone is interested:

This is for building pyMPI 2.1 with python 2.4 (30Nov04) on Solaris 2.8. Everything was installed into my home directories, not to /usr/bin etc.

1. Building Python 2.4 was easily the hardest part. I could not get configure, make to run with gcc, so I used the Sun CC compiler instead. I've had zero problems with gcc and previous versions of python on Sun (except for TCL/TK, which gets me every time). If you have similar problems try ./configure --without-gcc. Also be aware that you need to separately change the linker command as well to cc, otherwise it will attempt to use g++.

2. I used mpich 1.2.6 for the mpi part. This built very easily, with no problems that I remember. I also used the Sun CC compiler for this. I tested it with "mpirun -np 4 cpi" (cpi is in examples/basic).

3. I downloaded pyMPI 2.1b4 and also configured it using ./configure --without-gcc. I had some problems with the pyMPI configure as follows:
a) I had not done make install for python, so the python-2.4/lib dir did not exist (used in $PYLIBDIR for checking if site-packages exists). Fixed after installing python properly.
b) pyMPI configure expected to find config.c in the main python-2.4 dir. The python configure script actually moved this file to the Modules/ dir. Had to copy config.c back from Modules/ into the main python dir.
c) Had to move pyconfig.h into Include/, because that's where Python.h etc. needed it for compilation during the configure test of header files. The actual error from config.log, for the record, is:

#include <Python.h>
configure:5259: result: no
configure:5263: checking Python.h presence
configure:5270: /home/jcook/mpich/mpich-1.2.6/bin/mpicc -E -w -I/home/jcook/python/Python-2.4/Include conftest.c
"/home/jcook/python/Python-2.4/Include/Python.h", line 8: cannot find include file: "pyconfig.h"
"/home/jcook/python/Python-2.4/Include/pyport.h", line 4: cannot find include file: "pyconfig.h"

4. The actual make was error free. The only step that appears to be missing is the setup.py step, i.e. "python setup.py build". I could not see the setup.py file anywhere? There is a softload_setup.py file, but I don't know what it does.

5. I tested it on a 4 cpu server. mpi had autogenerated a machines.solaris file for me. Everything ran fine for non-interactive tests, e.g.

ale{jcook}60% mpirun -np 4 /home/jcook/python/Python-2.4/bin/pyMPI pyPI.py
Try computing with 1 rectangles CPU's: 4
0.0
0.0
0.0
Error is 3.2
Try computing with 2 rectangles CPU's: 4
... etc etc

6. To run interactive, you have to configure pyMPI using "./configure --without-gcc --with-isatty". This will allow you to get to the python prompt. Notice in this release that you get an unintended extra new line, which is a known bug:

ale{jcook}66% mpirun -np 4 /home/jcook/python/Python-2.4/bin/pyMPI
Python 2.4 (pyMPI 2.1b4) on sunos5
Type "help", "copyright", "credits" or "license" for more information.
Python 2.4 (pyMPI 2.1b4) on sunos5
Type "help", "copyright", "credits" or "license" for more information.
Python 2.4 (pyMPI 2.1b4) on sunos5
Type "help", "copyright", "credits" or "license" for more information.
Python 2.4 (pyMPI 2.1b4) on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import mpi
>>> print mpi.rank,mpi.size
0 4
1 4
3 4
2 4
>>>

The only irritating issue here is the 4x credits, which I wasn't expecting. This would obviously be a larger problem when you go to 50 nodes. If anyone knows a fix for this let me know.

Julian Cook
|
From: Pat M. <pat...@ll...> - 2005-01-24 16:33:51
|
Peter Maxwell wrote:
> I trust you'll get replies from real PyMPI users too. All I can do is list the reasons why I wrote PyxMPI instead. I wanted:

Thanks for the feedback!

> - A simple OO interface: communicator.split(a); communicator.sum(A).

While pyMPI's is OO under the hood, the target was C/FORTRAN guys, so OO gets second shrift to the procedural interface. It's nice to see a purer one written from scratch.

> - Summing over a distributed Numeric array without pickling.

I concentrated on portability, but this is too important to leave out much longer. My internal customer did very little of this, so it wasn't important enough to put in up front. My external audience is interested though ;-)

> - An unpatched python. Maybe if I was using MPICH rather than LAM I would have
> accepted that it isn't possible in general, but I was used to PyPar, which works fine
> (on LAM anyway) without patching the interpreter.

I've put in an experimental "mpi.so", but for full portability I will need to support the external binary. One of our large systems refuses to do an MPI launch on code that is not hard-linked against the communications library [sigh].

> - Preferably a GPL compatible license - your "Notification of Commercial Use"
> clause seems to rule that out.

Well, I was happy to get any Open Source license out of my employer at the time. We are now doing some LGPL releases, but someone had to pave the way for that. I'm glad there are completely unencumbered tools out there, though I don't think anybody has "notified" the lab about commercial use of pyMPI.

> excuse to see what I could do with Pyrex,

I've been wanting to play with Pyrex for a while... I envy you!

> PyxMPI now meets the needs of the application for which it was written

For a long time pyMPI was frozen for exactly the same reason. It did everything it needed to do and little more. Then I noticed how many people were downloading it! As now I am only peripherally supporting that original project, I am freer to make the tool more general.

Cheers and thanks!
Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
What hunger is in relation to food, zest is in relation to life. -- Bertrand Russell, philosopher, mathematician, and author (1872-1970)
|
From: Pat M. <pat...@ll...> - 2005-01-20 23:42:54
|
Peter wrote:
> pythonic API (everything is done via methods of communicator objects)

Just a note that pyMPI does it that way too. I "hoist" methods off of the WORLD communicator and shove them into the module to look like functions. This is done to ease the transition of C and FORTRAN programmers [who already know MPI]. Then the code looks a lot like procedural MPI, e.g.

MPI_Bcast(....)              |  n = mpi.bcast(local)
MPI_Barrier(MPI_COMM_WORLD)  |  mpi.barrier()
MPI_Allreduce(... MPI_SUM)   |  all = mpi.allreduce(local,mpi.SUM)

If you like a more objecty model, you can use the communicators and their methods. E.g.

# Split into two teams
red = mpi.WORLD.split(range(0,mpi.size,2))
black = mpi.WORLD.split(range(1,mpi.size,2))
if red is not None:
    # Must be RED
    r = red.bcast(r0)
else:
    # Must be black!
    b = black.bcast(b1)
mpi.WORLD.barrier()

* * * * * * * *

On related business... What features do you think I should be working on improving as the Python MPI field becomes more competitive? My current priority list is:

1) Direct [non-pickled] communication for Numarray and Numeric arrays
2) A "remote-communicating-objects" layer [one-sided, interrupting remote method calls]
3) SIMD support

Cheers,
Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
What hunger is in relation to food, zest is in relation to life. -- Bertrand Russell, philosopher, mathematician, and author (1872-1970)
|
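[Editor's sketch] A tiny runnable contrast of the two styles Pat describes: the hoisted module-level functions versus the same operations as methods of mpi.WORLD. The broadcast payload and the choice of mpi.rank as the reduced value are illustrative; the calls themselves are the ones named in the post:

import mpi

# Procedural, C-flavoured style: hoisted module functions
data = mpi.bcast("settings")                 # root's value ends up on every rank
mpi.barrier()
total = mpi.allreduce(mpi.rank, mpi.SUM)

# Object style: the same operations as methods of the WORLD communicator
data2 = mpi.WORLD.bcast("settings")
mpi.WORLD.barrier()
total2 = mpi.WORLD.allreduce(mpi.rank, mpi.SUM)

if mpi.rank == 0:
    print data, total, data2, total2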
From: Peter M. <pet...@an...> - 2005-01-20 23:22:31
|
Here's yet another: PyxMPI at http://cbis.anu.edu.au/software/cogent/PyxMPI-0.6.tar.gz is in some ways less complete than PyMPI or mpi4py but it does have a couple of virtues: a pythonic API (everything is done via methods of communicator objects) and compact source code - 800 lines of Pyrex. It will move pickled objects point-to-point but it's aimed primarily at operations on Numeric arrays. |