Thread: [Pympi-users] Re: Pympi-users digest, Vol 1 #15 - 4 msgs
Status: Alpha
From: Mike S. <st...@gm...> - 2005-01-29 09:05:52
Great posts Julian! Wish I could have replied to your questions before you
did. :-) I have to second your ideas for future pyMPI plans. A stable release
should be the first priority. Documentation would also be great. In fact,
we've got the beginnings of a simple webpage now, and it would be excellent
to begin posting additional pyMPI examples there. If you, or anyone else,
have code you would like to submit as an example of pyMPI usage, just e-mail
it to me and I will process and post it on the site.

~Mike

On Fri, 28 Jan 2005 20:18:05 -0800, pym...@li... <pym...@li...> wrote:
> Send Pympi-users mailing list submissions to
>     pym...@li...
>
> To subscribe or unsubscribe via the World Wide Web, visit
>     https://lists.sourceforge.net/lists/listinfo/pympi-users
> or, via email, send a message with subject or body 'help' to
>     pym...@li...
>
> You can reach the person managing the list at
>     pym...@li...
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Pympi-users digest..."
>
> Today's Topics:
>
>   1. pyMPI tested with python 2.4 on Sun (Julian Cook)
>   2. What I like about pyMPI is (Julian Cook)
>   3. Correct syntax for mpi.gather to concatenate items into a global list? (Julian Cook)
>   4. Fixed gather syntax where pyMPI hangs (Julian Cook)
>
> --__--__--
>
> Message: 1
> Reply-To: <rjc...@cs...>
> From: "Julian Cook" <rjc...@cs...>
> To: <pym...@li...>
> Date: Fri, 28 Jan 2005 12:24:59 -0500
> Subject: [Pympi-users] pyMPI tested with python 2.4 on Sun
>
> Long post regarding building pyMPI on Solaris. Last week Pat Miller (via
> email) suggested aiming for a stable release of pyMPI to coincide with
> python 2.4. This would allow new users to install a version known to work
> with 2.4 across a variety of platforms. To this end I decided to test pyMPI
> against 2.4 on Solaris.
>
> Here's the summary in case anyone is interested:
>
> This is for building pyMPI 2.1 with python 2.4 (30Nov04) on Solaris 2.8.
> Everything was installed into my home directories, not to /usr/bin etc.
>
> 1. Building Python 2.4 was easily the hardest part. I could not get
> configure/make to run with gcc, so I used the Sun CC compiler instead. I've
> had zero problems with gcc and previous versions of python on Sun (except
> for Tcl/Tk, which gets me every time). If you have similar problems, try
> ./configure --without-gcc. Also be aware that you need to separately change
> the linker command to cc as well, otherwise it will attempt to use g++.
>
> 2. I used mpich 1.2.6 for the MPI part. This built very easily, with no
> problems that I remember. I also used the Sun CC compiler for this. I
> tested it with "mpirun -np 4 cpi" (cpi is in examples/basic).
>
> 3. I downloaded pyMPI 2.1b4 and also configured it using
> ./configure --without-gcc. I had some problems with the pyMPI configure, as
> follows:
>
> a) I had not done "make install" for python, so the python-2.4/lib dir did
> not exist (used in $PYLIBDIR for checking whether site-packages exists).
> Fixed after installing python properly.
>
> b) pyMPI configure expected to find config.c in the main python-2.4 dir.
> The python configure script actually moved this file to the Modules/ dir. I
> had to copy config.c back from Modules/ into the main python dir.
>
> c) I had to move pyconfig.h into Include/, because that's where Python.h
> etc. needed it for compilation during the configure test of header files.
> The actual error from config.log, for the record, is:
>
>   #include <Python.h>
>   configure:5259: result: no
>   configure:5263: checking Python.h presence
>   configure:5270: /home/jcook/mpich/mpich-1.2.6/bin/mpicc -E -w -I/home/jcook/python/Python-2.4/Include conftest.c
>   "/home/jcook/python/Python-2.4/Include/Python.h", line 8: cannot find include file: "pyconfig.h"
>   "/home/jcook/python/Python-2.4/Include/pyport.h", line 4: cannot find include file: "pyconfig.h"
>
> 4. The actual make was error free.
> The only step that appears to be missing is the setup.py step, i.e.
> "python setup.py build". I could not see the setup.py file anywhere. There
> is a softload_setup.py file, but I don't know what it does.
>
> 5. I tested it on a 4-CPU server. mpich had autogenerated a
> machines.solaris file for me. Everything ran fine for non-interactive
> tests, e.g.
>
>   ale{jcook}60% mpirun -np 4 /home/jcook/python/Python-2.4/bin/pyMPI pyPI.py
>   Try computing with 1 rectangles CPU's: 4
>   0.0
>   0.0
>   0.0
>   Error is 3.2
>   Try computing with 2 rectangles CPU's: 4
>   ... etc etc
>
> 6. To run interactively, you have to configure pyMPI using
> "./configure --without-gcc --with-isatty". This will allow you to get to
> the python prompt. Notice in this release that you get an unintended extra
> newline, which is a known bug:
>
>   ale{jcook}66% mpirun -np 4 /home/jcook/python/Python-2.4/bin/pyMPI
>   Python 2.4 (pyMPI 2.1b4) on sunos5
>   Type "help", "copyright", "credits" or "license" for more information.
>   Python 2.4 (pyMPI 2.1b4) on sunos5
>   Type "help", "copyright", "credits" or "license" for more information.
>   Python 2.4 (pyMPI 2.1b4) on sunos5
>   Type "help", "copyright", "credits" or "license" for more information.
>   Python 2.4 (pyMPI 2.1b4) on sunos5
>   Type "help", "copyright", "credits" or "license" for more information.
>   >>>
>   import mpi
>   >>>
>   print mpi.rank, mpi.size
>   0 4
>   1 4
>   3 4
>   2 4
>   >>>
>
> The only irritating issue here is the 4x credits banner, which I wasn't
> expecting. This would obviously be a larger problem when you go to 50
> nodes. If anyone knows a fix for this, let me know.
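[The pyPI.py run in step 5 looks like the classic pi-by-rectangles MPI demo: integrate 4/(1+x^2) over [0,1] with the midpoint rectangle rule, each rank summing a strided subset of the rectangles, with a final reduction adding the partial sums. A minimal serial sketch of that numerical core, in plain Python with no MPI — `pi_partial` and its signature are illustrative, not pyMPI API:

```python
import math

def pi_partial(n, rank=0, size=1):
    """Midpoint-rule partial sum for pi = integral of 4/(1+x^2) on [0,1].

    Rank `rank` of `size` sums rectangles rank, rank+size, rank+2*size, ...
    Adding up the partial results of all ranks (a reduction in the real
    demo) gives the full estimate.
    """
    h = 1.0 / n
    total = 0.0
    for i in range(rank, n, size):
        x = h * (i + 0.5)            # midpoint of rectangle i
        total += 4.0 / (1.0 + x * x)
    return h * total

# Summing the four ranks' partials reproduces the serial estimate:
estimate = sum(pi_partial(100000, rank=r, size=4) for r in range(4))
print(abs(estimate - math.pi) < 1e-8)  # → True
```

Note that with 1 rectangle and 4 ranks, only rank 0 gets any work, which would explain the three 0.0 partials in the transcript above.]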
> Julian Cook
>
> --__--__--
>
> Message: 2
> Reply-To: <rjc...@cs...>
> From: "Julian Cook" <rjc...@cs...>
> To: <pym...@li...>
> Date: Fri, 28 Jan 2005 12:55:59 -0500
> Subject: [Pympi-users] What I like about pyMPI is
>
> [Pat Miller's] current priority list is:
>
>   1) Direct [non-pickled] communication for Numarray and Numeric arrays
>   2) A "remote-communicating-objects" layer [one-sided, interrupting
>      remote method calls]
>   3) SIMD support
> ---------------
>
> I already sent this reply to Pat at the admin list. For the record, what I
> like about pyMPI is:
>
> 1. The close correlation between the MPI calls and their python
> equivalents. It's not particularly OO or pythonic, but it makes life
> simple. It means you don't have to re-learn the API when switching from C
> MPI to pyMPI. This would be a big selling point to a current C MPI user.
> You just have to point out how much faster they can prototype with pyMPI.
>
> 2. The interactive ability is great. You never get the code right the
> first time, so being able to see, investigate and correct the intermediate
> functions as you go is very productive. Also, research is exploratory by
> nature. You get an answer, think about it, then try something else.
>
> 3. General ease of use. I think if people saw how easy it is to go
> parallel with pyMPI, it would be a big hit.
>
> In terms of priorities, I don't disagree with the above (Pat Miller's)
> priority list, since those items will probably increase throughput
> substantially. What I think would help in the short term is:
>
> 1. A release that can be considered the current stable release. This is
> important, because that's the one new users should be pointed to. This
> might be difficult, since it involves maintaining 2 branches of the code.
> (He did say that a stable release is a good idea, to match python 2.4.)
>
> 2. A documentation effort over the next 3-6 months. We could all help here
> by coming up with examples from our own fields.
> I actually work in Finance, so the examples I come up with would likely be
> very different from those of other current users. I noticed that Mocapy
> exists. That should be highlighted too.
>
> 3. Get everyone to write in and talk about what they are doing. Not only
> is it interesting, but a vibrant user community attracts new users like a
> magnet.
>
> I actually had pyMPI running at home, but I recently moved house, so I
> need to set up my network again. I was on version 2.1b4. I'm getting pyMPI
> properly installed at work (see other post), where we have 100+
> sparcstations and a bunch of servers. We also use [in-house developed]
> parallel processing at work in our product for speeding up scenario
> analysis of derivative trades.
>
> regards
>
> Julian Cook
>
> --__--__--
>
> Message: 3
> Reply-To: <rjc...@cs...>
> From: "Julian Cook" <rjc...@cs...>
> To: <pym...@li...>
> Date: Fri, 28 Jan 2005 17:26:02 -0500
> Subject: [Pympi-users] Correct syntax for mpi.gather to concatenate items into a global list?
>
> Can someone tell me what is wrong with the following short script. I'm
> trying to gather an item from a local list on each node into a global
> list.
>
>   >>> import mpi, os
>   >>> machine = os.uname()
>   >>> machine[1], mpi.rank
>   ('ale', 0)
>   ('ale', 1)
>   ('ale', 2)
>   ('ale', 3)
>   # machine[1] is the node name
>   >>> if mpi.rank == 0:
>   ...     gr = mpi.gather(machine[1])  # just get the node name
>
> At this point pyMPI hangs. I've tried declaring gr as gr = [] everywhere,
> just in the master, and also tried not declaring it. pyMPI always hangs.
> Usually you cannot assign items to a list, you have to append them, but
> possibly gather does that under the hood anyway.
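[This hang is the usual collective-call deadlock: a gather is a collective operation that every rank must call, so guarding it with `if mpi.rank == 0:` leaves rank 0 blocked waiting for contributions the other ranks never send. And yes, gather does do the appending under the hood: each rank's contribution is concatenated in rank order. A plain-Python model of the result — no MPI, and `model_gather` is an illustrative name, not part of pyMPI:

```python
def model_gather(contributions):
    """Toy model of a gather's result: contributions[r] is the list
    rank r contributes; every participating rank gets back the
    rank-ordered concatenation."""
    gathered = []
    for per_rank in contributions:  # every rank must participate
        gathered.extend(per_rank)
    return gathered

# Four ranks on host 'ale', each contributing its node name:
print(model_gather([["ale"], ["ale"], ["ale"], ["ale"]]))
# → ['ale', 'ale', 'ale', 'ale']
```

In a real run there is no central list to model: if even one rank skips the call, the collective never completes, which is exactly the hang described above.]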
>
> --__--__--
>
> Message: 4
> Reply-To: <rjc...@cs...>
> From: "Julian Cook" <rjc...@cs...>
> To: <pym...@li...>
> Date: Fri, 28 Jan 2005 17:50:12 -0500
> Subject: [Pympi-users] Fixed gather syntax where pyMPI hangs
>
> I fixed the previous post by removing "if mpi.rank == 0:"; I was thinking
> that the global (gathering) code should only be executed by the master.
> This code works (using 2 instances of pyMPI on one server):
>
>   >>> import mpi, os
>   >>> machine = os.uname()
>   >>> machine_name = []
>   >>> machine_name.append(machine[1])
>   # now gather
>   >>> global_name = mpi.gather(machine_name)
>   ['ale']
>   >>> global_name
>   ['ale', 'ale']
>
> The only odd part is that you get output from ">>> global_name =
> mpi.gather(machine_name)", which is not really correct.
>
> Julian Cook
>
> --__--__--
>
> _______________________________________________
> Pympi-users mailing list
> Pym...@li...
> https://lists.sourceforge.net/lists/listinfo/pympi-users
>
> End of Pympi-users Digest