pympi-users Mailing List for MPI Python (Page 2)
From: Pat M. <pat...@gm...> - 2006-11-04 01:41:31
This one sounds pretty serious... The request object is really in need of an overhaul (it remained basically untouched during the great 2.0 rewrite in 2003). I will see if I can recreate it. If not, I can send you a patch that will help shed light on what is happening.

Pat

On 11/3/06, emi...@en... <emi...@en...> wrote:
> I have coded up a fairly simple Manager/Worker style MPI application from
> within pyMPI and have been using it for some time now to run some jobs.
>
> At the core of the Manager process I have, in essence:
>
> request = mpi.irecv()
> while( there is work to do )
>     if ( there is an idle worker )
>         mpi.send( job to idle worker )
>     if request:
>         process worker result
>         if ( more results expected )
>             request = mpi.irecv()
>
> This is certainly a simplified version of the code, but the algorithm and
> the calls to mpi.irecv are provided to show where I'm doing this.
>
> This has run fine without issue for several weeks now. Recently the issue
> below has started to crop up. My only guess for the reason is that my
> job/result sizes are much larger than they used to be. This might be
> increasing the likelihood that the issue will arise (it's not 100%, but I
> can get it to crash in about 1 of 5 application runs).
>
> The error occurs on the line
>
> if request:
> ValueError: Fatal internal unpickling error
>
> I am concerned that I am not using the mpi module packaged with pyMPI
> correctly. Should I be using a different algorithm for dispatching the
> jobs to worker processes? I'm just not sure what is causing this, since I
> have made no changes to the Manager/Worker code module I developed and the
> only difference is the larger job/result messages (around 500+ characters
> now).
>
> Eamon Millman
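For readers following the thread, a minimal runnable sketch of the manager/worker loop described above, written against pyMPI's mpi module, might look like the code below. It is an illustration under assumptions rather than Eamon's actual code: the job list, the (rank, result) reply convention, the None shutdown sentinel, and the request object's .message attribute are invented or assumed for the example.

    import mpi

    # Minimal sketch (needs at least 2 ranks): rank 0 is the manager,
    # everyone else is a worker. Workers reply with (their rank, result)
    # so the manager knows who just went idle.
    if mpi.rank == 0:
        jobs = range(20)                       # placeholder work items
        idle = range(1, mpi.size)              # every worker starts out idle
        expected = len(jobs)
        request = mpi.irecv()                  # post the nonblocking receive
        while expected:
            if jobs and idle:
                mpi.send(jobs.pop(), idle.pop())
            if request:                        # true once a result has arrived
                worker, result = request.message   # .message assumed per pyMPI usage
                idle.append(worker)            # that worker is idle again
                expected -= 1
                if expected:
                    request = mpi.irecv()      # re-arm for the next result
        for w in range(1, mpi.size):
            mpi.send(None, w)                  # tell each worker to stop
    else:
        while True:
            job, status = mpi.recv()           # assumed to return (message, status)
            if job is None:
                break
            mpi.send((mpi.rank, job * job), 0) # stand-in "work"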
From: <emi...@en...> - 2006-11-03 23:37:28
I have coded up a fairly simple Manager/Worker style MPI application from within pyMPI and have been using it for some time now to run some jobs. At the core of the Manager process I have, in essence:

request = mpi.irecv()
while( there is work to do )
    if ( there is an idle worker )
        mpi.send( job to idle worker )
    if request:
        process worker result
        if ( more results expected )
            request = mpi.irecv()

This is certainly a simplified version of the code, but the algorithm and the calls to mpi.irecv are provided to show where I'm doing this.

This has run fine without issue for several weeks now. Recently the issue below has started to crop up. My only guess for the reason is that my job/result sizes are much larger than they used to be. This might be increasing the likelihood that the issue will arise (it's not 100%, but I can get it to crash in about 1 of 5 application runs).

The error occurs on the line

if request:
ValueError: Fatal internal unpickling error

I am concerned that I am not using the mpi module packaged with pyMPI correctly. Should I be using a different algorithm for dispatching the jobs to worker processes? I'm just not sure what is causing this, since I have made no changes to the Manager/Worker code module I developed and the only difference is the larger job/result messages (around 500+ characters now).

Eamon Millman
From: David E <3d...@gm...> - 2006-08-28 23:04:34
Hi Dave, I'm not sure if this is the problem you have, but here goes: I noticed that Ctrl-d kills one of the processes, so if you started 3 processes, pressing Ctrl-d 3 times should kill all 3 of them. (The last time I used pympi was about 6 months ago, so I'm not sure...) Hope this helps,

David E.
3d...@gm...

On 8/29/06, Dave Grote <dp...@lb...> wrote:
> Hi Pat,
> I still use pyMPI and am trying to get it to work on a new cluster of
> opterons that my group at LBL recently bought. It uses openmpi with
> myrinet. I've used the isatty config option and modified the
> pyMPI_isatty.c file to get it to compile. Everything seems to be working
> OK (both in batch and interactively) except that control-d doesn't work
> for interactive jobs. Is this something you've seen or heard about
> before? I didn't find anything like this in the mailing list. Any
> suggestions on where to look for the problem?
>
> BTW - congratulations on your new job in NYC. I hope it is going well
> for you.
> Dave

--
David.
From: Dave G. <dp...@lb...> - 2006-08-28 22:34:22
Hi Pat, I still use pyMPI and am trying to get it to work on a new cluster of opterons that my group at LBL recently bought. It uses openmpi with myrinet. I've used the isatty config option and modified the pyMPI_isatty.c file to get it to compile. Everything seems to be working OK (both in batch and interactively) except that control-d doesn't work for interactive jobs. Is this something you've seen or heard about before? I didn't find anything like this in the mailing list. Any suggestions on where to look for the problem?

BTW - congratulations on your new job in NYC. I hope it is going well for you.

Dave
From: Pat M. <pat...@gm...> - 2006-07-18 20:02:24
Luigi has identified a real bug... None of my test machines at Livermore noticed it (the isatty workaround isn't needed by any of them), but my new work machine exhibits the same bug. I'll try to get a new version out with the bugfix. Meanwhile, you can just delete the __THROW if it comes up.

Pat
From: Julian C. <jul...@ya...> - 2006-07-18 19:11:53
Luigi, I have a slightly different version of this file; I pasted the entire file below. The difference appears to be that the __THROW portion is defined at the top instead. You can try this version. Alternately, you could try a more adventurous change and remove the HAVE_MPC_ISATTY section entirely, though you would need to understand which #if and #endif lines to remove.

Julian

#include "mpi.h"
#include "Python.h"
#include "pyMPI.h"
#include "pyMPI_Macros.h"

#ifdef HAVE_MPC_ISATTY
#include <pm_util.h>
#endif

#ifndef __THROW
#define __THROW
#endif

START_CPLUSPLUS

#ifdef HAVE_MPC_ISATTY
/**************************************************************************/
/* GLOBAL **************************** isatty *****************************/
/**************************************************************************/
/* Replacement for isatty() with correct results under AIX's POE          */
/**************************************************************************/
int isatty(int filedes) __THROW {
  int status;

  /* ----------------------------------------------- */
  /* Do the isatty() work                             */
  /* ----------------------------------------------- */
  status = (mpc_isatty(filedes) == 1);
  return status;
}

#else
#if PYMPI_ISATTY
/**************************************************************************/
/* GLOBAL **************************** isatty *****************************/
/**************************************************************************/
/* Assume stdin, stdout, stderr are attached to a tty                     */
/**************************************************************************/
int isatty(int filedes) __THROW {
  return (filedes == 0 || filedes == 1 || filedes == 2);
}
#endif
#endif

END_CPLUSPLUS

----- Original Message ----
From: Luigi Paioro <lu...@la...>
To: Julian Cook <jul...@ya...>
Cc: pym...@li...
Sent: Tuesday, July 18, 2006 4:47:55 AM
Subject: Re: [Pympi-users] Problem running pyMPI with OpenMPI

> 2. It appears that your pympi build will actually run non-interactively,
> though I suggest you confirm it by creating a non-trivial script, such
> as the pi example, and running it as a file:
>
> $ mpirun -np 3 pyMPI pi_test.py

It seems to work:

$ mpirun -np 3 pyMPI fractal.py
Starting computation (groan)
process 1 done with computation!!
process 2 done with computation!!
process 0 done with computation!!
Header length is 54
BMP size is (400, 400)
Data length is 480000
Pretty output image!

For the time being I can test only one CPU; anyway, 3 parallel processes started.

> 3. If this runs with good output over all cpu's then the probable cause
> is the build: you need to add --isatty to the configure. There is an
> example of building on Solaris in the mailing list [2005] that discusses
> this. Also there are newline config flags that need to be considered.

Well, I've tried with these options:

$ CC=mpicc; ./configure -prefix=<inst path> --with-includes=-I<mpi path>/include --with-isatty --with-prompt-nl

but I get this error:

mpicc -DHAVE_CONFIG_H -I. -I. -I. -I<mpi path>/include -I/usr/include/python2.4 -g -O2 -g -O2 -c `test -f 'pyMPI_isatty.c' || echo './'`pyMPI_isatty.c
pyMPI_isatty.c:52: error: syntax error before '{' token
make[1]: *** [pyMPI_isatty.o] Error 1
make[1]: Leaving directory `<src path>/pyMPI-2.4b4'
make: *** [all] Error 2

> 4. If the above test doesn't work, you need to fall back to testing mpi
> itself, using the examples in the mpi installation. pympi is effectively
> an mpi program, so mpi itself must work for python to work.

MPI itself works!

Thank you.

Luigi
From: Pat M. <pat...@gm...> - 2006-07-18 11:36:08
Hello all, I've just started my new job in New York City and will shortly be active in the pyMPI world again.

To answer Luigi's question: scatter takes any container (anything that supports length and slicing) and splits it into nearly equal pieces (the low ranks get the extras). Each piece is sent in a single message to the target rank. That is Luigi's option (a): the container is split into mpi.size groups up front, not doled out one element at a time. So, with np=2:

A = [11,22,33,44,55]
localA = mpi.scatter(A)

on rank 0, localA is [11,22,33]
on rank 1, localA is [44,55]

Notice this works for anything that looks vaguely like a list, so you can scatter a dictionary with D.iteritems(), for instance. The result localA is always a list, however; the original container type is not preserved.

Pat
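A runnable version of Pat's example, as a sketch only: the gather round trip and the printed values assume pyMPI behaves as described above and in the 2002 paper (depending on the pyMPI version, the gathered result may only be meaningful on rank 0), and the script name is arbitrary.

    import mpi

    A = [11, 22, 33, 44, 55]
    localA = mpi.scatter(A)        # nearly equal pieces; low ranks get the extras
    print 'rank', mpi.rank, 'has', localA
    # with np=2: rank 0 has [11, 22, 33] and rank 1 has [44, 55]

    B = mpi.gather(localA)         # assumed to reassemble one larger Python list
    if mpi.rank == 0:
        print 'gathered:', B       # [11, 22, 33, 44, 55]

Run with something like: mpirun -np 2 pyMPI scatter_demo.py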
From: Luigi P. <lu...@la...> - 2006-07-18 09:28:43
This is a general question about how the gather and scatter functions work. I refer to the Miller 2002 paper, which says that a "simple way to achieve parallelism is with gather/scatter parallelism. A scatter operation will take a container, split it into equal (or nearly equal) parts that are messaged to various slave tasks. A gather reverses that and collects sub-containers together into one larger Python list."

This is the example code:

import mpi
import crypt
if mpi.rank == 0:
    words = open('/usr/dict/words').read().split()
else:
    words = []
local_words = mpi.scatter(words)
target = 'xxaGcwiAKoYgc'
for word in local_words:
    if crypt.crypt(word, target[:2]) == target:
        print 'the word is', word
        break

I would like to understand whether

a) the list is split into mpi.size groups and then each group is sent to a parallel task, or
b) each list entry is successively sent to the first parallel task free?

Example (np=2):

local_words = ['Luigi', 'Dough', 'Patrick', 'Julian', 'James']

a) local_words_0 = ['Luigi', 'Dough', 'Patrick'] --> to rank 0
   local_words_1 = ['Julian', 'James'] --> to rank 1

b) for word in local_words:
       word --> single list element sent to the first "rank" free

Hope I'm clear. Thanks.

Luigi
From: Luigi P. <lu...@la...> - 2006-07-18 08:48:09
> 2. It appears that your pympi build will actually run non-interactively,
> though I suggest you confirm it by creating a non-trivial script, such
> as the pi example, and running it as a file:
>
> $ mpirun -np 3 pyMPI pi_test.py

It seems to work:

$ mpirun -np 3 pyMPI fractal.py
Starting computation (groan)
process 1 done with computation!!
process 2 done with computation!!
process 0 done with computation!!
Header length is 54
BMP size is (400, 400)
Data length is 480000
Pretty output image!

For the time being I can test only one CPU; anyway, 3 parallel processes started.

> 3. If this runs with good output over all cpu's then the probable cause
> is the build: you need to add --isatty to the configure. There is an
> example of building on Solaris in the mailing list [2005] that discusses
> this. Also there are newline config flags that need to be considered.

Well, I've tried with these options:

$ CC=mpicc; ./configure -prefix=<inst path> --with-includes=-I<mpi path>/include --with-isatty --with-prompt-nl

but I get this error:

mpicc -DHAVE_CONFIG_H -I. -I. -I. -I<mpi path>/include -I/usr/include/python2.4 -g -O2 -g -O2 -c `test -f 'pyMPI_isatty.c' || echo './'`pyMPI_isatty.c
pyMPI_isatty.c:52: error: syntax error before '{' token
make[1]: *** [pyMPI_isatty.o] Error 1
make[1]: Leaving directory `<src path>/pyMPI-2.4b4'
make: *** [all] Error 2

> 4. If the above test doesn't work, you need to fall back to testing mpi
> itself, using the examples in the mpi installation. pympi is effectively
> an mpi program, so mpi itself must work for python to work.

MPI itself works!

Thank you.

Luigi
From: Julian C. <jul...@ya...> - 2006-07-17 18:24:37
Sorry for the delay in replying.

1. This has nothing to do with your path.

2. It appears that your pympi build will actually run non-interactively, though I suggest you confirm it by creating a non-trivial script, such as the pi example, and running it as a file:

$ mpirun -np 3 pyMPI pi_test.py

3. If this runs with good output over all cpu's then the probable cause is the build: you need to add --isatty to the configure. There is an example of building on Solaris in the mailing list [2005] that discusses this. Also there are newline config flags that need to be considered.

4. If the above test doesn't work, you need to fall back to testing mpi itself, using the examples in the mpi installation. pympi is effectively an mpi program, so mpi itself must work for python to work.

Julian Cook

----- Original Message ----
From: Luigi Paioro <lu...@la...>
To: pym...@li...
Sent: Friday, July 7, 2006 5:01:30 AM
Subject: [Pympi-users] Problem running pyMPI with OpenMPI

Hi! Sorry to bother you, but I have yet another problem with pyMPI and OpenMPI. I was reading your article (P. Miller, 2002), "An introduction to parallel Python", in order to learn how to use pyMPI. Well, just at my first attempt to use it, something doesn't work properly. If I type:

$ mpirun -np 3 pyMPI

the >>> prompt doesn't appear, and if I type some commands, they are executed (only once) only when I press Ctrl+d to exit from the Python shell. Here is my output:

$ mpirun -np 3 pyMPI
print "Hello!"
(Ctrl+d)
Hello!
$

So, I thought that this was a problem with my PYTHONPATH, which doesn't define how to reach the mpi module (mpi.py). Well, I've searched for it in my installation directory, but I haven't found it! Can you suggest some tests or something else in order to understand what the trouble is (and solve it)?

Thank you!

Luigi
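The pi example Julian mentions is not reproduced in this thread. As a stand-in, a minimal script that exercises the same mpirun path might look like the sketch below; the file name pi_test.py, the sample count, and the use of mpi.allreduce with mpi.SUM are assumptions based on pyMPI's documented API, not Julian's actual test file.

    import mpi
    import random

    # Monte Carlo estimate of pi: every rank samples independently,
    # then the hit counts are summed across all ranks.
    samples = 100000
    hits = 0
    for i in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1

    total = mpi.allreduce(hits, mpi.SUM)   # mpi.SUM assumed per pyMPI's API
    if mpi.rank == 0:
        print 'pi is roughly', 4.0 * total / (samples * mpi.size)

Run as: $ mpirun -np 3 pyMPI pi_test.py — only rank 0 prints the estimate.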
From: Luigi P. <lu...@la...> - 2006-07-07 09:01:52
Hi! Sorry to bother you, but I have yet another problem with pyMPI and OpenMPI. I was reading your article (P. Miller, 2002), "An introduction to parallel Python", in order to learn how to use pyMPI. Well, just at my first attempt to use it, something doesn't work properly. If I type:

$ mpirun -np 3 pyMPI

the >>> prompt doesn't appear, and if I type some commands, they are executed (only once) only when I press Ctrl+d to exit from the Python shell. Here is my output:

$ mpirun -np 3 pyMPI
print "Hello!"
(Ctrl+d)
Hello!
$

So, I thought that this was a problem with my PYTHONPATH, which doesn't define how to reach the mpi module (mpi.py). Well, I've searched for it in my installation directory, but I haven't found it! Can you suggest some tests or something else in order to understand what the trouble is (and solve it)?

Thank you!

Luigi
From: Luigi P. <lu...@la...> - 2006-06-26 08:17:07
OK, thank you guys! Setting CC=mpicc, everything works! Thank you again!

Cheers,

Luigi

Pat Miller wrote:
> I think the issue is that the configure did not properly identify the
> MPI C compiler.
>
> Try something like:
>
> % env CC="my_mpi_c_compiler" ./configure ......
>
> where my_mpi_c_compiler is typically something like
>
> mpicc
> mpiicc
> mpiicpc
> ...
>
> The configure script guesses the ones I know about, but sometimes cannot
> intuit how to build a valid MPI program from C.
>
> Pat
From: Pat M. <pat...@ll...> - 2006-06-23 21:18:05
This is my next to last week here at Lawrence Livermore National Laboratory. I'll be starting a job in New York City in the middle of July. The new work involves Python and parallelism, so pyMPI will still be supported and enhanced. My sourceforge email will continue to work through the transition.

Cheers!

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
Laws are the spider's webs which, if anything small falls into them they ensnare it, but large things break through and escape. -- Solon
From: Pat M. <pat...@ll...> - 2006-06-23 21:16:41
I think the issue is that the configure did not properly identify the MPI C compiler.

Try something like:

% env CC="my_mpi_c_compiler" ./configure ......

where my_mpi_c_compiler is typically something like

mpicc
mpiicc
mpiicpc
...

The configure script guesses the ones I know about, but sometimes cannot intuit how to build a valid MPI program from C.

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
Laws are the spider's webs which, if anything small falls into them they ensnare it, but large things break through and escape. -- Solon
From: Julian C. <jul...@ya...> - 2006-06-23 20:42:58
There should not be any difference between mpich, where, for example, the function is declared as:

int MPI_Finalize(void);

and openmpi, shown here:

OMPI_DECLSPEC int MPI_Finalize(void);

...because it looks as though OMPI_DECLSPEC resolves to "" if the platform is not win32. So I would guess that the difference in the function declarations should not matter. It looks as though your configure fails here:

configure:6352: checking for MPI capability
configure:6372: /usr/bin/gcc -o conftest -g -O2 -I/antigone/luigi/work/software/mpi/install/include -I/usr/include/python2.4 conftest.c -lm -L/usr/lib/python2.4/config -lpython2.4 -Xlinker -export-dynamic -lpthread -ldl -lutil -lm >&5
/tmp/ccEJ7LmX.o: In function `main':
/antigone/luigi/work/software/mpi/pyMPI-2.4b4/configure:6363: undefined reference to `MPI_Init'
collect2: ld returned 1 exit status
configure:6375: $? = 1
configure: failed program was:
#line 6354 "configure"
#include "confdefs.h"
#include "mpi.h"
#include <stdlib.h>
int main ()
{
  MPI_Init(NULL,NULL);
  ;
  return 0;
}

I am not sure whether it decides that mpi.h should be included from the current directory, or whether it is looking in the include path, i.e. /antigone/luigi/work/software/mpi/install/include, but in any case, I don't think that it is actually finding the declaration of any MPI function. First I would suggest grepping for the function, i.e.

grep -n MPI_Finalize /antigone/luigi/work/software/mpi/install/include/*.h

This will at least tell you if the path contains the function in any .h file. If that works, then somehow the include is failing in the configure phase. You usually need to run configure with --with-includes='-Ifoo ...' to add additional include paths; however, if it could not find the mpi.h file at all, you should have got a different error, e.g. "Cannot build without mpi headers. use --with-includes=-I/...". In your case it seems to have found it here:

configure:4925: checking for --with-includes
configure:4932: result: -I/antigone/luigi/work/software/mpi/install/include

Also, I'm surprised that you are using gcc; you should be using the openmpi compiler, i.e. your make file should have something like:

CC = /home/jcook/mpich/mpich-1.2.6/bin/mpicc

Usually I run my configure like this:

CC=mpicc ./configure --prefix=/home/jcook/python/Python-2.4/bin --with-isatty

So I would try changing the compiler first.

Julian Cook

----- Original Message ----
From: Luigi Paioro <lu...@la...>
To: Julian Cook <jul...@ya...>
Sent: Friday, June 23, 2006 5:14:43 AM
Subject: Re: [Pympi-users] Compile problem with OpenMPI

Hi Julian. Thank you for your quick answer. The mpi.h file has the declarations:

OMPI_DECLSPEC int MPI_Finalize(void);

and

OMPI_DECLSPEC int MPI_Init(int *argc, char ***argv);

I attach the mpi.h file so you can look in it.

Luigi

Julian Cook wrote:
> Luigi
>
> I looked at your config file. All the references to MPI_ type calls seem
> to be missing, starting with
>
> undefined reference to `MPI_Init'
>
> It seems to be looking here:
>
> -I/antigone/luigi/work/software/mpi/install/include
>
> and seems to find mpi.h, but none of the MPI C functions appear to be found.
>
> ----- Original Message ----
> From: Luigi Paioro <lu...@la...>
> To: pym...@li...
> Sent: Thursday, June 22, 2006 4:47:50 AM
> Subject: [Pympi-users] Compile problem with OpenMPI
>
> Hello!
>
> I'm trying to compile pyMPI with the OpenMPI (1.2a) MPI implementation. I've
> got many errors like:
>
> <path>/pyMPI_main.c:76: undefined reference to `MPI_Finalize'
>
> I attach my config.log and "make clean all install" command output in
> order to help you to understand my problem.
>
> Can you help me?
>
> Thank you in advance.
>
> Luigi
From: Luigi P. <lu...@la...> - 2006-06-22 08:47:57
Hello!

I'm trying to compile pyMPI with the OpenMPI (1.2a) MPI implementation. I've got many errors like:

<path>/pyMPI_main.c:76: undefined reference to `MPI_Finalize'

I attach my config.log and "make clean all install" command output in order to help you to understand my problem.

Can you help me?

Thank you in advance.

Luigi
From: Julian C. <rjc...@cs...> - 2006-02-22 01:29:04
[removed HTML formatting]

Sorry for the delay in replying. Pat Miller is better qualified to answer this question, but I will try to outline the answer.

pympi is itself a special version of the python interpreter that is mpi aware. Most users are specifically using pympi because it allows them to AVOID having to write parallel programs in a low-level language like C.

pympi is mpi aware because it initialises the MPI processes at startup. You access them through the loading of the mpi module with this statement:

>>> import mpi

There is an example of the other approach, where the mpi calls are made in a compiled module, here:

http://pympi.sourceforge.net/examples.html

See the simple extension example. This is actually a python extension that also includes mpi calls, using the MPI C api directly without going through python; for instance, it uses the C api to get its own rank:

MPI_Comm_rank(MPI_COMM_WORLD,&rank);

I have not used the C api myself from pympi, for the reasons I stated at the beginning.

regards

Julian Cook
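For contrast with the compiled-extension route Julian links to, the pure-pympi equivalent of that rank query needs no C at all; a trivial sketch (the printed wording is arbitrary):

    import mpi

    # What MPI_Comm_rank(MPI_COMM_WORLD, &rank) does in C, from inside pympi:
    print 'I am rank', mpi.rank, 'of', mpi.size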
From: Chandrasekar S. <cha...@gm...> - 2006-02-04 18:59:00
I read about the PyMPI tool and I found it to be very interesting. I was wondering if I could use this tool for my purpose. It would be great if you could suggest a solution after I describe the problem below.

I am dealing with software called E-cell. The structure of the software goes like this:

Core Layer --------------------------------------------------> C++
Interface Layer ----------------------------------------------> Python

It uses the Boost library.

When the software is installed (using the make command), all C++ files are compiled into a .so.2 file. Then Python code references this .so.2 file to make use of the functions in C++. The main exists in Python and not in C++. We want to parallelize this software, so we changed the compiler from gcc to mpicc. Since Python is the interface layer, we are still not able to initialize MPI and use it in the C++ program. Will installing PyMPI solve our problem? All the MPI coding that we are going to perform will be done in a C++ file, but ultimately Python will reference it. Python acts as a master and uses the functions in C++.

mpicc can compile C++ programs. Can PyMPI also compile C++ programs?

Can you please let us know if installing PyMPI can solve our problem?

Thanks,

Chandu
From: <db...@br...> - 2005-09-26 22:35:32
Thanks! Now working. Details below.

> Doug,
> did you change
>
> int isatty(int filedes) __THROW {
>   return (filedes == 0 || filedes == 1 || filedes == 2);
> }
>
> to
>
> int isatty(int filedes) {
>   return (filedes == 0 || filedes == 1 || filedes == 2);
> }

Originally, I had just deleted those lines. But I tried your safe version below, and that gave an error in the make:

pyMPI_isatty.c:43: Warning: FORMAT ERROR
Traceback (most recent call last):
  File "./utils/grind_docs_and_prototypes.py", line 530, in ?
    actor.check(source,file)
  File "./utils/grind_docs_and_prototypes.py", line 179, in check
    method(kind,name,file,line,follow_line,*arguments)
  File "./utils/grind_docs_and_prototypes.py", line 306, in GLOBAL
    raise ValueError,follow
ValueError: int isatty(int filedes)
make[1]: *** [pyMPI_Externals.h] Error 1
make[1]: Leaving directory `/home/setup/pyMPI-2.4b3'
make: *** [all] Error 2

Then I tried the version above without the __THROW, and that worked. Hope that gives you some hints on how you can write that portably.

Thanks again,

-Doug

> I should likely change it to something safer (if less readable) like:
>
> int isatty(int filedes)
> #ifdef __THROW
> __THROW
> #endif
> {
>   return (filedes == 0 || filedes == 1 || filedes == 2);
> }
>
> The test suite (from make check, which is more extensive than PyMPITest.py)
> doesn't do a good job of testing interactive use (and indeed, I didn't build
> a version --with-isatty when I did the release test!). I'm not sure how
> to do that automatically.
>
> Pat
From: Pat M. <pat...@ll...> - 2005-09-26 21:56:10
> I was able to compile pyMPI once I took out the offending lines, as you suggested below.
>
> I actually had something that worked between two particular machines for a short while, but I broke it again. I'm in the middle of upgrading the entire cluster, so I'll make sure that I have everything the same on all machines.
>
> There are so many places that can cause havoc: selinux, iptables, rsh, lam, mpirun, python... Does anyone have a step-by-step check list for getting up and running, especially on a Fedora Core 4 machine? I'll help write it, if I ever get it working...

Doug, did you change

int isatty(int filedes) __THROW {
  return (filedes == 0 || filedes == 1 || filedes == 2);
}

to

int isatty(int filedes) {
  return (filedes == 0 || filedes == 1 || filedes == 2);
}

I should likely change it to something safer (if less readable) like:

int isatty(int filedes)
#ifdef __THROW
__THROW
#endif
{
  return (filedes == 0 || filedes == 1 || filedes == 2);
}

The test suite (from make check, which is more extensive than PyMPITest.py) doesn't do a good job of testing interactive use (and indeed, I didn't build a version --with-isatty when I did the release test!). I'm not sure how to do that automatically.

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller
I have discovered that all human evil comes from this, man's being unable to sit still in a room. -- Blaise Pascal, philosopher & mathematician (1623-1662)
From: Douglas S. B. <db...@br...> - 2005-09-26 21:47:19
Thanks, Julian, for the help and the offer. An update: I was able to compile pyMPI once I took out the offending lines, as you suggested below.

I actually had something that worked between two particular machines for a short while, but I broke it again. I'm in the middle of upgrading the entire cluster, so I'll make sure that I have everything the same on all machines.

There are so many places that can cause havoc: selinux, iptables, rsh, lam, mpirun, python... Does anyone have a step-by-step check list for getting up and running, especially on a Fedora Core 4 machine? I'll help write it, if I ever get it working...

-Doug

Julian Cook wrote:
> Doug
>
> I'm not really sure why it's executing that section. From your configure you have:
>
> checking for pm_util.h... no
> checking for mpc_flush... no
> checking for mpc_isatty... no
>
> Which suggests that you should be compiling with the PYMPI_ISATTY section.
>
> If in doubt you could try compiling after removing the MPC section and test
> that. Otherwise let me know. I'm actually about 5 miles from you
> (Conshohocken), so we can compare the configure/compile process over the
> phone if you get stuck.
>
> [Doug's earlier message, with the compile error and the full configure output, is quoted in full below.]
From: Julian C. <rjc...@cs...> - 2005-09-26 14:52:11
Doug

I'm not really sure why it's executing that section. From your configure you have:

checking for pm_util.h... no
checking for mpc_flush... no
checking for mpc_isatty... no

Which suggests that you should be compiling with the PYMPI_ISATTY section.

If in doubt you could try compiling after removing the MPC section and test that. Otherwise let me know. I'm actually about 5 miles from you (Conshohocken), so we can compare the configure/compile process over the phone if you get stuck.

-----Original Message-----
From: db...@br... [mailto:db...@br...]
Sent: Monday, September 26, 2005 8:42 AM
To: rjc...@cs...
Cc: db...@br...; pym...@li...
Subject: RE: [Pympi-users] pyMPI interactively

Thanks, this sounds like it is the problem. However, when I compile it, I get the compilation error:

pyMPI_isatty.c:48: error: syntax error before '{' token
/usr/bin/mpicc: No such file or directory
make[1]: *** [pyMPI_isatty.o] Error 1
make[1]: Leaving directory `/home/setup/pyMPI-2.4b3'
make: *** [all] Error 2

[The full configure output and earlier quoted advice are identical to Doug's original message below.]

I suspect that I am missing something (mpc_isatty?) and will poke around to see what I need to do to get it.

-Doug
From: <db...@br...> - 2005-09-26 12:42:11
Thanks, this sounds like it is the problem. However, when I compile it, I get the compilation error:

pyMPI_isatty.c:48: error: syntax error before '{' token
/usr/bin/mpicc: No such file or directory
make[1]: *** [pyMPI_isatty.o] Error 1
make[1]: Leaving directory `/home/setup/pyMPI-2.4b3'
make: *** [all] Error 2

Here is what I get from configure:

checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for gawk... gawk
checking whether make sets ${MAKE}... yes
checking for ranlib... ranlib
checking host overrides... no
checking fatal error on cancel of isend (--with-bad-cancel)... no
checking Assume stdin is interactive (--with-isatty)... yes
checking Append a newline to prompt (--with-prompt-nl)...
checking for mpcc... no
checking for mpxlc... no
checking for mpiicc... no
checking for mpicc... mpicc
checking for C compiler default output... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... yes
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether mpicc accepts -g... yes
checking for mpicc option to accept ANSI C... none needed
checking for style of include used by make... GNU
checking dependency style of mpicc... none
checking for an ANSI C-conforming const... yes
checking for mpicc is really C++... checking how to run the C preprocessor... mpicc -E
checking for egrep... grep -E
no
checking for sed... /bin/sed
checking for grep... /bin/grep
checking for mpiCC... mpiCC
checking whether we are using the GNU C++ compiler... yes
checking whether mpiCC accepts -g... yes
checking dependency style of mpiCC... none
checking for mpicc... /usr/bin/mpicc
checking for mpiCC... /usr/bin/mpiCC
checking if /usr/bin/mpicc -E -w is a valid CPP... yes
checking how to run the C preprocessor... /usr/bin/mpicc -E -w
checking for --with-python... no
checking executable /usr/bin/python2.4... yes
checking for Python... /usr/bin/python2.4
checking for MPIRun.exe... no
checking for mpirun... /usr/bin/mpirun
checking for poe... no
checking Python version 2.2 or higher... yes
checking distutils?... yes
checking distutils works... yes
checking Numeric?... yes
checking Numarray?...
checking Python version string... 2.4
checking install prefix for /usr/bin/python2.4... /usr
checking Prefix exists...... yes
checking for python include location... /usr/include/python2.4
checking that include directory exists... yes
checking for python library location... /usr/lib/python2.4/site-packages
checking that lib directory is accessable... yes
checking Python library... /usr/lib/python2.4
checking site.py... /usr/lib/python2.4/site.py
checking site-packages... /usr/lib/python2.4/site-packages
checking for python lib/config location... /usr/lib/python2.4/config
checking that lib/config directory is accessable... yes
checking libpython2.4.a is there... yes
checking configuration Makefile is there... yes
checking module configuration table is there... yes
checking original Python there... yes
checking for --with-includes... no
checking for compiler based include directory... no
checking MPI_COMPILE_FLAGS... no
checking MPI_LD_FLAGS... no
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking mpi.h usability... yes
checking mpi.h presence... yes
checking for mpi.h... yes
checking Python.h usability... yes
checking Python.h presence... yes
checking for Python.h... yes
checking Python CC... gcc -pthread
checking Python CFLAGS... -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC
checking Python INCLUDEPY... /usr/include/python2.4
checking Python OPT... -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC
checking Python LDFLAGS... -L/usr/kerberos/lib
checking Python LINKFORSHARED... -Xlinker -export-dynamic
checking Python LDSHARED... gcc -pthread -shared
checking Python BLDSHARED... gcc -pthread -shared
checking Python LOCALMODLIBS...
checking Python BASEMODLIBS...
checking Python LIBS... -lpthread -ldl -lutil
checking Python LDLAST...
checking Python library options... -L/usr/lib/python2.4/config -lpython2.4 -L/usr/kerberos/lib -Xlinker -export-dynamic -lpthread -ldl -lutil
checking for --with-dbfork... no
checking for --with-debug... no
checking python.exp file... no
checking sysconf(_SC_NPROCESSORS_CONF)... yes
checking for ANSI C header files... (cached) yes
checking local processor count for testing... 2
checking for --with-libs... no
checking for pow in -lm... yes
checking for PyOS_StdioReadline... yes
checking for setlinebuf... yes
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
checking Python links as is... yes
checking for MPI capability... yes
checking for Py_ReadOnlyBytecodeFlag... no
checking for MPI_Initialized()... yes
checking for MPI_Finalized()... yes
checking for MPI File operations (ROMIO)... yes
checking for AIX dynamic load... no
checking pm_util.h usability... no
checking pm_util.h presence... no
checking for pm_util.h... no
checking for mpc_flush... no
checking for mpc_isatty... no
checking for Electric Fence enabled?... no

I suspect that I am missing something (mpc_isatty?) and will poke around to see what I need to do to get it.

-Doug

> I had this problem with pyMPI-2.0b0, at which point I was directed to use
> the CVS version, but the fix should be in b4. The other missing piece is the
> compile options, in that configure needs to be run:
>
> ./configure --prefix=/usr --with-isatty
>
> Obviously --prefix=/usr depends on where the ultimate install will be,
> but --with-isatty makes sure that the interactive console will work. There
> are some posts in November 2004 (see Julian Cook, Mike Steder) that discuss
> a similar problem. There is another configure option with regard to
> newlines, but normally that is not needed. Otherwise on Monday Pat Miller
> should be able to give better advice.
>
> Julian