pympi-users Mailing List for MPI Python
Status: Alpha
Brought to you by: patmiller
Archive (messages per month):

2003: Mar (1)
2004: May (1), Oct (2), Nov (3), Dec (8)
2005: Jan (9), Feb (4), Mar (3), Apr (1), Jun (2), Jul (16), Aug (11), Sep (10)
2006: Feb (4), Jun (5), Jul (7), Aug (2), Nov (7), Dec (4)
2007: Mar (1), Apr (4), May (4), Jun (3)
2008: Oct (2)
2009: Feb (2)
From: Julian C. <jul...@ya...> - 2009-02-06 12:06:39

I have not actually looked at the code involved, but at a guess: pyMPI was originally built to work with MPICH 1.x at LLNL. In theory it will compile with later versions of MPI or Open MPI, but it may simply not recognise the version in question, and so it is asserting. I think it will likely run if the assert is removed from the code.

Julian Cook

----- Original Message ----
From: Luigi Paioro <lu...@la...>
To: pym...@li...
Sent: Thursday, February 5, 2009 5:21:19 AM
Subject: [Pympi-users] Problem with pyMPI and Open MPI

Dear pyMPI developers,

I've compiled pyMPI 2.5b0 with the Open MPI 1.3 libraries. Everything seemed to work properly, but when I imported the mpi module I got this error:

    Python 2.5.1 (pyMPI 2.5b0) on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import mpi
    pyMPI_init.c: 482
    Assertion version == MPI_VERSION && subversion == MPI_SUBVERSION failed at line 482

What's the problem? Can you give me a hint? Thanks.

--
Luigi Paioro
INAF - IASF Milano
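For context, the failing assertion in pyMPI_init.c compares the version pair the MPI library reports at run time (presumably obtained via the standard MPI_Get_version() call) against the MPI_VERSION/MPI_SUBVERSION constants from the mpi.h that pyMPI was compiled against. The logic amounts to the following Python sketch; the constant values below are illustrative, not taken from any particular mpi.h:

```python
# Stand-ins for the compile-time constants from the mpi.h used to
# *build* pyMPI (illustrative values, not from any real header).
MPI_VERSION, MPI_SUBVERSION = 1, 2

def check_mpi_version(runtime_version, runtime_subversion):
    # Mirrors the C assert:
    #   version == MPI_VERSION && subversion == MPI_SUBVERSION
    return (runtime_version == MPI_VERSION and
            runtime_subversion == MPI_SUBVERSION)

print(check_mpi_version(1, 2))  # same MPI as at build time -> True
print(check_mpi_version(2, 0))  # e.g. a newer MPI at run time -> False
```

So building against one MPI implementation and running against another (here, Open MPI 1.3 reporting a different version pair) trips the assert even when the library would otherwise work.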
From: Luigi P. <lu...@la...> - 2009-02-05 10:59:39

Dear pyMPI developers,

I've compiled pyMPI 2.5b0 with the Open MPI 1.3 libraries. Everything seemed to work properly, but when I imported the mpi module I got this error:

    Python 2.5.1 (pyMPI 2.5b0) on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import mpi
    pyMPI_init.c: 482
    Assertion version == MPI_VERSION && subversion == MPI_SUBVERSION failed at line 482

What's the problem? Can you give me a hint? Thanks.

--
Luigi Paioro
INAF - IASF Milano
Via Bassini 15, I-20133 Milano, Italy
Phone (+39) 02 23 699 470
Fax (+39) 02 26 660 17
Site http://www.iasf-milano.inaf.it/luigi/
From: Julian C. <jul...@ya...> - 2008-10-04 13:42:50

Ricardo,

pyMPI was originally built to work with MPICH 1. It was never designed to work with MPICH 2, but in most cases it appears to run OK. The last conversation on this topic with Pat Miller (the creator of pyMPI) was in late 2006:

"I haven't built pyMPI with MPICH2, though I had expected that it would work. I have a parallel Fedora machine at home so I can try this over the weekend and let you know if I can repeat your results. Cheers, Pat"

I have not talked to Pat Miller for some time, and he has not made changes to pyMPI since he left LLNL. The cause of your first error could probably be found by using strace (assuming you are on Linux; you didn't say). The 14-hour error looks like it might be due to a memory leak, which could easily be in MPICH itself. The only way to prove that is to run a similar long-running C-based MPICH2 process. You can actually attach to a running Python process and debug it; you just need the right development tool.

regards
Julian Cook

----- Original Message ----
From: Ricardo Mata <ric...@gm...>
To: pym...@li...
Sent: Friday, October 3, 2008 6:30:40 AM
Subject: [Pympi-users] make check fails

Hello,

I have two questions, although the answer to the first could help with the second. I know the project hasn't been very active, but I would really appreciate any help I can get...

I recently downloaded pyMPI and installed it on an Intel Quad Core machine with mpich2-1.0.7. After installation, I tried to run "make check", but it just hangs on the "pyMPI_request_001" test, consuming 100% of one of the processors. So, the first question is: where could this be coming from?

Now the second question. Desperate as I was to get my project going, I used pyMPI in my Python program, which worked out fine. It's a call-an-external-program, read information, do something, rinse-and-repeat kind of thing, so I only used gather and scatter commands for the communication. However, it always gets stuck after 14 hours or so. Up till then it does its job (correctly) about 300 times. Is there any way to debug this? Or any clues on what could be causing it? (Note: it hangs with four pyMPI processes running close to 100% on each processor, but with nothing happening. It has hung twice, each time at a different location.)

Best wishes,
Ricardo
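The memory-leak theory can be checked cheaply from the Python side: log the process's peak resident set size once per gather/scatter round and watch whether it climbs steadily. A minimal sketch using only the standard library; log_rss is a hypothetical helper, not part of pyMPI:

```python
import resource
import sys

def log_rss(iteration):
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    sys.stderr.write("iter %d: peak RSS %d\n" % (iteration, rss))
    return rss

# Call this once per round in the main loop; a value that climbs without
# bound over hundreds of rounds points at a leak, possibly inside the
# MPI library itself rather than in the Python code.
log_rss(0)
```

Because each rank logs independently, writing to stderr (or a per-rank file) also records which iteration each process last completed, which helps localize a hang that strikes at a different place each run.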
From: Ricardo M. <ric...@gm...> - 2008-10-03 10:30:47

Hello,

I have two questions, although the answer to the first could help with the second. I know the project hasn't been very active, but I would really appreciate any help I can get...

I recently downloaded pyMPI and installed it on an Intel Quad Core machine with mpich2-1.0.7. After installation, I tried to run "make check", but it just hangs on the "pyMPI_request_001" test, consuming 100% of one of the processors. So, the first question is: where could this be coming from?

Now the second question. Desperate as I was to get my project going, I used pyMPI in my Python program, which worked out fine. It's a call-an-external-program, read information, do something, rinse-and-repeat kind of thing, so I only used gather and scatter commands for the communication. However, it always gets stuck after 14 hours or so. Up till then it does its job (correctly) about 300 times. Is there any way to debug this? Or any clues on what could be causing it? (Note: it hangs with four pyMPI processes running close to 100% on each processor, but with nothing happening. It has hung twice, each time at a different location.)

Best wishes,
Ricardo
From: Jairo S. <jai...@gm...> - 2007-06-28 00:01:23

Hi pympi-users!

My question is perhaps not a real problem with pympi; it is more about programming and complexity. I hope that you can help me! The next piece of code works fine just one time (indentation restored from the flattened original):

    if (mpi.rank==0):
        for i in range(1,mpi.size):
            mpi.send(m_matrix,i,tag=98)
        dml=[]
        for l in range(1,mpi.size):
            m_menor,status=mpi.recv(tag=97)
            dml.append(m_menor)
        dmg=[]
        dmg.append(min(dml))
        m_matrix=upgma(m_matrix,m_nom,dmg)
        for n in range(len(m_matrix[0])):
            print m_matrix[0][n]
        print m_matrix[1]
    else:
        e_matrix,status=mpi.recv(tag=98)
        vvif=vivf(e_matrix)
        print vvif[0],vvif[1],mpi.rank
        e_menor=upgmaNuc.menor(e_matrix,vvif[0],vvif[1])
        mpi.send(e_menor,0,tag=97)

I need to repeat the same code "n times" in the master (rank==0) and exactly "n times" in the slaves (rank!=0), but when I try to repeat it "n times" it doesn't work; it returns "UnboundLocalError: local variable 'pos' referenced before assignment". I couldn't understand this error.

    if (mpi.rank==0):
        for n in range(3):
            for i in range(1,mpi.size):
                mpi.send(m_matrix,i,tag=98)
            dml=[]
            for l in range(1,mpi.size):
                m_menor,status=mpi.recv(tag=97)
                dml.append(m_menor)
            dmg=[]
            dmg.append(min(dml))
            m_matrix=upgma(m_matrix,m_nom,dmg)
            for n in range(len(m_matrix[0])):
                print m_matrix[0][n]
            print m_matrix[1]
    else:
        for n in range(3):
            e_matrix,status=mpi.recv(tag=98)
            vvif=vivf(e_matrix)
            e_menor=upgmaNuc.menor(e_matrix,vvif[0],vvif[1])
            mpi.send(e_menor,0,tag=97)

Thanks,

--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2007
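A note on the error itself: UnboundLocalError means a function read a local variable before any assignment to it ran. Since pos does not appear in the code shown, it is almost certainly raised inside upgma or upgmaNuc.menor when the second iteration feeds them data that skips the branch assigning pos. A minimal, self-contained repro of the pattern; find_min and pos here are hypothetical stand-ins, not the original helpers:

```python
def find_min(values):
    # 'pos' is only assigned inside the loop body, so if the condition
    # never fires (e.g. the list is empty, or was reduced to nothing by
    # a previous iteration), 'pos' is never bound before the return.
    best = float("inf")
    for i, v in enumerate(values):
        if v < best:
            best = v
            pos = i
    return pos  # UnboundLocalError when the loop assigned nothing

print(find_min([3, 1, 2]))  # first-iteration-style input: works
try:
    find_min([])            # later-iteration-style input: fails
except UnboundLocalError as e:
    print("reproduced:", e)
```

The usual fixes are to initialize pos before the loop or to raise an explicit error for input that cannot produce a result. Separately, the master code reuses n as the index of its inner print loop; in Python this does not derail the outer for-n loop, but renaming one of them would make the code easier to follow.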
From: Jairo S. <jai...@gm...> - 2007-06-17 23:27:42

You need mpich!

2007/6/16, José Maria Silveira Neto <sil...@gm...>:
>
> Hi, I'm a new user of pympi. I'm a computer science student (in
> parallel and concurrent programming) and a Python enthusiast. I did a
> few tests with pympi on Parallel Knoppix (great liveCD distro) that
> comes with pympi working.
>
> But now I need to install it on my own computer and on the computers
> of my university's lab, but I'm not having success, therefore I'm
> here. :-) I'm using Linux Ubuntu 7.04.
>
> I installed some packages (openmpi, mpichpython, python-mpi, etc) and
> got some executables that I have used in PK (Parallel Python) like
> mpirun, mpicc, mpipython. But when I enter mpipython:
>
> >>> import mpi
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ImportError: No module named mpi
>
> Any suggestions?
>
> --
> Silveira Neto
> http://www.eupodiatamatando.com

--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2007
From: <sil...@gm...> - 2007-06-17 01:26:51

Hi, I'm a new user of pympi. I'm a computer science student (in parallel and concurrent programming) and a Python enthusiast. I did a few tests with pympi on Parallel Knoppix (great liveCD distro) that comes with pympi working.

But now I need to install it on my own computer and on the computers of my university's lab, but I'm not having success, therefore I'm here. :-) I'm using Linux Ubuntu 7.04.

I installed some packages (openmpi, mpichpython, python-mpi, etc) and got some executables that I have used in PK (Parallel Python) like mpirun, mpicc, mpipython. But when I enter mpipython:

    >>> import mpi
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named mpi

Any suggestions?

--
-------
Silveira Neto
http://www.eupodiatamatando.com
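This ImportError usually means the interpreter being run is not the pyMPI build: pyMPI ships its own interpreter executable (pyMPI) with the mpi module built in, so a stock python, or an mpipython from a different package, has no mpi to import. A quick diagnostic sketch, pure Python and nothing pyMPI-specific:

```python
import sys

# Which interpreter binary is actually executing this script?
# Under pyMPI this should point at the pyMPI executable, not /usr/bin/python.
print(sys.executable)

try:
    import mpi  # only succeeds inside an interpreter built with MPI support
    print("mpi module found")
except ImportError:
    print("no mpi module: run this under the pyMPI executable instead")
```

Running the same two checks under each candidate interpreter (python, mpipython, pyMPI) quickly shows which one, if any, actually provides the mpi module.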
From: Julian C. <jul...@ya...> - 2007-05-30 20:42:49

Assuming this compiles correctly, the required flags for an interactive build are per this example:

    $ CC=mpicc; ./configure -prefix=<inst path> --with-includes=-I<mpi path>/include --with-isatty --with-prompt-nl

Jairo Serrano <jai...@gm...> wrote:

Ben, use pyMPI with mpirun or mpiexec like this:

    $ mpiexec -n 8 pyMPI <a_python_program.py>

cordially, Jairo Serrano

2007/5/30, Ben Hitt <bh...@wj...>:

Hello all. I have an 8-node MPICH2 cluster on which the basic tests (mpiexec -n 8 hostname, mpiexec -n 8 cpi and mpiexec -n 8 hellow) all work normally. I've installed pyMPI and it runs when called as $pyMPI on all 8 machines. However, when I run $mpiexec -n 8 pyMPI or mpirun -np 8 pyMPI, no prompt, '$' or '>>>', is returned. When I query each node ($ssh <node> ps -e | grep py) they all return pyMPI as an active process. I've let the command sit for the better part of an hour with no response. The same thing happens irrespective of the value given to -n. The use of a machinefile likewise has no impact. Any ideas?

Ben
From: Julian C. <jul...@ya...> - 2007-05-30 20:38:00

You should be able to use pympi interactively, but it needs to be built properly. First test the non-interactive method listed by Jairo. If that works, the most likely problem is that the build is missing some flags relating to enabling the newline prompt etc. There are some previous posts in pympi-users relating to this, but sourceforge isn't letting me look at them right now.

Julian Cook

Jairo Serrano <jai...@gm...> wrote:

Ben, use pyMPI with mpirun or mpiexec like this:

    $ mpiexec -n 8 pyMPI <a_python_program.py>

cordially, Jairo Serrano

2007/5/30, Ben Hitt <bh...@wj...>:

Hello all. I have an 8-node MPICH2 cluster on which the basic tests (mpiexec -n 8 hostname, mpiexec -n 8 cpi and mpiexec -n 8 hellow) all work normally. I've installed pyMPI and it runs when called as $pyMPI on all 8 machines. However, when I run $mpiexec -n 8 pyMPI or mpirun -np 8 pyMPI, no prompt, '$' or '>>>', is returned. When I query each node ($ssh <node> ps -e | grep py) they all return pyMPI as an active process. I've let the command sit for the better part of an hour with no response. The same thing happens irrespective of the value given to -n. The use of a machinefile likewise has no impact. Any ideas?

Ben
From: Jairo S. <jai...@gm...> - 2007-05-30 16:52:19

Ben, use pyMPI with mpirun or mpiexec like this:

    $ mpiexec -n 8 pyMPI <a_python_program.py>

cordially,
Jairo Serrano

2007/5/30, Ben Hitt <bh...@wj...>:
>
> Hello all
> I have an 8 node MPICH2 cluster that the basic tests (mpiexec -n 8
> hostname, mpiexec -n 8 cpi and mpiexec -n 8 hellow) all work normally.
> I've installed pyMPI and it runs when called as $pyMPI on all 8
> machines. However when I run $mpiexec -n 8 pyMPI or mpirun -np 8 pyMPI
> no prompt, '$' or '>>>', is returned. When I query each node
> '$ssh <node> ps -e | grep py' they all return pyMPI as an active
> process. I've let the command sit for the better part of an hour with
> no response. The same thing happens irrespective of values given to
> -n. The use of machinefile likewise has no impact.
> Any ideas?
> Ben

--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2007
From: Ben H. <bh...@wj...> - 2007-05-30 15:44:23

Hello all,

I have an 8-node MPICH2 cluster on which the basic tests (mpiexec -n 8 hostname, mpiexec -n 8 cpi and mpiexec -n 8 hellow) all work normally. I've installed pyMPI and it runs when called as $pyMPI on all 8 machines. However, when I run $mpiexec -n 8 pyMPI or mpirun -np 8 pyMPI, no prompt, '$' or '>>>', is returned. When I query each node ($ssh <node> ps -e | grep py) they all return pyMPI as an active process. I've let the command sit for the better part of an hour with no response. The same thing happens irrespective of the value given to -n. The use of a machinefile likewise has no impact. Any ideas?

Ben

--
Ben A. Hitt Ph.D.
Director
Schenk Center For Informatic Sciences
Wheeling Jesuit University
316 Washington Ave
Wheeling, WV 26003
(304) 243-2520
bh...@wj...

"I say unto you: a man must have chaos yet within him to be able to give birth to a dancing star."
Thus Spake Zarathustra - Nietzsche
From: Julian C. <jul...@ya...> - 2007-04-10 00:36:31

Jairo,

I agree that testing with mpirun directly is a good comparison. However, "connection refused" usually means that the login information was never found. The only way to get more information is to use something like strace (on Linux) or truss (on Solaris) and find out exactly which login/config files were opened during initialization. All login is performed in the C MPI layer though (as far as I remember).

Julian Cook

----- Original Message ----
From: Jairo Serrano <jai...@gm...>
To: pym...@li...
Sent: Sunday, April 8, 2007 6:23:35 PM
Subject: [Pympi-users] port 544: connection refused

Hi again! Executing "mpirun -np 2 -machinefile machines helloworld" works fine, but if I try to run "mpirun -np 2 -machinefile machines pyMPI helloworld.py" then I get the following error:

    connect to address 192.168.1.3 port 544: connection refused
    trying krb4 rsh...
    connect to address 192.168.1.3 port 544: connection refused
    trying normal rsh (/usr/bin/rsh)
    cluster2 connection refused
    p0_2640: p4_error: Child process exited while making connection to remote process on cluster2:0
    p0_2640: (33.040355) net send: could not write to fd=4, errno=32

I'm working on 2 virtual machines under VMware Workstation 5.5, each with Fedora 5, mpich-1.2.7p1 and pyMPI-2.4b. What could be the problem? Compatibility between mpich and pympi?
From: Jairo S. <jai...@gm...> - 2007-04-08 22:23:42

Hi again! Executing "mpirun -np 2 -machinefile machines helloworld" works fine, but if I try to run "mpirun -np 2 -machinefile machines pyMPI helloworld.py" then I get the following error:

    connect to address 192.168.1.3 port 544: connection refused
    trying krb4 rsh...
    connect to address 192.168.1.3 port 544: connection refused
    trying normal rsh (/usr/bin/rsh)
    cluster2 connection refused
    p0_2640: p4_error: Child process exited while making connection to remote process on cluster2:0
    p0_2640: (33.040355) net send: could not write to fd=4, errno=32

I'm working on 2 virtual machines under VMware Workstation 5.5, each with Fedora 5, mpich-1.2.7p1 and pyMPI-2.4b. The IP configuration is:

    cluster1 machine: ip 192.168.1.2, mask 255.255.255.0, gateway 192.168.1.1, primary DNS 192.168.1.2
    cluster2 machine: ip 192.168.1.3, mask 255.255.255.0, gateway 192.168.1.1, primary DNS 192.168.1.2

What could be the problem? Compatibility between mpich and pympi?

--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2007
From: Julian C. <jul...@ya...> - 2007-04-03 09:31:16

Jairo,

I don't remember any prior discussion of using pympi with Rocks, so you may be the first to try this.

1. Usually pympi is located in the same directory as the python executable, as long as mpirun can find it there.
2. I took a quick look at the Rocks documentation. It seems that this comes under "Adding packages to compute nodes"?
3. If so, then pympi would need to be part of an RPM, since Rocks clusters appear to be updated by distributing the RPMs out to each node.
4. pympi has never been distributed as a binary, so I don't think it's ever been available as an rpm, but you could definitely find someone who knows how to create one.
5. Once you have an rpm, it needs to be placed in the contrib directory tree and named in the extend-compute.xml file as a new package. After this the distribution is rebuilt.
6. If you are already distributing the contents of the /usr/bin and python directories, you could just place pympi in /usr/bin or wherever python is located and it should work (I think).

Let me know if you agree with this.

regards
Julian Cook

----- Original Message ----
From: Jairo Serrano <jai...@gm...>
To: pym...@li...
Sent: Sunday, April 1, 2007 11:58:44 AM
Subject: [Pympi-users] Where to install pympi in ROCKS cluster???

Hi! I've installed pyMPI on a non-Rocks cluster, but now I need to install it over a Rocks Cluster, and I have no idea where to install it or which parameters to use with "./configure". Perhaps some pympi-users have installed it there and could help me. Thanks a lot.

PD: Mr. Pat Miller, did you finally get pympi installed with mpich2?
From: Jairo S. <jai...@gm...> - 2007-04-01 15:58:44

Hi! I've installed pyMPI on a non-Rocks cluster, but now I need to install it over a Rocks Cluster, and I have no idea where to install it or which parameters to use with "./configure". Perhaps some pympi-users have installed it there and could help me. Thanks a lot.

PD: Mr. Pat Miller, did you finally get pympi installed with mpich2?

--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2007
From: Julian C. <jul...@ya...> - 2007-03-13 13:13:06

Pat, I sent this to you in Jan; I'm re-sending to the list:

Have you spent any time looking at MPI 2 yet? I noticed that all the P4 infrastructure has been replaced by python daemons. It doesn't look like that has any bearing on pympi though, because pympi would be launched by mpiexec - which looks like a C program?

I'm currently looking at compute-farm type possibilities for some of the GFI software, though it may end up being java based, due to some of the servers being java. Also, a single queue/multiple worker approach is better for portfolio analysis.

Currently I'm using windows for everything at work right now, so have not used pympi at all recently.

FYI: Someone at USC created a new single queue/multiple worker python library: http://www.parallelpython.com/

regards

Julian
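The single queue/multiple worker pattern mentioned here can be sketched in pure Python with the standard library; worker and the squaring task below are illustrative stand-ins, not code from parallelpython or pympi:

```python
import queue
import threading

def worker(tasks, results):
    # Each worker pulls independent tasks from one shared queue,
    # so faster workers naturally take more of the load.
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work
            break
        results.put(item * item)  # stand-in for the real per-task work
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for w in workers:
    w.start()
for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)               # one sentinel per worker
for w in workers:
    w.join()
print(sorted(results.queue))      # squares of 0..9, order-independent
```

Unlike the fixed rank-0/rank-N split in the pympi examples above, nothing here is assigned to a particular worker in advance, which is why the approach suits uneven workloads such as portfolio analysis.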
From: Pat M. <pat...@gm...> - 2006-12-02 01:03:17

I'll look at some of these MPICH issues this weekend. The "checking NumArray" is supposed to add support for Numeric/NumArray types (so that those arrays get sent with native MPI instead of pickled -- this is a *huge* savings). I have a MacOSX computer, but it is older (PPC), so I can't quite check in the same environment.

Cheers,
Pat

On 12/1/06, Conor Robinson <con...@gm...> wrote:
> Hello pympi users,
>
> I've successfully installed mpich2-1.0.4p1 as well as mpich-1.2.7p1 in
> separate directories under /usr/local/mpich<version>, added either of
> the two to my path, and tested both with cpi. All is well...
>
> Macintosh:~/desktop/mpich2-1.0.4p1/examples conor$ mpiexec -n 4 ./cpi
> Process 0 of 4 is on Macintosh.local
> Process 2 of 4 is on Macintosh.local
> Process 1 of 4 is on Macintosh.local
> Process 3 of 4 is on Macintosh.local
> pi is approximately 3.1415926544231239, Error is 0.0000000008333307
> wall clock time = 0.002964
>
> However, when I try to configure pympi with either mpich-1.2.7 or
> mpich2 I get the same issues. My first major issue is that in Python
> 2.4 libpython2.4.a (configure:4873: WARNING: If you get link errors
> add a --with-libs=-L/path/to/libpython2.4.a) does not exist, nor does
> a dynamic library. I looked this up at python.org and it seems to be
> the case (http://mail.python.org/pipermail/pythonmac-sig/2006-July/017970.html),
> however I see others on the mailing archive are running 2.4??? How
> might I fix this, maybe without a new version of Python and installing
> all my other libs again?
>
> Also, the log file says that I'm running an i486, when I believe that
> this chip is referred to as an i686 (sorry, I'm new to the Intel
> naming culture). I have an alternate install of gcc 4.0.3 for the
> 64-bit Intel i686 and SSE (I have given this compiler priority in my
> path, but have used the gcc 4.0.1 system install with the same
> results). I see that there are also processor type mismatches in the
> log file (example near configure:5889).
>
> Final notes:
>
> configure:4671: checking Numarray?
> configure:4674: result:
>
> seems to be no result when looking for numarray, however it exists and functions?
>
> configure:3270:28: error: ac_nonexistent.h: No such file or directory
>
> Is this a problem?
>
> Any thoughts on the above would be greatly appreciated.
>
> thanks,
> Conor
>
> Here is a copy of config.log:
>
> This file contains any messages produced by compilers while
> running configure, to aid debugging if configure makes a mistake.
>
> It was created by configure, which was
> generated by GNU Autoconf 2.54. Invocation command line was
>
> $ ./configure
>
> ## --------- ##
> ## Platform. ##
> ## --------- ##
>
> hostname = Macintosh.local
> uname -m = i386
> uname -r = 8.8.1
> uname -s = Darwin
> uname -v = Darwin Kernel Version 8.8.1: Mon Sep 25 19:42:00 PDT 2006;
> root:xnu-792.13.8.obj~1/RELEASE_I386
>
> /usr/bin/uname -p = i386
> /bin/uname -X = unknown
>
> /bin/arch = unknown
> /usr/bin/arch -k = unknown
> /usr/convex/getsysinfo = unknown
> hostinfo = Mach kernel version:
> Darwin Kernel Version 8.8.1: Mon Sep 25 19:42:00 PDT 2006;
> root:xnu-792.13.8.obj~1/RELEASE_I386
> Kernel configured for up to 4 processors.
> 4 processors are physically available.
> 4 processors are logically available.
> Processor type: i486 (Intel 80486)
> Processors active: 0 1 2 3
> Primary memory available: 4.00 gigabytes
> Default processor set: 54 tasks, 172 threads, 4 processors
> Load average: 0.37, Mach factor: 3.62
> /bin/machine = unknown
> /usr/bin/oslevel = unknown
> /bin/universe = unknown
>
> PATH: /usr/local/mpich2-install/bin
> PATH: /usr/local/gcc4.0/bin
> PATH: /sw/bin
> PATH: /Library/Frameworks/Python.framework/Versions/Current/bin
> PATH: /bin
> PATH: /sbin
> PATH: /usr/bin
> PATH: /usr/sbin
>
> ## ----------- ##
> ## Core tests. ##
> ## ----------- ##
>
> configure:1290: checking build system type
> configure:1308: result: i386-apple-darwin8.8.1
> configure:1316: checking host system type
> configure:1330: result: i386-apple-darwin8.8.1
> configure:1397: checking for a BSD-compatible install
> configure:1451: result: /usr/bin/install -c
> configure:1462: checking whether build environment is sane
> configure:1505: result: yes
> configure:1538: checking for gawk
> configure:1567: result: no
> configure:1538: checking for mawk
> configure:1567: result: no
> configure:1538: checking for nawk
> configure:1567: result: no
> configure:1538: checking for awk
> configure:1554: found /usr/bin/awk
> configure:1564: result: awk
> configure:1574: checking whether make sets ${MAKE}
> configure:1594: result: yes
> configure:1802: checking for ranlib
> configure:1818: found /usr/bin/ranlib
> configure:1829: result: ranlib
> configure:1842: checking host overrides
> configure:1877: result: no
> configure:1882: checking fatal error on cancel of isend (--with-bad-cancel)
> configure:1897: result: no
> configure:1912: checking Assume stdin is interactive (--with-isatty)
> configure:1934: result:
> configure:1948: checking Append a newline to prompt (--with-prompt-nl)
> configure:1970: result:
> configure:1989: checking for mpcc
> configure:2018: result: no
> configure:2028: checking for mpxlc
> configure:2057: result: no
> configure:2067: checking for mpiicc
> configure:2096: result: no
> configure:2106: checking for mpicc
> configure:2122: found /usr/local/mpich2-install/bin/mpicc
> configure:2132: result: mpicc
> configure:2458: checking for C compiler version
> configure:2461: mpicc --version </dev/null >&5
> i686-apple-darwin8-gcc-4.0.3 (GCC) 4.0.3
> Copyright (C) 2006 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions. There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> configure:2464: $? = 0
> configure:2466: mpicc -v </dev/null >&5
> mpicc for 1.0.4p1
> Using built-in specs.
> Target: i686-apple-darwin8
> Configured with: ../gcc-4.0.3/configure --prefix=/usr/local/gcc4.0
> --disable-checking --enable-languages=c,objc,c++,f95
> --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
> --build=i686-apple-darwin8 --with-arch=pentium-m --with-tune=prescott
> --program-prefix= --host=i686-apple-darwin8
> --target=i686-apple-darwin8
> Thread model: posix
> gcc version 4.0.3
> /usr/local/gcc4.0/libexec/gcc/i686-apple-darwin8/4.0.3/collect2
> -dynamic -arch i686 -weak_reference_mismatches non-weak -o a.out
> -lcrt1.o /usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/crt2.o
> -L/usr/local/mpich2-install/lib
> -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3
> -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/../../.. -lpmpich
> -lmpich -lgcc -lgcc_eh -lSystemStubs -lmx -lSystem
> /usr/bin/ld: Undefined symbols:
> _main
> collect2: ld returned 1 exit status
> configure:2469: $? = 1
> configure:2471: mpicc -V </dev/null >&5
> i686-apple-darwin8-gcc-4.0.3: '-V' must come at the start of the command line
> configure:2474: $? = 1
> configure:2494: checking for C compiler default output
> configure:2497: mpicc conftest.c >&5
> configure:2500: $? = 0
> configure:2534: result: a.out
> configure:2539: checking whether the C compiler works
> configure:2563: result: yes
> configure:2570: checking whether we are cross compiling
> configure:2572: result: yes
> configure:2575: checking for suffix of executables
> configure:2577: mpicc -o conftest conftest.c >&5
> configure:2580: $? = 0
> configure:2603: result:
> configure:2609: checking for suffix of object files
> configure:2627: mpicc -c conftest.c >&5
> configure:2630: $? = 0
> configure:2649: result: o
> configure:2653: checking whether we are using the GNU C compiler
> configure:2674: mpicc -c conftest.c >&5
> configure:2677: $? = 0
> configure:2680: test -s conftest.o
> configure:2683: $? = 0
> configure:2695: result: yes
> configure:2701: checking whether mpicc accepts -g
> configure:2719: mpicc -c -g conftest.c >&5
> configure:2722: $? = 0
> configure:2725: test -s conftest.o
> configure:2728: $? = 0
> configure:2738: result: yes
> configure:2755: checking for mpicc option to accept ANSI C
> configure:2812: mpicc -c -g -O2 conftest.c >&5
> configure:2815: $? = 0
> configure:2818: test -s conftest.o
> configure:2821: $? = 0
> configure:2838: result: none needed
> configure:2856: mpicc -c -g -O2 conftest.c >&5
> conftest.c:2: error: syntax error before 'me'
> configure:2859: $? = 1
> configure: failed program was:
> #ifndef __cplusplus
> choke me
> #endif
> configure:2976: checking for style of include used by make
> configure:3004: result: GNU
> configure:3032: checking dependency style of mpicc
> configure:3094: result: none
> configure:3114: checking for an ANSI C-conforming const
> configure:3178: mpicc -c -g -O2 conftest.c >&5
> configure:3181: $? = 0
> configure:3184: test -s conftest.o
> configure:3187: $? = 0
> configure:3197: result: yes
> configure:3208: checking for mpicc is really C++
> configure:3215: checking how to run the C preprocessor
> configure:3241: mpicc -E conftest.c
> configure:3247: $? = 0
> configure:3274: mpicc -E conftest.c
> configure:3270:28: error: ac_nonexistent.h: No such file or directory
> configure:3280: $? = 1
> configure: failed program was:
> #line 3269 "configure"
> #include "confdefs.h"
> #include <ac_nonexistent.h>
> configure:3317: result: mpicc -E
> configure:3332: mpicc -E conftest.c
> configure:3338: $? = 0
> configure:3365: mpicc -E conftest.c
> configure:3361:28: error: ac_nonexistent.h: No such file or directory
> configure:3371: $?
= 1 > configure: failed program was: > #line 3360 "configure" > #include "confdefs.h" > #include <ac_nonexistent.h> > configure:3411: checking for egrep > configure:3421: result: grep -E > configure:3448: result: no > configure:3472: checking for sed > configure:3490: found /usr/bin/sed > configure:3502: result: /usr/bin/sed > configure:3516: checking for grep > configure:3534: found /usr/bin/grep > configure:3546: result: /usr/bin/grep > configure:3561: checking for mpiCC > configure:3577: found /usr/local/mpich2-install/bin/mpiCC > configure:3587: result: mpiCC > configure:3769: checking for C++ compiler version > configure:3772: mpiCC --version </dev/null >&5 > i686-apple-darwin8-gcc-4.0.3 (GCC) 4.0.3 > Copyright (C) 2006 Free Software Foundation, Inc. > This is free software; see the source for copying conditions. There is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > > configure:3775: $? = 0 > configure:3777: mpiCC -v </dev/null >&5 > mpicc for 1.0.4p1 > Using built-in specs. > Target: i686-apple-darwin8 > Configured with: ../gcc-4.0.3/configure --prefix=/usr/local/gcc4.0 > --disable-checking --enable-languages=c,objc,c++,f95 > --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ > --build=i686-apple-darwin8 --with-arch=pentium-m --with-tune=prescott > --program-prefix= --host=i686-apple-darwin8 > --target=i686-apple-darwin8 > Thread model: posix > gcc version 4.0.3 > /usr/local/gcc4.0/libexec/gcc/i686-apple-darwin8/4.0.3/collect2 > -dynamic -arch i686 -weak_reference_mismatches non-weak -o a.out > -lcrt1.o /usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/crt2.o > -L/usr/local/mpich2-install/lib > -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3 > -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/../../.. -lpmpich > -lmpich -lgcc -lgcc_eh -lSystemStubs -lmx -lSystem > /usr/bin/ld: Undefined symbols: > _main > collect2: ld returned 1 exit status > configure:3780: $? 
= 1 > configure:3782: mpiCC -V </dev/null >&5 > i686-apple-darwin8-gcc-4.0.3: '-V' must come at the start of the command line > configure:3785: $? = 1 > configure:3788: checking whether we are using the GNU C++ compiler > configure:3809: mpiCC -c conftest.cc >&5 > configure:3812: $? = 0 > configure:3815: test -s conftest.o > configure:3818: $? = 0 > configure:3830: result: yes > configure:3836: checking whether mpiCC accepts -g > configure:3854: mpiCC -c -g conftest.cc >&5 > configure:3857: $? = 0 > configure:3860: test -s conftest.o > configure:3863: $? = 0 > configure:3873: result: yes > configure:3913: mpiCC -c -g -O2 conftest.cc >&5 > /var/tmp//ccl0oIip.s:42:indirect jmp without `*' > configure:3916: $? = 0 > configure:3919: test -s conftest.o > configure:3922: $? = 0 > configure:3944: mpiCC -c -g -O2 conftest.cc >&5 > configure: In function 'int main()': > configure:3936: error: 'exit' was not declared in this scope > configure:3947: $? = 1 > configure: failed program was: > #line 3931 "configure" > #include "confdefs.h" > > int > main () > { > exit (42); > ; > return 0; > } > configure:3913: mpiCC -c -g -O2 conftest.cc >&5 > /var/tmp//ccdCGafB.s:42:indirect jmp without `*' > configure:3916: $? = 0 > configure:3919: test -s conftest.o > configure:3922: $? = 0 > configure:3944: mpiCC -c -g -O2 conftest.cc >&5 > /var/tmp//ccQqF9xG.s:42:indirect jmp without `*' > configure:3947: $? = 0 > configure:3950: test -s conftest.o > configure:3953: $? 
= 0 > configure:3977: checking dependency style of mpiCC > configure:4039: result: none > configure:4071: checking for mpicc > configure:4089: found /usr/local/mpich2-install/bin/mpicc > configure:4101: result: /usr/local/mpich2-install/bin/mpicc > configure:4128: checking for mpiCC > configure:4146: found /usr/local/mpich2-install/bin/mpiCC > configure:4158: result: /usr/local/mpich2-install/bin/mpiCC > configure:4181: checking if /usr/local/mpich2-install/bin/mpicc -E -w > is a valid CPP > configure:4191: result: yes > configure:4204: checking how to run the C preprocessor > configure:4306: result: /usr/local/mpich2-install/bin/mpicc -E -w > configure:4321: /usr/local/mpich2-install/bin/mpicc -E -w conftest.c > configure:4327: $? = 0 > configure:4354: /usr/local/mpich2-install/bin/mpicc -E -w conftest.c > configure:4350:28: error: ac_nonexistent.h: No such file or directory > configure:4360: $? = 1 > configure: failed program was: > #line 4349 "configure" > #include "confdefs.h" > #include <ac_nonexistent.h> > configure:4409: checking for --with-python > configure:4422: result: no > configure:4449: checking executable > /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 > configure:4451: result: yes > configure:4500: checking for Python > configure:4502: result: > /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 > configure:4507: checking for MPIRun.exe > configure:4540: result: no > configure:4546: checking for mpirun > configure:4564: found /usr/local/mpich2-install/bin/mpirun > configure:4576: result: /usr/local/mpich2-install/bin/mpirun > configure:4585: checking for poe > configure:4618: result: no > configure:4623: checking Python version 2.2 or higher > configure:4626: result: yes > configure:4633: checking distutils? > configure:4636: result: yes > configure:4644: checking distutils works > configure:4647: result: yes > configure:4658: checking Numeric? 
> configure:4661: result: yes > configure:4671: checking Numarray? > configure:4674: result: > configure:4690: checking Python version string > configure:4693: result: 2.4 > configure:4703: checking install prefix for > /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 > configure:4710: result: /Library/Frameworks/Python.framework/Versions/2.4 > configure:4715: checking Prefix exists... > configure:4718: result: yes > configure:4748: checking for python include location > configure:4752: result: > /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > configure:4755: checking that include directory exists > configure:4758: result: yes > configure:4768: checking for python library location > configure:4772: result: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > configure:4775: checking that lib directory is accessable > configure:4778: result: yes > configure:4794: checking Python library > configure:4797: result: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4 > configure:4803: checking site.py > configure:4806: result: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site.py > configure:4821: checking site-packages > configure:4824: result: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > configure:4838: checking for python lib/config location > configure:4842: result: > /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > configure:4850: checking that lib/config directory is accessable > configure:4853: result: yes > configure:4864: checking libpython2.4.a is there > configure:4871: result: not there > configure:4873: WARNING: If you get link errors add a > --with-libs=-L/path/to/libpython2.4.a > configure:4882: checking configuration Makefile is there > configure:4885: result: yes > configure:4896: checking module configuration table is there > configure:4899: result: yes > configure:4911: checking original 
Python there > configure:4914: result: yes > configure:4925: checking for --with-includes > configure:4946: result: no > configure:4953: checking for compiler based include directory > configure:4961: result: no > configure:4965: checking MPI_COMPILE_FLAGS > configure:4968: result: no > configure:4975: checking MPI_LD_FLAGS > configure:4978: result: no > configure:4986: checking for ANSI C header files > configure:5000: /usr/local/mpich2-install/bin/mpicc -E -w > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c > configure:5006: $? = 0 > configure:5091: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > >&5 > configure: In function 'main': > configure:5084: warning: incompatible implicit declaration of built-in > function 'exit' > /var/tmp//cc1oc9X5.s:144:indirect jmp without `*' > /var/tmp//cc1oc9X5.s:159:indirect jmp without `*' > /var/tmp//cc1oc9X5.s:174:indirect jmp without `*' > configure:5094: $? = 0 > configure:5096: ./conftest > configure:5099: $? = 0 > configure:5113: result: yes > configure:5137: checking for sys/types.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for sys/stat.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? 
= 0 > configure:5169: result: yes > configure:5137: checking for stdlib.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for string.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for memory.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for strings.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for inttypes.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5137: checking for stdint.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? 
= 0 > configure:5169: result: yes > configure:5137: checking for unistd.h > configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5153: $? = 0 > configure:5156: test -s conftest.o > configure:5159: $? = 0 > configure:5169: result: yes > configure:5191: checking mpi.h usability > configure:5200: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5203: $? = 0 > configure:5206: test -s conftest.o > configure:5209: $? = 0 > configure:5218: result: yes > configure:5222: checking mpi.h presence > configure:5229: /usr/local/mpich2-install/bin/mpicc -E -w > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c > configure:5235: $? = 0 > configure:5253: result: yes > configure:5271: checking for mpi.h > configure:5278: result: yes > configure:5305: checking Python.h usability > configure:5314: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:5317: $? = 0 > configure:5320: test -s conftest.o > configure:5323: $? = 0 > configure:5332: result: yes > configure:5336: checking Python.h presence > configure:5343: /usr/local/mpich2-install/bin/mpicc -E -w > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c > configure:5349: $? 
= 0 > configure:5367: result: yes > configure:5385: checking for Python.h > configure:5392: result: yes > configure:5418: checking Python CC > configure:5421: result: gcc > configure:5430: checking Python CFLAGS > configure:5433: result: -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double > -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g > configure:5442: checking Python INCLUDEPY > configure:5445: result: > /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > configure:5454: checking Python OPT > configure:5457: result: -DNDEBUG -g > configure:5466: checking Python LDFLAGS > configure:5469: result: -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -g > configure:5478: checking Python LINKFORSHARED > configure:5481: result: -u _PyMac_Error Python.framework/Versions/2.4/Python > configure:5492: checking Python LDSHARED > configure:5495: result: gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -g -bundle -undefined dynamic_lookup > configure:5504: checking Python BLDSHARED > configure:5507: result: gcc -arch ppc -arch i386 -isysroot > /Developer/SDKs/MacOSX10.4u.sdk -g -bundle -undefined dynamic_lookup > configure:5517: checking Python LOCALMODLIBS > configure:5520: result: > configure:5529: checking Python BASEMODLIBS > configure:5532: result: > configure:5541: checking Python LIBS > configure:5544: result: -ldl > configure:5553: checking Python LDLAST > configure:5556: result: > configure:5560: checking Python library options > configure:5563: result: > -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl > configure:5592: checking for --with-debug > configure:5612: result: no > configure:5626: checking python.exp file > configure:5635: result: no > configure:5639: checking sysconf(_SC_NPROCESSORS_CONF) > configure:5660: gcc -c -g -O2 > 
-I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure: In function 'main': > configure:5651: error: '_SC_NPROCESSORS_CONF' undeclared (first use in > this function) > configure:5651: error: (Each undeclared identifier is reported only once > configure:5651: error: for each function it appears in.) > configure:5651: warning: incompatible implicit declaration of built-in > function 'exit' > configure:5663: $? = 1 > configure: failed program was: > #line 5643 "configure" > #include "confdefs.h" > > #include <unistd.h> > > int > main () > { > > int np = sysconf(_SC_NPROCESSORS_CONF); exit(0); > > ; > return 0; > } > configure:5687: result: no > configure:5691: checking for ANSI C header files > configure:5818: result: yes > configure:5831: checking local processor count for testing > configure:5851: result: 1 > configure:5854: checking for --with-libs > configure:5868: result: no > configure:5874: checking for pow in -lm > configure:5901: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -lm > -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl >&5 > configure:5889: warning: conflicting types for built-in function 'pow' > configure:5889: warning: conflicting types for built-in function 'pow' > /var/tmp//cc9hXCTQ.s:36:indirect jmp without `*' > /usr/bin/ld: for architecture i686 > /usr/bin/ld: Undefined symbols: > _PyMac_Error > collect2: ld returned 1 exit status > /usr/bin/ld: for architecture ppc > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for 
specified -arch flag: ppc (can't load from it) > /usr/bin/ld: Undefined symbols: > _PyMac_Error > collect2: ld returned 1 exit status > lipo: can't open input file: /var/tmp//ccPj1HBD.out (No such file or directory) > configure:5904: $? = 1 > configure: failed program was: > #line 5881 "configure" > #include "confdefs.h" > > /* Override any gcc2 internal prototype to avoid an error. */ > #ifdef __cplusplus > extern "C" > #endif > /* We use char because int might match the return type of a gcc2 > builtin and then its argument prototype would still apply. */ > char pow (); > int > main () > { > pow (); > ; > return 0; > } > configure:5921: result: no > configure:5936: checking for PyOS_StdioReadline > configure:5973: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl >&5 > /usr/bin/ld: for architecture ppc > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _PyOS_StdioReadline > collect2: ld returned 1 exit status > /usr/bin/ld: for architecture i686 > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _PyOS_StdioReadline > collect2: ld returned 1 exit status > lipo: can't open input file: /var/tmp//cc90vZio.out (No such file or directory) > configure:5976: $? 
= 1 > configure: failed program was: > #line 5941 "configure" > #include "confdefs.h" > /* System header to define __stub macros and hopefully few prototypes, > which can conflict with char PyOS_StdioReadline (); below. */ > #include <assert.h> > /* Override any gcc2 internal prototype to avoid an error. */ > #ifdef __cplusplus > extern "C" > #endif > /* We use char because int might match the return type of a gcc2 > builtin and then its argument prototype would still apply. */ > char PyOS_StdioReadline (); > char (*f) (); > > int > main () > { > /* The GNU C library defines this for functions which it implements > to always fail with ENOSYS. Some functions are actually named > something starting with __ and the normal name is an alias. */ > #if defined (__stub_PyOS_StdioReadline) || defined (__stub___PyOS_StdioReadline) > choke me > #else > f = PyOS_StdioReadline; > #endif > > ; > return 0; > } > configure:5992: result: no > configure:6007: checking for setlinebuf > configure:6044: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl >&5 > /usr/bin/ld: for architecture i686 > /usr/bin/ld: Undefined symbols: > _PyMac_Error > collect2: ld returned 1 exit status > /usr/bin/ld: for architecture ppc > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: Undefined symbols: > _PyMac_Error > collect2: ld returned 1 exit status > lipo: can't open input file: /var/tmp//ccKdik5d.out (No such file 
or directory) > configure:6047: $? = 1 > configure: failed program was: > #line 6012 "configure" > #include "confdefs.h" > /* System header to define __stub macros and hopefully few prototypes, > which can conflict with char setlinebuf (); below. */ > #include <assert.h> > /* Override any gcc2 internal prototype to avoid an error. */ > #ifdef __cplusplus > extern "C" > #endif > /* We use char because int might match the return type of a gcc2 > builtin and then its argument prototype would still apply. */ > char setlinebuf (); > char (*f) (); > > int > main () > { > /* The GNU C library defines this for functions which it implements > to always fail with ENOSYS. Some functions are actually named > something starting with __ and the normal name is an alias. */ > #if defined (__stub_setlinebuf) || defined (__stub___setlinebuf) > choke me > #else > f = setlinebuf; > #endif > > ; > return 0; > } > configure:6063: result: no > configure:6088: checking sys/param.h usability > configure:6097: /usr/local/mpich2-install/bin/mpicc -c -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c >&5 > configure:6100: $? = 0 > configure:6103: test -s conftest.o > configure:6106: $? = 0 > configure:6115: result: yes > configure:6119: checking sys/param.h presence > configure:6126: /usr/local/mpich2-install/bin/mpicc -E -w > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c > configure:6132: $? 
= 0 > configure:6150: result: yes > configure:6168: checking for sys/param.h > configure:6175: result: yes > configure:6195: checking Python links as is > configure:6217: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl -lm >&5 > /var/tmp//ccPiQeM5.s:41:indirect jmp without `*' > /usr/bin/ld: for architecture i686 > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _Py_Main > collect2: ld returned 1 exit status > /usr/bin/ld: for architecture ppc > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _Py_Main > collect2: ld returned 1 exit status > lipo: can't open input file: /var/tmp//ccjMIK6h.out (No such file or directory) > configure:6220: $? 
= 1 > configure: failed program was: > #line 6197 "configure" > #include "confdefs.h" > #ifdef __cplusplus > extern "C" { > #endif > extern int Py_Main(int,char**); > #ifdef __cplusplus > } > #endif > > int > main () > { > Py_Main(0,0) > ; > return 0; > } > configure:6236: result: no > configure:6239: checking for -pthread > configure:6263: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > -pthread conftest.c > -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl -lm >&5 > powerpc-apple-darwin8-gcc-4.0.3: unrecognized option '-pthread' > i686-apple-darwin8-gcc-4.0.3: unrecognized option '-pthread' > /var/tmp//ccLaxAiT.s:41:indirect jmp without `*' > /usr/bin/ld: for architecture ppc > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a > archive's cputype (7, architecture i386) does not match cputype (18) > for specified -arch flag: ppc (can't load from it) > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _Py_Main > collect2: ld returned 1 exit status > /usr/bin/ld: for architecture i686 > /usr/bin/ld: Undefined symbols: > _PyMac_Error > _Py_Main > collect2: ld returned 1 exit status > lipo: can't open input file: /var/tmp//ccQtVm5I.out (No such file or directory) > configure:6266: $? 
= 1 > configure: failed program was: > #line 6243 "configure" > #include "confdefs.h" > #ifdef __cplusplus > extern "C" { > #endif > extern int Py_Main(int,char**); > #ifdef __cplusplus > } > #endif > > int > main () > { > Py_Main(0,0) > ; > return 0; > } > configure:6282: result: no > configure:6286: checking for gcc libraries > configure:6289: result: Found > /usr/local/lib/gcc/i386-apple-darwin8.8.1/4.2.0/libgcc.a > configure:6292: checking _eprintf bug workaround > configure:6314: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 > -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 > conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config > -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u > _PyMac_Error -ldl -lm > /usr/local/lib/gcc/i386-apple-darwin8.8.1/4.2.0/libgcc.a -lm >&5 > /var/tmp//ccm6o9nT.s:41:indirect jmp without `*' > //usrubin/ > |
From: Conor R. <con...@gm...> - 2006-12-02 00:27:40
|
Hello pympi users,

I've successfully installed mpich2-1.0.4p1 as well as mpich-1.2.7p1 in separate directories under /usr/local/mpich<version>, added either of the two to my path, and tested both with cpi. All is well...

Macintosh:~/desktop/mpich2-1.0.4p1/examples conor$ mpiexec -n 4 ./cpi
Process 0 of 4 is on Macintosh.local
Process 2 of 4 is on Macintosh.local
Process 1 of 4 is on Macintosh.local
Process 3 of 4 is on Macintosh.local
pi is approximately 3.1415926544231239, Error is 0.0000000008333307
wall clock time = 0.002964

However, when I try to configure pympi with either mpich-1.2.7 or mpich2 I get the same issues.

My first major issue is that in Python 2.4 libpython2.4.a (configure:4873: WARNING: If you get link errors add a --with-libs=-L/path/to/libpython2.4.a) does not exist, nor does a dynamic library. I looked this up at python.org and it seems to be the case (http://mail.python.org/pipermail/pythonmac-sig/2006-July/017970.html), yet I see others on the mailing archive running 2.4. How might I fix this, ideally without a new version of Python and reinstalling all my other libs?

Also, the log file says that I'm running an i486, when I believe this chip is referred to as an i686 (sorry, I'm new to the Intel naming culture). I have an alternate install of gcc 4.0.3 for the 64-bit Intel i686 and SSE (I have given this compiler priority in my path, but have used the gcc 4.0.1 system install with the same results). I see that there are also processor type mismatches in the log file (example near configure:5889).

Final notes:

configure:4671: checking Numarray?
configure:4674: result:

There seems to be no result when looking for Numarray; however, it exists and functions.

configure:3270:28: error: ac_nonexistent.h: No such file or directory

Is this a problem?

Any thoughts on the above would be greatly appreciated.
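On the libpython2.4.a question: the OS X framework build of Python ships neither a static nor a plain shared libpython; embedding programs link against the framework binary itself, which is what configure's --with-libs hint is meant to work around. A sketch of how a build script can inspect what a given interpreter expects to link against, using the sysconfig module of current Pythons (Python 2.4 exposed the same variables via distutils.sysconfig; the helper name python_link_hints is made up for illustration):

```python
import sysconfig

def python_link_hints():
    """Return the config variables a build system uses to link against Python."""
    return {
        # directory that would hold libpythonX.Y.a, if one exists
        "LIBPL": sysconfig.get_config_var("LIBPL"),
        # e.g. 'libpython3.11.so' on Linux, or a framework path on macOS
        "LDLIBRARY": sysconfig.get_config_var("LDLIBRARY"),
        # non-empty only on macOS framework builds
        "PYTHONFRAMEWORK": sysconfig.get_config_var("PYTHONFRAMEWORK"),
    }

if __name__ == "__main__":
    for key, value in python_link_hints().items():
        print(key, "=", value)
```

On a framework build, PYTHONFRAMEWORK is non-empty and LDLIBRARY names the framework binary rather than a libpythonX.Y.a, so the configure warning is expected there rather than a sign of a broken install.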
thanks,
Conor
= 0 configure:2738: result: yes configure:2755: checking for mpicc option to accept ANSI C configure:2812: mpicc -c -g -O2 conftest.c >&5 configure:2815: $? = 0 configure:2818: test -s conftest.o configure:2821: $? = 0 configure:2838: result: none needed configure:2856: mpicc -c -g -O2 conftest.c >&5 conftest.c:2: error: syntax error before 'me' configure:2859: $? = 1 configure: failed program was: #ifndef __cplusplus choke me #endif configure:2976: checking for style of include used by make configure:3004: result: GNU configure:3032: checking dependency style of mpicc configure:3094: result: none configure:3114: checking for an ANSI C-conforming const configure:3178: mpicc -c -g -O2 conftest.c >&5 configure:3181: $? = 0 configure:3184: test -s conftest.o configure:3187: $? = 0 configure:3197: result: yes configure:3208: checking for mpicc is really C++ configure:3215: checking how to run the C preprocessor configure:3241: mpicc -E conftest.c configure:3247: $? = 0 configure:3274: mpicc -E conftest.c configure:3270:28: error: ac_nonexistent.h: No such file or directory configure:3280: $? = 1 configure: failed program was: #line 3269 "configure" #include "confdefs.h" #include <ac_nonexistent.h> configure:3317: result: mpicc -E configure:3332: mpicc -E conftest.c configure:3338: $? = 0 configure:3365: mpicc -E conftest.c configure:3361:28: error: ac_nonexistent.h: No such file or directory configure:3371: $? 
= 1 configure: failed program was: #line 3360 "configure" #include "confdefs.h" #include <ac_nonexistent.h> configure:3411: checking for egrep configure:3421: result: grep -E configure:3448: result: no configure:3472: checking for sed configure:3490: found /usr/bin/sed configure:3502: result: /usr/bin/sed configure:3516: checking for grep configure:3534: found /usr/bin/grep configure:3546: result: /usr/bin/grep configure:3561: checking for mpiCC configure:3577: found /usr/local/mpich2-install/bin/mpiCC configure:3587: result: mpiCC configure:3769: checking for C++ compiler version configure:3772: mpiCC --version </dev/null >&5 i686-apple-darwin8-gcc-4.0.3 (GCC) 4.0.3 Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. configure:3775: $? = 0 configure:3777: mpiCC -v </dev/null >&5 mpicc for 1.0.4p1 Using built-in specs. Target: i686-apple-darwin8 Configured with: ../gcc-4.0.3/configure --prefix=/usr/local/gcc4.0 --disable-checking --enable-languages=c,objc,c++,f95 --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --build=i686-apple-darwin8 --with-arch=pentium-m --with-tune=prescott --program-prefix= --host=i686-apple-darwin8 --target=i686-apple-darwin8 Thread model: posix gcc version 4.0.3 /usr/local/gcc4.0/libexec/gcc/i686-apple-darwin8/4.0.3/collect2 -dynamic -arch i686 -weak_reference_mismatches non-weak -o a.out -lcrt1.o /usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/crt2.o -L/usr/local/mpich2-install/lib -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3 -L/usr/local/gcc4.0/lib/gcc/i686-apple-darwin8/4.0.3/../../.. -lpmpich -lmpich -lgcc -lgcc_eh -lSystemStubs -lmx -lSystem /usr/bin/ld: Undefined symbols: _main collect2: ld returned 1 exit status configure:3780: $? = 1 configure:3782: mpiCC -V </dev/null >&5 i686-apple-darwin8-gcc-4.0.3: '-V' must come at the start of the command line configure:3785: $? 
= 1 configure:3788: checking whether we are using the GNU C++ compiler configure:3809: mpiCC -c conftest.cc >&5 configure:3812: $? = 0 configure:3815: test -s conftest.o configure:3818: $? = 0 configure:3830: result: yes configure:3836: checking whether mpiCC accepts -g configure:3854: mpiCC -c -g conftest.cc >&5 configure:3857: $? = 0 configure:3860: test -s conftest.o configure:3863: $? = 0 configure:3873: result: yes configure:3913: mpiCC -c -g -O2 conftest.cc >&5 /var/tmp//ccl0oIip.s:42:indirect jmp without `*' configure:3916: $? = 0 configure:3919: test -s conftest.o configure:3922: $? = 0 configure:3944: mpiCC -c -g -O2 conftest.cc >&5 configure: In function 'int main()': configure:3936: error: 'exit' was not declared in this scope configure:3947: $? = 1 configure: failed program was: #line 3931 "configure" #include "confdefs.h" int main () { exit (42); ; return 0; } configure:3913: mpiCC -c -g -O2 conftest.cc >&5 /var/tmp//ccdCGafB.s:42:indirect jmp without `*' configure:3916: $? = 0 configure:3919: test -s conftest.o configure:3922: $? = 0 configure:3944: mpiCC -c -g -O2 conftest.cc >&5 /var/tmp//ccQqF9xG.s:42:indirect jmp without `*' configure:3947: $? = 0 configure:3950: test -s conftest.o configure:3953: $? = 0 configure:3977: checking dependency style of mpiCC configure:4039: result: none configure:4071: checking for mpicc configure:4089: found /usr/local/mpich2-install/bin/mpicc configure:4101: result: /usr/local/mpich2-install/bin/mpicc configure:4128: checking for mpiCC configure:4146: found /usr/local/mpich2-install/bin/mpiCC configure:4158: result: /usr/local/mpich2-install/bin/mpiCC configure:4181: checking if /usr/local/mpich2-install/bin/mpicc -E -w is a valid CPP configure:4191: result: yes configure:4204: checking how to run the C preprocessor configure:4306: result: /usr/local/mpich2-install/bin/mpicc -E -w configure:4321: /usr/local/mpich2-install/bin/mpicc -E -w conftest.c configure:4327: $? 
= 0 configure:4354: /usr/local/mpich2-install/bin/mpicc -E -w conftest.c configure:4350:28: error: ac_nonexistent.h: No such file or directory configure:4360: $? = 1 configure: failed program was: #line 4349 "configure" #include "confdefs.h" #include <ac_nonexistent.h> configure:4409: checking for --with-python configure:4422: result: no configure:4449: checking executable /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 configure:4451: result: yes configure:4500: checking for Python configure:4502: result: /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 configure:4507: checking for MPIRun.exe configure:4540: result: no configure:4546: checking for mpirun configure:4564: found /usr/local/mpich2-install/bin/mpirun configure:4576: result: /usr/local/mpich2-install/bin/mpirun configure:4585: checking for poe configure:4618: result: no configure:4623: checking Python version 2.2 or higher configure:4626: result: yes configure:4633: checking distutils? configure:4636: result: yes configure:4644: checking distutils works configure:4647: result: yes configure:4658: checking Numeric? configure:4661: result: yes configure:4671: checking Numarray? configure:4674: result: configure:4690: checking Python version string configure:4693: result: 2.4 configure:4703: checking install prefix for /Library/Frameworks/Python.framework/Versions/Current/bin/pythonw2.4 configure:4710: result: /Library/Frameworks/Python.framework/Versions/2.4 configure:4715: checking Prefix exists... 
configure:4718: result: yes configure:4748: checking for python include location configure:4752: result: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 configure:4755: checking that include directory exists configure:4758: result: yes configure:4768: checking for python library location configure:4772: result: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages configure:4775: checking that lib directory is accessable configure:4778: result: yes configure:4794: checking Python library configure:4797: result: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4 configure:4803: checking site.py configure:4806: result: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site.py configure:4821: checking site-packages configure:4824: result: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages configure:4838: checking for python lib/config location configure:4842: result: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config configure:4850: checking that lib/config directory is accessable configure:4853: result: yes configure:4864: checking libpython2.4.a is there configure:4871: result: not there configure:4873: WARNING: If you get link errors add a --with-libs=-L/path/to/libpython2.4.a configure:4882: checking configuration Makefile is there configure:4885: result: yes configure:4896: checking module configuration table is there configure:4899: result: yes configure:4911: checking original Python there configure:4914: result: yes configure:4925: checking for --with-includes configure:4946: result: no configure:4953: checking for compiler based include directory configure:4961: result: no configure:4965: checking MPI_COMPILE_FLAGS configure:4968: result: no configure:4975: checking MPI_LD_FLAGS configure:4978: result: no configure:4986: checking for ANSI C header files configure:5000: /usr/local/mpich2-install/bin/mpicc -E -w 
-I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c configure:5006: $? = 0 configure:5091: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config >&5 configure: In function 'main': configure:5084: warning: incompatible implicit declaration of built-in function 'exit' /var/tmp//cc1oc9X5.s:144:indirect jmp without `*' /var/tmp//cc1oc9X5.s:159:indirect jmp without `*' /var/tmp//cc1oc9X5.s:174:indirect jmp without `*' configure:5094: $? = 0 configure:5096: ./conftest configure:5099: $? = 0 configure:5113: result: yes configure:5137: checking for sys/types.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for sys/stat.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for stdlib.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for string.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? 
= 0 configure:5169: result: yes configure:5137: checking for memory.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for strings.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for inttypes.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for stdint.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5137: checking for unistd.h configure:5150: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5153: $? = 0 configure:5156: test -s conftest.o configure:5159: $? = 0 configure:5169: result: yes configure:5191: checking mpi.h usability configure:5200: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5203: $? = 0 configure:5206: test -s conftest.o configure:5209: $? = 0 configure:5218: result: yes configure:5222: checking mpi.h presence configure:5229: /usr/local/mpich2-install/bin/mpicc -E -w -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c configure:5235: $? 
= 0 configure:5253: result: yes configure:5271: checking for mpi.h configure:5278: result: yes configure:5305: checking Python.h usability configure:5314: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:5317: $? = 0 configure:5320: test -s conftest.o configure:5323: $? = 0 configure:5332: result: yes configure:5336: checking Python.h presence configure:5343: /usr/local/mpich2-install/bin/mpicc -E -w -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c configure:5349: $? = 0 configure:5367: result: yes configure:5385: checking for Python.h configure:5392: result: yes configure:5418: checking Python CC configure:5421: result: gcc configure:5430: checking Python CFLAGS configure:5433: result: -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g configure:5442: checking Python INCLUDEPY configure:5445: result: /Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 configure:5454: checking Python OPT configure:5457: result: -DNDEBUG -g configure:5466: checking Python LDFLAGS configure:5469: result: -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g configure:5478: checking Python LINKFORSHARED configure:5481: result: -u _PyMac_Error Python.framework/Versions/2.4/Python configure:5492: checking Python LDSHARED configure:5495: result: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -bundle -undefined dynamic_lookup configure:5504: checking Python BLDSHARED configure:5507: result: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -bundle -undefined dynamic_lookup configure:5517: checking Python LOCALMODLIBS configure:5520: result: configure:5529: checking Python BASEMODLIBS configure:5532: result: configure:5541: checking Python LIBS configure:5544: result: -ldl 
configure:5553: checking Python LDLAST configure:5556: result: configure:5560: checking Python library options configure:5563: result: -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl configure:5592: checking for --with-debug configure:5612: result: no configure:5626: checking python.exp file configure:5635: result: no configure:5639: checking sysconf(_SC_NPROCESSORS_CONF) configure:5660: gcc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure: In function 'main': configure:5651: error: '_SC_NPROCESSORS_CONF' undeclared (first use in this function) configure:5651: error: (Each undeclared identifier is reported only once configure:5651: error: for each function it appears in.) configure:5651: warning: incompatible implicit declaration of built-in function 'exit' configure:5663: $? = 1 configure: failed program was: #line 5643 "configure" #include "confdefs.h" #include <unistd.h> int main () { int np = sysconf(_SC_NPROCESSORS_CONF); exit(0); ; return 0; } configure:5687: result: no configure:5691: checking for ANSI C header files configure:5818: result: yes configure:5831: checking local processor count for testing configure:5851: result: 1 configure:5854: checking for --with-libs configure:5868: result: no configure:5874: checking for pow in -lm configure:5901: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -lm -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl >&5 configure:5889: warning: conflicting types for built-in function 'pow' configure:5889: warning: conflicting types for built-in function 'pow' /var/tmp//cc9hXCTQ.s:36:indirect jmp without `*' /usr/bin/ld: for architecture i686 /usr/bin/ld: Undefined 
symbols: _PyMac_Error collect2: ld returned 1 exit status /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: Undefined symbols: _PyMac_Error collect2: ld returned 1 exit status lipo: can't open input file: /var/tmp//ccPj1HBD.out (No such file or directory) configure:5904: $? = 1 configure: failed program was: #line 5881 "configure" #include "confdefs.h" /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char pow (); int main () { pow (); ; return 0; } configure:5921: result: no configure:5936: checking for PyOS_StdioReadline configure:5973: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl >&5 /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: Undefined symbols: _PyMac_Error _PyOS_StdioReadline collect2: ld returned 1 exit status /usr/bin/ld: for architecture i686 /usr/bin/ld: Undefined symbols: _PyMac_Error _PyOS_StdioReadline collect2: ld 
returned 1 exit status lipo: can't open input file: /var/tmp//cc90vZio.out (No such file or directory) configure:5976: $? = 1 configure: failed program was: #line 5941 "configure" #include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, which can conflict with char PyOS_StdioReadline (); below. */ #include <assert.h> /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char PyOS_StdioReadline (); char (*f) (); int main () { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_PyOS_StdioReadline) || defined (__stub___PyOS_StdioReadline) choke me #else f = PyOS_StdioReadline; #endif ; return 0; } configure:5992: result: no configure:6007: checking for setlinebuf configure:6044: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl >&5 /usr/bin/ld: for architecture i686 /usr/bin/ld: Undefined symbols: _PyMac_Error collect2: ld returned 1 exit status /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: Undefined symbols: _PyMac_Error collect2: ld returned 1 exit status lipo: can't open input file: 
/var/tmp//ccKdik5d.out (No such file or directory) configure:6047: $? = 1 configure: failed program was: #line 6012 "configure" #include "confdefs.h" /* System header to define __stub macros and hopefully few prototypes, which can conflict with char setlinebuf (); below. */ #include <assert.h> /* Override any gcc2 internal prototype to avoid an error. */ #ifdef __cplusplus extern "C" #endif /* We use char because int might match the return type of a gcc2 builtin and then its argument prototype would still apply. */ char setlinebuf (); char (*f) (); int main () { /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined (__stub_setlinebuf) || defined (__stub___setlinebuf) choke me #else f = setlinebuf; #endif ; return 0; } configure:6063: result: no configure:6088: checking sys/param.h usability configure:6097: /usr/local/mpich2-install/bin/mpicc -c -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c >&5 configure:6100: $? = 0 configure:6103: test -s conftest.o configure:6106: $? = 0 configure:6115: result: yes configure:6119: checking sys/param.h presence configure:6126: /usr/local/mpich2-install/bin/mpicc -E -w -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c configure:6132: $? 
= 0 configure:6150: result: yes configure:6168: checking for sys/param.h configure:6175: result: yes configure:6195: checking Python links as is configure:6217: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl -lm >&5 /var/tmp//ccPiQeM5.s:41:indirect jmp without `*' /usr/bin/ld: for architecture i686 /usr/bin/ld: Undefined symbols: _PyMac_Error _Py_Main collect2: ld returned 1 exit status /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: Undefined symbols: _PyMac_Error _Py_Main collect2: ld returned 1 exit status lipo: can't open input file: /var/tmp//ccjMIK6h.out (No such file or directory) configure:6220: $? 
= 1 configure: failed program was: #line 6197 "configure" #include "confdefs.h" #ifdef __cplusplus extern "C" { #endif extern int Py_Main(int,char**); #ifdef __cplusplus } #endif int main () { Py_Main(0,0) ; return 0; } configure:6236: result: no configure:6239: checking for -pthread configure:6263: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -pthread conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl -lm >&5 powerpc-apple-darwin8-gcc-4.0.3: unrecognized option '-pthread' i686-apple-darwin8-gcc-4.0.3: unrecognized option '-pthread' /var/tmp//ccLaxAiT.s:41:indirect jmp without `*' /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/mpich2-install/lib/libpmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: warning /usr/local/mpich2-install/lib/libmpich.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) /usr/bin/ld: Undefined symbols: _PyMac_Error _Py_Main collect2: ld returned 1 exit status /usr/bin/ld: for architecture i686 /usr/bin/ld: Undefined symbols: _PyMac_Error _Py_Main collect2: ld returned 1 exit status lipo: can't open input file: /var/tmp//ccQtVm5I.out (No such file or directory) configure:6266: $? 
= 1 configure: failed program was: #line 6243 "configure" #include "confdefs.h" #ifdef __cplusplus extern "C" { #endif extern int Py_Main(int,char**); #ifdef __cplusplus } #endif int main () { Py_Main(0,0) ; return 0; } configure:6282: result: no configure:6286: checking for gcc libraries configure:6289: result: Found /usr/local/lib/gcc/i386-apple-darwin8.8.1/4.2.0/libgcc.a configure:6292: checking _eprintf bug workaround configure:6314: /usr/local/mpich2-install/bin/mpicc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 conftest.c -L/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/config -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -u _PyMac_Error -ldl -lm /usr/local/lib/gcc/i386-apple-darwin8.8.1/4.2.0/libgcc.a -lm >&5 /var/tmp//ccm6o9nT.s:41:indirect jmp without `*' |
From: Pat M. <pat...@gm...> - 2006-12-01 16:59:42
|
Hola Jairo...

I haven't built pyMPI with MPICH2, though I had expected that it would work. I have a parallel Fedora machine at home, so I can try this over the weekend and let you know if I can repeat your results.

Cheers,

Pat

On 12/1/06, Jairo Serrano <jai...@gm...> wrote:
> Good Morning !
>
> I dont know if is possible use pyMPI with MPICH2, but i have a problem:
>
> [root@I202-05 mpich2-1.0.4p1]# mpiexec -n 3 pyMPI helloworld.py
> I202-05 send the message
>
> and there are no more answers !!!
>
> The machines are Fedora Core 5 on DELL Dimension 3000, 3 CPU's identical:
>
> [root@I202-05 mpich2-1.0.4p1]# uname -a
> Linux I202-05 2.6.15-1.2054_FC5 #1 Tue Mar 14 15:48:33 EST 2006 i686 i686
> i386 GNU/Linux
>
> pyMPI-2.4b2 and MPICH 2 1.0.4p1 is installed :
>
> [root@I202-05 mpich2-1.0.4p1]# which mpiexec
> /usr/local/bin/mpiexec
> [root@I202-05 mpich2-1.0.4p1]# which mpicc
> /usr/local/bin/mpicc
> [root@I202-05 mpich2-1.0.4p1]# which mpd
> /usr/local/bin/mpd
> [root@I202-05 mpich2-1.0.4p1]# which mpirun
> /usr/local/bin/mpirun
> [root@I202-05 mpich2-1.0.4p1]# which pyMPI
> /usr/bin/pyMPI
>
> Then I got up the mpd daemon:
> [root@I202-05 mpich2-1.0.4p1]# mpdboot -n 3
> [root@I202-05 mpich2-1.0.4p1]#
>
> i ran mpdtrace and cpi:
> [root@I202-05 mpich2-1.0.4p1]# mpdtrace
> I202-05
> I202-01
> I202-04
> [root@I202-05 mpich2-1.0.4p1]# mpiexec -n 3 examples/cpi
> Process 0 of 3 is on I202-05
> Process 1 of 3 is on I202-01
> Process 2 of 3 is on I202-04
> pi is approximately 3.1415926544231323, Error is 0.0000000008333392
> wall clock time = 0.003288
>
> My python program helloworld.py:
> -----------------------------------------------
> 1 import mpi
> 2 from socket import gethostname;
> 3
> 4 if mpi.rank==0:
> 5 print 'message sent by', gethostname()
> 6 mens='i am the process'
> 7 for i in range(mpi.size):
> 8 mpi.send(mens,i);
> 9
> 10 else:
> 11 msg,status=mpi.recv()
> 12 print msg, mpi.rank,' of ',mpi.size
> 13 print 'message received by', gethostname()
> 
----------------------------------------------- > > > I hope that you could help me !!! thanks a lot !!! > -- > jDSL > Bucaramanga, Colombia > UIS Rugby Club (c) 2006 > SCALAR - EISI - UIS > |
From: Jairo S. <jai...@gm...> - 2006-12-01 16:45:41
|
Good Morning !

I don't know if it is possible to use pyMPI with MPICH2, but I have a problem:

[root@I202-05 mpich2-1.0.4p1]# mpiexec -n 3 pyMPI helloworld.py
I202-05 send the message

and there is no further output!

The machines are Fedora Core 5 on Dell Dimension 3000, with 3 identical CPUs:

[root@I202-05 mpich2-1.0.4p1]# uname -a
Linux I202-05 2.6.15-1.2054_FC5 #1 Tue Mar 14 15:48:33 EST 2006 i686 i686 i386 GNU/Linux

pyMPI-2.4b2 and MPICH2 1.0.4p1 are installed:

[root@I202-05 mpich2-1.0.4p1]# which mpiexec
/usr/local/bin/mpiexec
[root@I202-05 mpich2-1.0.4p1]# which mpicc
/usr/local/bin/mpicc
[root@I202-05 mpich2-1.0.4p1]# which mpd
/usr/local/bin/mpd
[root@I202-05 mpich2-1.0.4p1]# which mpirun
/usr/local/bin/mpirun
[root@I202-05 mpich2-1.0.4p1]# which pyMPI
/usr/bin/pyMPI

Then I brought up the mpd daemon:
[root@I202-05 mpich2-1.0.4p1]# mpdboot -n 3
[root@I202-05 mpich2-1.0.4p1]#

I ran mpdtrace and cpi:
[root@I202-05 mpich2-1.0.4p1]# mpdtrace
I202-05
I202-01
I202-04
[root@I202-05 mpich2-1.0.4p1]# mpiexec -n 3 examples/cpi
Process 0 of 3 is on I202-05
Process 1 of 3 is on I202-01
Process 2 of 3 is on I202-04
pi is approximately 3.1415926544231323, Error is 0.0000000008333392
wall clock time = 0.003288

My Python program helloworld.py:
-----------------------------------------------
1 import mpi
2 from socket import gethostname;
3
4 if mpi.rank==0:
5 print 'message sent by', gethostname()
6 mens='i am the process'
7 for i in range(mpi.size):
8 mpi.send(mens,i);
9
10 else:
11 msg,status=mpi.recv()
12 print msg, mpi.rank,' of ',mpi.size
13 print 'message received by', gethostname()
-----------------------------------------------

I hope that you can help me! Thanks a lot!
--
jDSL
Bucaramanga, Colombia
UIS Rugby Club (c) 2006
SCALAR - EISI - UIS
|
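[Editor's note, not part of the thread.] One thing worth checking in helloworld.py above: rank 0's loop `for i in range(mpi.size)` includes rank 0 itself, so the root sends itself a message that no recv() ever matches. Depending on MPI buffering, that unmatched send could explain a run that prints one line and then stalls. The sketch below isolates the destination logic as a plain function so it runs without an MPI install; in the real script the loop body would remain `mpi.send(mens, i)` as in the listing.

```python
# Hypothetical fix sketch: the root should send to every rank except
# itself, so that each mpi.send has a matching mpi.recv on another rank.

def send_targets(rank, size):
    """Destinations the root should message: every rank but itself."""
    if rank != 0:
        return []                          # only rank 0 broadcasts here
    return [i for i in range(size) if i != rank]

print(send_targets(0, 3))  # on 3 processes the root sends to [1, 2]
print(send_targets(2, 3))  # non-root ranks send nothing: []
```

Alternatively, rank 0 could issue one extra mpi.recv() to drain its own message; either way, every send then has a matching receive.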
From: Vinicius P. <vpe...@ic...> - 2006-11-17 22:12:43
|
Well, as I'm using LAM/MPI: if I make sure that access to MPI functions is serialized (i.e. no concurrent access to MPI functions), using a python threading.Lock for example, should that work fine?

And in the case where I'm making blocking MPI calls, what about changing to non-blocking calls and using thread wait() and notify() to synchronize when a message has arrived, so that other Python threads can make progress? A thread named Receiver would sit in a busy-waiting loop on irecv(), calling the appropriate function depending on the message tag; there would be no blocking recv(). The problem will be synchronizing the threads using events or conditions. Do these ideas seem viable?

Vinicius

On 11/17/06, Pat Miller <pat...@gm...> wrote:
> On 11/17/06, Vinicius Petrucci <vpe...@ic...> wrote:
> > "Python uses a global interpreter lock for thread safety. That global
> > lock must be held by a thread before it can safely access Python
> > objects."
>
> The Python thread model is really pretty weak. This is a weakness in
> reference counted languages (the incref/decref's are critical sections).
> One can think of Python threads as "co-routines" even though each
> thread is built on top of a real POSIX or Windows thread. You can have
> as many as you want (up to the thread limit), but only one operates
> at a time. Every so many byte code operations, the threads context
> switch.
>
> A routine that takes a long time (like a blocking I/O call) is supposed to
> play nice and release the global interpreter lock (and subsequently reacquire
> it before returning to the interpreter).
>
> With MPI, your mileage will vary. An MPI implementation is allowed to put
> restrictions on the use of MPI with threads. A common restriction is to
> force all MPI calls to be issued from the master thread. A somewhat
> weaker one is that all MPI calls must come from the *same* thread.
> Weaker still is that only one MPI call can be extant at a given time (presumes
> that the user provides locking).
>
> So, to get back to Vinicius' question. If your MPI supports threads
> (e.g. OpenMPI), then pyMPI will work fine with threads. The Python global
> interpreter lock will guarantee that only one thread is working with MPI
> at a time (unless you write an extension that releases the global lock).
> The bad news is that the current configuration will not allow you to get
> much thread parallelism going on. If you have a blocking receive in one
> thread, it will hold the global interpreter lock, so no other Python
> thread can make progress.
>
> I have been thinking about modifying pyMPI to release the global interpreter
> lock on entry to a pyMPI routine. Then, one could write threaded routines
> to have blocking calls. To prevent simultaneous MPI calls, the pyMPI
> routines would then need to acquire a pyMPI global lock before making
> a call.
>
> Bottom line: pyMPI will work safely with threads, but the co-routine nature
> of Python threads might not allow it to work the way you think.
>
> Pat

--
Vinicius Tavares Petrucci
home page: http://www.ic.uff.br/~vpetrucci
|
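The receiver-thread design Vinicius describes (one thread looping on irecv() and dispatching by message tag) can be sketched in plain Python. To keep it runnable without MPI, a queue.Queue stands in for the incoming message stream, and the tag names and handler table are illustrative assumptions rather than pyMPI API:

```python
# Sketch of a tag-dispatch receiver thread. The inbox queue stands in
# for "req = irecv(); ... req message with a tag"; WORK/STOP are
# hypothetical tags chosen for this example.
import queue
import threading

WORK, STOP = 1, 2
inbox = queue.Queue()      # stand-in for the MPI message stream
handled = []

def handle_work(payload):
    handled.append(payload)

handlers = {WORK: handle_work}

def receiver():
    # Loop until a STOP tag arrives; dispatch everything else by tag.
    while True:
        tag, payload = inbox.get()
        if tag == STOP:
            break
        handlers[tag](payload)

t = threading.Thread(target=receiver)
t.start()
inbox.put((WORK, 'job-1'))
inbox.put((WORK, 'job-2'))
inbox.put((STOP, None))
t.join()
print(handled)
```

Note that a blocking `inbox.get()` releases the GIL, which is exactly the property a blocking MPI recv() inside pyMPI did not have at the time, per Pat's message above.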
From: Pat M. <pat...@gm...> - 2006-11-17 14:04:10
|
On 11/17/06, Vinicius Petrucci <vpe...@ic...> wrote:
> "Python uses a global interpreter lock for thread safety. That global
> lock must be held by a thread before it can safely access Python
> objects."

The Python thread model is really pretty weak. This is a weakness in reference counted languages (the incref/decref's are critical sections). One can think of Python threads as "co-routines" even though each thread is built on top of a real POSIX or Windows thread. You can have as many as you want (up to the thread limit), but only one operates at a time. Every so many byte code operations, the threads context switch.

A routine that takes a long time (like a blocking I/O call) is supposed to play nice and release the global interpreter lock (and subsequently reacquire it before returning to the interpreter).

With MPI, your mileage will vary. An MPI implementation is allowed to put restrictions on the use of MPI with threads. A common restriction is to force all MPI calls to be issued from the master thread. A somewhat weaker one is that all MPI calls must come from the *same* thread. Weaker still is that only one MPI call can be extant at a given time (which presumes that the user provides locking).

So, to get back to Vinicius' question. If your MPI supports threads (e.g. OpenMPI), then pyMPI will work fine with threads. The Python global interpreter lock will guarantee that only one thread is working with MPI at a time (unless you write an extension that releases the global lock). The bad news is that the current configuration will not allow you to get much thread parallelism going on. If you have a blocking receive in one thread, it will hold the global interpreter lock, so no other Python thread can make progress.

I have been thinking about modifying pyMPI to release the global interpreter lock on entry to a pyMPI routine. Then, one could write threaded routines that make blocking calls. To prevent simultaneous MPI calls, the pyMPI routines would then need to acquire a pyMPI global lock before making a call.

Bottom line: pyMPI will work safely with threads, but the co-routine nature of Python threads might not allow it to work the way you think.

Pat
|
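The "pyMPI global lock" Pat proposes, one lock acquired around every MPI call so that at most one call is extant at a time, can be sketched with a plain threading.Lock. The `fake_mpi_call` below is a stand-in that merely counts concurrent entries; it is an assumption for illustration, not a pyMPI routine:

```python
# Sketch: serialize all (stand-in) MPI calls behind one global lock,
# matching the "only one MPI call extant at a time" restriction.
import threading

mpi_lock = threading.Lock()
in_mpi = 0        # how many threads are "inside MPI" right now
max_in_mpi = 0    # high-water mark; should never exceed 1

def fake_mpi_call(x):
    # Stand-in for a real MPI routine; records concurrency.
    global in_mpi, max_in_mpi
    in_mpi += 1
    max_in_mpi = max(max_in_mpi, in_mpi)
    in_mpi -= 1
    return x * 2

def locked_send(x):
    # Every wrapper takes the same lock before touching "MPI".
    with mpi_lock:
        return fake_mpi_call(x)

threads = [threading.Thread(target=locked_send, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_mpi)
```

Because every caller funnels through `mpi_lock`, the high-water mark stays at 1 no matter how many threads run; the counters themselves are safe to update without extra locking since they are only touched while the lock is held.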
From: Vinicius P. <vpe...@ic...> - 2006-11-17 10:52:28
|
Hi Julian,

Hmm, I've heard about this python lock: "Python uses a global interpreter lock for thread safety. That global lock must be held by a thread before it can safely access Python objects."

So, is it safe to call MPI functions (through pyMPI) from two simultaneous threads in python? I'm using the python threading library and the LAM/MPI runtime + pyMPI. I've attached a simple code in Python to explain better. It seems to run alright, but since a long-term project will be based on this simple one, I really want to make sure that I'm not going the wrong way by choosing threads + mpi, only to get stuck in the future with impossible-to-debug kinds of errors...

Regards,

Vinicius

On 11/16/06, Julian Cook <jul...@ya...> wrote:
> I think one of the major issues here is python itself. Are you thinking of
> using threads within python? Python itself has an interpreter lock that
> effectively prevents python threads executing simultaneously (you may
> already be aware of this).
>
> Pat Miller would be better qualified to comment on the availability of
> MPI_THREAD_MULTIPLE itself.
>
> regards
>
> Julian Cook
>
> ----- Original Message ----
> From: Vinicius Petrucci <vpe...@ic...>
> To: pym...@li...
> Sent: Thursday, November 16, 2006 6:30:54 PM
> Subject: [Pympi-users] pyMPI + threading
>
> Hi all!
>
> pyMPI with threading support works alright? is it stable? with open
> mpi for example...
>
> is it possible to use MPI_THREAD_MULTIPLE... or what? is there any
> limitation?
>
> if not, could you send me some tips on how to use python + mpi +
> threading in a safe manner? or some examples? any experience in this
> issue is worth.
>
> I was thinking about some design decisions:
> * use a Lock to provide exclusive access to mpi functions
> * 1 thread in a loop, executing "req = irecv()"; when some msg with a
> type of TAG is available, it calls an appropriate function. it can use
> some send functions inside too.
> * other thread making some send calls...
>
> what common mistakes should I avoid?
>
> thanks in advance!
>
> --
> Vinicius Tavares Petrucci
> home page: http://www.ic.uff.br/~vpetrucci

--
Vinicius Tavares Petrucci
home page: http://www.ic.uff.br/~vpetrucci
|
From: Julian C. <jul...@ya...> - 2006-11-17 00:12:58
|
I think one of the major issues here is python itself. Are you thinking of using threads within python? Python itself has an interpreter lock that effectively prevents python threads executing simultaneously (you may already be aware of this).

Pat Miller would be better qualified to comment on the availability of MPI_THREAD_MULTIPLE itself.

regards

Julian Cook

----- Original Message ----
From: Vinicius Petrucci <vpe...@ic...>
To: pym...@li...
Sent: Thursday, November 16, 2006 6:30:54 PM
Subject: [Pympi-users] pyMPI + threading

Hi all!

pyMPI with threading support works alright? is it stable? with open mpi for example...

is it possible to use MPI_THREAD_MULTIPLE... or what? is there any limitation?

if not, could you send me some tips on how to use python + mpi + threading in a safe manner? or some examples? any experience in this issue is worth.

I was thinking about some design decisions:
* use a Lock to provide exclusive access to mpi functions
* 1 thread in a loop, executing "req = irecv()"; when some msg with a type of TAG is available, it calls an appropriate function. it can use some send functions inside too.
* other thread making some send calls...

what common mistakes should I avoid?

thanks in advance!

--
Vinicius Tavares Petrucci
home page: http://www.ic.uff.br/~vpetrucci
|
From: Vinicius P. <vpe...@ic...> - 2006-11-16 23:31:05
|
Hi all!

pyMPI with threading support works alright? Is it stable? With Open MPI, for example...

Is it possible to use MPI_THREAD_MULTIPLE... or what? Is there any limitation?

If not, could you send me some tips on how to use python + mpi + threading in a safe manner? Or some examples? Any experience with this issue is worth hearing.

I was thinking about some design decisions:
* use a Lock to provide exclusive access to mpi functions
* 1 thread in a loop, executing "req = irecv()"; when some msg with a type of TAG is available, it calls an appropriate function. it can use some send functions inside too.
* another thread making some send calls...

What common mistakes should I avoid?

Thanks in advance!

--
Vinicius Tavares Petrucci
home page: http://www.ic.uff.br/~vpetrucci
|