Thread: [Pympi-users] Idle as the console
From: Julian C. <rjc...@cs...> - 2005-08-17 16:21:40
Pat,

I tried using IDLE as the main interpreter for pyMPI at np > 1, but not surprisingly it didn't work. I executed pyMPI with the following simple script:

  % mpirun -np 2 pyMPI pyMPI_idle.py -n

Contents of pyMPI_idle.py:

  import mpi
  if (mpi.rank == 0):
      import idlelib.PyShell
      idlelib.PyShell.main()

IDLE will run, but with no communication. When I inspected the stacks (using pstack), not surprisingly they were quite different, though part of that is because the IDLE stack actually executes the above script.

Stack for the Python interactive console (you can see that from Py_Main we end up at call_readline after parallelReadline):

  4280:  /home/jcook/python/Python-2.4/bin/pyMPI -p4pg /ironside/home/jcook/PI4
  -----------------  lwp# 1 / thread# 1 --------------------
   fef9d5fc poll     (ffbee3a8, 1, 64)
   fef4d534 select   (1, 0, 0, ffbee3b0, fefbf1bc, ffbee3a8) + 348
   feddaa28 select   (252274, 0, fefbc008, fed774d0, 1, fed83480) + 34
   fed83294 call_readline (2c8668, 2c8678, 466c74, 4159d8, 2c8668, 0) + 7c
   0003df64 parallelReadline (2c8668, 2c8678, 466c74, 0, 4990f8, 4a00c0) + 36c
   00065618 PyOS_Readline (2c8c00, 252000, 35ea48, 264c00, 2c7c00, 466c74) + e0
   00106fb8 tok_nextc (1413f08, 10, 2, c, a, b) + 64
   00107b88 tok_get  (2c7e40, 22fff4, 2c7ce8, 7f, 3a3d10, 3a3818) + 70
   00108830 PyTokenizer_Get (1413f08, ffbeede4, ffbeede0, 27556c, 2c8668, 35f030) + c
   00105c50 parsetok (0, 27556c, 100, ffbeee54, 0, 4a4b58) + 64
   000e389c PyRun_InteractiveOneFlags (466ca0, 466cb4, ffbeef9c, 29b4f4, 466c74, 466c60) + 15c
   000e3714 PyRun_InteractiveLoopFlags (466ca0, 29b4f4, 2c8668, ffbeef9c, 22e9c0, 466c60) + fc
   000e35dc PyRun_AnyFileExFlags (2c8668, 2c8668, 0, ffbeef9c, 29b4f4, 0) + 38
   000ea664 Py_Main  (2c8688, 29b468, 2c8668, 0, 0, 1) + 8e0
   00035f6c pyMPI_Main_with_communicator (1, ffbef12c, ffbef130, 5b, fefc21d0, 0) + 19c
   0003600c pyMPI_Main (1, ffbef12c, ffbef130, fef1bc20, 31ea0, 0) + 2c
   00035d98 main     (5, ffbef14c, ffbef164, 251400, 0, 0) + 30
   00035d40 _start   (0, 0, 0, 0, 0, 0) + b8

The stack for IDLE is completely different; I had to show only the first portion, up to the select loop:

  29203:  /home/jcook/python/Python-2.4/bin/pyMPI pyMPI_idle.py -n -p4pg /ironsi
   00103e70 Tkapp_MainLoop (1, 35ea48, 0, 2c0950, 642670, 26e274) + 1dc
   000c3cfc call_function (60d508, 35ea48, 0, 642670, ffbee914, 3bd89c) + 3b8
   000c0d10 PyEval_EvalFrame (bd20c, 26f0fc, 26f0f0, 253c54, 35ea48, 0) + 38c8
   000c2304 PyEval_EvalCodeEx (53a95c, 4, 1, 8, 0, 1) + 9f8
   000c4018 fast_function (53c7b0, ffbeeb1c, 1, 1, 0, 469270) + 170
   000c3df8 call_function (43a144, 468990, 0, 53c7b0, ffbeeb1c, 43a140) + 4b4
   000c0d10 PyEval_EvalFrame (bd20c, 26f0fc, 26f0f0, 253c54, 35ea48, 0) + 38c8
   000c3f7c fast_function (ffbeecac, 35ea48, 439fb0, 0, 0, 0) + d4
   000c3df8 call_function (3a37b8, 0, 0, 404af0, ffbeecac, 3a37b4) + 4b4
   000c0d10 PyEval_EvalFrame (bd20c, 26f0fc, 26f0f0, 253c54, 35ea48, 0) + 38c8
   000c2304 PyEval_EvalCodeEx (0, 377a50, 0, 0, 0, 0) + 9f8
   000bd1d4 PyEval_EvalCode (460060, 377a50, 377a50, 4688e0, 35f030, 35f030) + 28
   000e4d28 run_node (368368, 460060, 377a50, 377a50, ffbeeecc, 368368) + 3c
   000e3c98 PyRun_SimpleFileExFlags (4686d8, 377a50, ffbef299, ffbeeecc, 1, 2c8698) + 1b4
   000ea664 Py_Main  (3, 29b468, 2c8668, 0, 0, 1) + 8e0
   00035f6c pyMPI_Main_with_communicator (1, ffbef05c, ffbef060, 5b, fefc21d0, 0) + 19c
   0003600c pyMPI_Main (1, ffbef05c, ffbef060, fef1bc20, 31ea0, 0) + 2c
   00035d98 main     (7, ffbef07c, ffbef09c, 251400, 0, 0) + 30
   00035d40 _start   (0, 0, 0, 0, 0, 0) + b8

Obviously the big question is: is it even feasible to consider using a graphical shell such as IDLE, or are they just too different?

regards

Julian Cook
From: Pat M. <pat...@ll...> - 2005-08-17 16:46:54
Perhaps

  % mpirun -np 2 ./pyMPI -i pyMPI_idle.py

I can't test here at work (TCP locked down, so IDLE loopback doesn't work), but will try at home tonight. I may need to modify the input scheme somewhat so that only one process is reading interactively while the others are non-interactive. This may simplify running the debugger too.

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller

It is not only what we do, but also what we do not do, for which we are accountable.
  -- Moliere, actor and playwright (1622-1673)
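For context, the input scheme Pat describes can be pictured with a small sketch: rank 0 owns the prompt and broadcasts each completed line so the other ranks can execute the same statement non-interactively. This is purely illustrative; it assumes a broadcast call like mpi.bcast(value, root) that returns rank 0's value on every rank, and it is not pyMPI's actual parallelReadline machinery.

  # Illustrative only: one rank reads, every rank executes.
  # Python 2.x syntax, matching the Python 2.4 used in this thread.
  import mpi

  def interactive_loop():
      while True:
          line = None
          if mpi.rank == 0:
              try:
                  line = raw_input('>>> ')   # only rank 0 touches the console
              except EOFError:
                  line = None                # EOF on rank 0 shuts everyone down
          line = mpi.bcast(line, 0)          # assumed broadcast from rank 0
          if line is None:
              break
          try:
              exec line in globals()         # every rank runs the same statement
          except Exception, e:
              if mpi.rank == 0:
                  print e

  interactive_loop()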
From: Julian C. <rjc...@cs...> - 2005-08-18 20:33:04
Hi Pat,

I tried it: IDLE seems to work, but it appears that there is no communication with the sub-process. I made it hang by doing the following:

  >>> import mpi
  >>> mpi.size
  2
  >>> r = mpi.rank * 2
  >>> mpi.reduce(r, mpi.SUM)

Looking at the stack for the master process yields the following (i.e. it seems to be waiting for a reply, but nothing was ever sent, so I can't be sure that the child even exec'd any of the statements):

  12626:  /home/jcook/python/Python-2.4/bin/pyMPI -i pyMPI_idle.py -n -p4pg /iro
  -----------------  lwp# 1 / thread# 1 --------------------
   fef9d5fc poll     (ffbe7030, 1, 2328)
   fef4d534 select   (6, 0, 0, ffbe7038, fefbf1bc, ffbe7030) + 348
   feddaa28 select   (1, 2aa908, 0, ffffffff, 0, 613968) + 34
   001721ec recv_message (2c8938, 2c7ee0, 1, ffffffff, 357d38, 357dc1) + 2c
   00171f58 p4_recv  (2c8938, 2c7ee0, ffbe72d8, 2c9138, ffbe72ec, 3bc528) + 78
   00179aa8 MPID_CH_Check_incoming (2cc4d0, 1, 200, 1, 3db, 4) + 348
   00161c44 MPID_RecvComplete (ffbeb408, ffbebb94, ffbeb500, 35bf58, 1, 3db) + 124
   00163fe8 MPID_RecvDatatype (3be268, ffbebbb0, 1, 35bf58, 1, 3db) + 88
   001190f0 MPI_Recv (ffbebbb0, 1, 87, 1, 3db, 85) + 2d8
   00064b10 pyMPI_recv (387920, 1, 3db, 0, 0, 0) + 1d8
   00059db8 pyMPI_collective (387920, 0, 3a2fec, 0, 0, 613968) + 290
   0005c270 reduction (ffbebf08, 613968, 0, 0, 0, 0) + 500
   0005d7b8 pyMPI_collective_reduce (387920, 613968, 0, 5d788, 3d6c60, 4a2658) + 30
   000c3cfc call_function (613968, 35f268, 0, 3d6c60, ffbec08c, 43aaec) + 3b8
   000c0d10 PyEval_EvalFrame (bd20c, 26f0fc, 26f0f0, 253c54, 35f268, 0) + 38c8
   000c2304 PyEval_EvalCodeEx (0, 378a50, 0, 0, 0, 0) + 9f8
   000bd1d4 PyEval_EvalCode (69b3a0, 378a50, 378a50, 1, 0, 2582d4) + 28
   000c55c4 exec_statement (2557d8, 69b3a0, 378a50, 0, 4396e0, 378a50) + 29c
  ... everything before this removed ... I can send the whole thing if you want it.

FYI: see my email from yesterday - check if you can switch off the loopback with '-n', i.e.

  % mpirun -np 2 ./pyMPI -i pyMPI_idle.py -n

In PyShell.py, -n is documented as "run IDLE without a subprocess".

-----Original Message-----
From: pym...@li... [mailto:pym...@li...] On Behalf Of Pat Miller
Sent: Wednesday, August 17, 2005 12:47 PM
To: rjc...@cs...; pym...@li...
Subject: Re: [Pympi-users] Idle as the console

Perhaps

  % mpirun -np 2 ./pyMPI -i pyMPI_idle.py

..snip..
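The hang above is consistent with mpi.reduce being a collective: every rank has to call it, but with no input forwarding only rank 0 ever reaches the call, so it blocks in MPI_Recv waiting for rank 1 (the recv_message frames in the stack). As a point of comparison, here is a minimal, illustrative script that completes normally because every rank runs it, e.g. under "mpirun -np 2 pyMPI reduce_demo.py" (reduce_demo.py is just a placeholder name; the assumption that the reduced value comes back on rank 0 is not checked against the pyMPI sources):

  import mpi

  r = mpi.rank * 2
  total = mpi.reduce(r, mpi.SUM)     # all ranks participate, so this returns
  if mpi.rank == 0:
      print "sum of rank*2 over", mpi.size, "ranks:", total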
From: Pat M. <pat...@ll...> - 2005-08-18 19:59:08
I modified Julian's IDLE startup script to put in an input adapter:

  % cat pyMPI_idle.py
  import mpi
  import sys

  if (mpi.rank == 0):
      import idlelib.PyShell

      class adapter(idlelib.PyShell.ModifiedInterpreter):
          base = idlelib.PyShell.ModifiedInterpreter
          def runsource(self, source):
              mpi.bcast_input_to_slaves(source + '\n')
              return self.base.runsource(self, source)

      idlelib.PyShell.ModifiedInterpreter = adapter
      idlelib.PyShell.main()

I added a routine to the mpi module that allows alternate IO routines to properly broadcast input data to the waiting slaves. It's called (unimaginatively) mpi.bcast_input_to_slaves(). The adapter in IDLE calls it right before it executes a completed IDLE input line.

I did a very quick test:

  % mpirun -np 4 pyMPI -i pyMPI_idle.py -n

  < now in the IDLE window >
  >>> 3
  3
  >>> mpi.rank
  0
  >>> mpi.allreduce(mpi.rank, mpi.SUM)
  6

Requires the latest CVS snapshot (don't forget to run ./boot before the ./configure).

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller

It is not only what we do, but also what we do not do, for which we are accountable.
  -- Moliere, actor and playwright (1622-1673)
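The hook is not tied to IDLE: anything that sees a completed source line before executing it can call mpi.bcast_input_to_slaves() first. Below is a rough sketch of the same pattern applied to the standard-library code.InteractiveConsole instead of IDLE's ModifiedInterpreter. It mirrors Pat's adapter (including broadcasting whatever the console hands runsource), assumes the bcast_input_to_slaves() routine from the current CVS snapshot, and is an illustration rather than a tested front end.

  import code
  import mpi

  class MPIConsole(code.InteractiveConsole):
      def runsource(self, source, filename='<input>', symbol='single'):
          # feed the waiting slave ranks first, then run locally on rank 0
          mpi.bcast_input_to_slaves(source + '\n')
          return code.InteractiveConsole.runsource(self, source, filename, symbol)

  if mpi.rank == 0:
      MPIConsole().interact('pyMPI console on rank 0 of %d' % mpi.size)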
From: Julian C. <rjc...@cs...> - 2005-08-18 17:23:45
Hi Pat,

I re-built pyMPI from CVS and was able to reproduce your test. For instance, see the following session:

  Python 2.4 (pyMPI 2.1b4) on sunos5
  Type "copyright", "credits" or "license()" for more information.
  ...
  IDLE 1.1      ==== No Subprocess ====
  >>> import mpi
  >>> jump = mpi.rank * 25000
  >>> mpi.synchronizedWrite( mpi.rank, jump, "\n" )
  0 0
  1 25000
  >>>

It WILL freeze if you don't use '-n' on the command line though, i.e. you need:

  % mpirun -np 4 pyMPI -i pyMPI_idle.py -n

HOWEVER, if I load and run a script from within IDLE:

1. This statement calculates to zero, because mpi.rank appears not to return a number:

     jump = TotalSteps * (mpi.rank + offset)

2. This statement causes a perma-freeze until I kill everything (at this point the master process (mpi.rank = 0) is stuck in recv_message()):

     mpi.synchronizedWrite( mpi.rank, jump, "\n" )

Both statements appeared to work when I ran them interactively.

regards

Julian Cook
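A plausible reading of the script failure, not verified against the pyMPI sources: IDLE's load-and-run path does not go through the patched runsource(), so nothing is broadcast to the slave ranks, and rank 0 then reaches collectives such as mpi.synchronizedWrite() alone, which matches the master being stuck in recv_message(). If that is right, one thing to try is to drive the file from the IDLE shell prompt so the adapter's hook still fires. 'myscript.py' below is a placeholder name, and all ranks are assumed to see the same file:

  >>> execfile('myscript.py')    # typed at the prompt, so the line is broadcast first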