Thread: [Pympi-users] pyMPI 2.4 beta 2 available....
From: Pat M. <pat...@ll...> - 2005-07-27 21:57:22
Thanks to Julian, I uncovered a bad bug in pack/unpack in which short strings were sometimes truncated with an EOF error. You should strongly consider downloading 2.4b2 to replace any 2.4b1 version you have.

I will also start working on some enhancements, including better MIMD support, turning on the unimplemented SIMD array maps, and improving the hidden RCO (remote, co-operating objects) model that lets you invoke methods on remote objects.

Thanks as always!

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller

I have suffered from being misunderstood, but I would have suffered
a hell of a lot more if I had been understood.
    -- Clarence Darrow (1857-1938)
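For anyone who wants to sanity-check their own build, here is a minimal sketch of the kind of round trip the bug affected, assuming pyMPI's mpi.send/mpi.recv interface (the actual failing strings aren't given in the thread, so treat this as illustrative rather than as the release test):

---
# Sketch: bounce short strings between rank 0 and rank 1 and check they
# survive pyMPI's pack/unpack intact.
# Run with something like: mpirun -np 2 pyMPI check_strings.py
import mpi

assert mpi.size >= 2, "needs at least two ranks"

if mpi.rank == 0:
    for s in ["", "a", "ab", "abc", "hello"]:
        mpi.send(s, 1)                # short strings exercised the buggy path
        echoed, status = mpi.recv(1)  # recv returns (message, status)
        assert echoed == s, "string mangled: %r != %r" % (echoed, s)
    print "short-string round trip OK"
elif mpi.rank == 1:
    for i in range(5):
        msg, status = mpi.recv(0)
        mpi.send(msg, 0)
---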
From: Thomas H. <tha...@bi...> - 2005-07-28 07:20:48
Hi Pat,

On Wednesday 27 July 2005 23:57, Pat Miller wrote:
> Thanks to Julian, I uncovered a bad bug in pack/unpack in which
> short strings were sometimes truncated with an EOF error. You
> should strongly consider downloading 2.4b2 to replace any 2.4b1
> version you have.

I was wondering if the memory leak problem associated with buffer2 (I submitted a patch for this) and the problem of the extra stdout output are fixed as well?

Best regards & thanks for a great tool,

--
Thomas Hamelryck, Postdoctoral researcher
Bioinformatics center
Institute of Molecular Biology and Physiology
University of Copenhagen
Universitetsparken 15, Bygning 10
2100 Copenhagen, Denmark
---
http://www.binf.ku.dk/users/thamelry/
From: Julian C. <rjc...@cs...> - 2005-07-28 15:56:05
Thomas,

Can you explain what the problem of the extra stdout output is? Pat probably knows, but I'm not sure what this means.

Tks

Julian Cook
FNX Ltd
From: <tha...@bi...> - 2005-07-28 16:15:29
> Thomas
>
> Can you explain what the problem of the extra stdout output is?
> Pat probably knows, but I'm not sure what this means.

Hi Julian,

pyMPI_comm.c, pyMPI_comm_collective.c and pyMPI_comm_misc.c contain PyObject_Print statements that print out a lot of what seems to be debugging info. This generates a huge amount of irrelevant output that clogs log files.

I'm very near Mocapy 1.0, a toolkit that does parallelized learning and inference in Dynamic Bayesian Networks (https://sourceforge.net/projects/mocapy/). It depends on pyMPI, so I'd like to see these things fixed before I claim Mocapy is ready to use. The memory leak problem is particularly problematic, of course, since Mocapy typically runs for several days.

Cheers,

-Thomas
From: Julian C. <rjc...@cs...> - 2005-08-02 00:59:11
Web Services and pyMPI:

Recently (that would be this w/e) I looked at how to set up a web service in Python. The major reason was to make code written in Python available from other platforms, especially Windows and Excel. I installed SOAPpy 0.12 (http://pywebsvcs.sourceforge.net/) on both Windows and Solaris and, suffice it to say, it's very easy to use a Python client against a Python server. SOAPpy handles all the complexity for you. The real complexity was having to use WSDL descriptor files to be able to call Python servers from Excel (using MS SoapClient 3.0). Basically, on Windows you have to use a WSDL and you need to use namespaces or it doesn't work. Effectively, you have to pre-define all functions, inputs and outputs in the WSDL file.

Now to the point: you probably aren't shocked to find out that I was able to get pyMPI to act as a SOAP server. My reasoning was simple: why wait 10 seconds to run something on Windows, when I can run the same thing in 1 second on pyMPI?

Below is a simple example of pyMPI running at np 1, generating and returning a short array of random numbers to a SOAP client. The printout is from the client side.

Issues: the major issue that occurs to me regarding pyMPI is that most parallel scripts are by nature long-running processes. This conflicts with the nature of a server, which is supposed to handle multiple clients and multiple tasks simultaneously. This suggests that the server itself would have to either queue requests or manage the slave processes. That seems to add a lot of complexity.

Anyway, let me know if you think it has any utility for you.

regards

Julian Cook

>>> # Call a pyMPI soap server
>>> import SOAPpy
>>> server = SOAPpy.SOAPProxy("http://brahma:8007/", namespace = "python-os")
>>> seed = 69069
>>> server.config.dumpSOAPIn = 1   # debugging flag
>>> list = server.ArrayOfFloats(seed)
*** Incoming SOAP ******************************************************
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
  SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
  xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
  xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
  xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:xsd="http://www.w3.org/1999/XMLSchema"
>
<SOAP-ENV:Body>
<ArrayOfFloatsResponse SOAP-ENC:root="1">
<Result SOAP-ENC:arrayType="xsd:double[10]" xsi:type="SOAP-ENC:Array">
<item>0.70316610192589002</item>
<item>0.37607240403594255</item>
<item>0.018328414124517245</item>
<item>0.87040174120950486</item>
<item>0.20098065818385713</item>
<item>0.52939641382095703</item>
<item>0.68237564433556841</item>
<item>0.076620094609716638</item>
<item>0.50193267461011581</item>
<item>0.53994633368761047</item>
</Result>
</ArrayOfFloatsResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
************************************************************************
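The server side isn't shown in the thread; under the same SOAPpy 0.12 assumption it would look roughly like this. This is a sketch: the ArrayOfFloats name and port 8007 are taken from the client session above, while the function body and everything else are illustrative guesses.

---
# Sketch of the pyMPI-hosted SOAP server implied by the client session.
# Assumes SOAPpy 0.12; run under pyMPI, e.g.: mpirun -np 1 pyMPI soap_server.py
import random
import SOAPpy

def ArrayOfFloats(seed):
    """Return ten pseudo-random doubles seeded by the caller."""
    random.seed(seed)
    return [random.random() for i in range(10)]

server = SOAPpy.SOAPServer(("brahma", 8007))  # host/port as in the client example
server.registerFunction(ArrayOfFloats)        # dispatched by function name
server.serve_forever()                        # blocks, serving one request at a time
---

Note that serve_forever() blocks and handles requests serially, which is exactly the tension Julian raises above: the pyMPI process becomes one long-running server loop rather than a batch job.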
From: Pat M. <pat...@ll...> - 2005-07-28 21:01:59
> pyMPI_comm.c, pyMPI_comm_collective.c and pyMPI_comm_misc.c
> contain PyObject_Print statements that print out a lot of what

All but one of the PyObject_Print statements are gone (I just deleted the one I missed, which is a debug print only when raising an exception if you illegally slice a communicator).

I haven't verified the leak status, though I did plug some holes in the 2.4b1 release. Is there a mocapy run that illustrates the problem? I could use it in the release test suite then.

Pat

--
Pat Miller | (925) 423-0309 | http://www.llnl.gov/CASC/people/pmiller

Be the change you want to see in the world.
    -- Mahatma Gandhi (1869-1948)
From: <tha...@bi...> - 2005-07-28 21:23:33
> All but one of the PyObject_Print statements are gone (I just
> deleted the one I missed which is a debug print only when raising
> an exception if you illegally slice a communicator).

Great - thanks.

> I haven't verified the leak status though I did plug some holes
> in the 2.4b1 release. Is there a mocapy run that illustrates the
> problem? I could use it in the release test suite then.

I've got a small test script that illustrates the problem - I'll send it tomorrow.

Best regards,

-Thomas
From: <tha...@bi...> - 2005-07-29 11:54:43
> I haven't verified the leak status though I did plug some holes
> in the 2.4b1 release. Is there a mocapy run that illustrates the
> problem? I could use it in the release test suite then.

The problem arises when using scatter with large lists. Long-running jobs crash because the slave nodes run out of memory due to a memory leak. The following script, for example, quickly eats memory:

---
import mpi

while 1:
    a = None
    if mpi.rank == 0:
        a = range(0, 1000000)
    b = mpi.scatter(a)
---

The problem is due to a line in pyMPI_send.c. For long messages, buffer2 needs to be cleaned up by calling pyMPI_message_free. But buffer2 is erroneously set to 0 right before calling pyMPI_message_free, so the memory is not freed. Removing the "buffer2 = 0" line gets rid of the leak. Here's the offending piece of code:

---
if ( buffer1.bytes_in_second_message ) {
  MPICHECK( self->communicator,
            MPI_Send(buffer2,
                     buffer1.bytes_in_second_message,
                     MPI_BYTE, destination, tag,
                     self->communicator) );
  buffer2 = 0; /* ERROR (?): zeroing the pointer makes the free below a no-op */
}
pyMPI_message_free(&buffer1, &buffer2);
---

Best regards,

-Thomas
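A bounded variant of that repro, sketched here rather than taken from the thread, makes the leak visible without looping forever: it assumes pyMPI's mpi module and the Unix-only resource module, and the leak shows up as peak memory growing monotonically across iterations.

---
# Sketch: run a fixed number of scatters and report each rank's peak
# resident set size per iteration; a leak appears as steady growth.
import mpi
import resource

for i in range(20):
    a = None
    if mpi.rank == 0:
        a = range(0, 1000000)
    b = mpi.scatter(a)
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print "rank %d, iteration %d: ru_maxrss = %d" % (mpi.rank, i, rss)
---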