Thread: [Pympi-users] ValueError when attempting to access mpi.irecv() result object
From: <emi...@en...> - 2006-11-03 23:37:28
I have coded up a fairly simple Manager/Worker style MPI application from within pyMPI and have been using it for some time now to run some jobs.

At the core of the Manager process I have, in essence:

    request = mpi.irecv()
    while ( there is work to do )
        if ( there is an idle worker )
            mpi.send( job to idle worker )
        if request:
            process worker result
            if ( more results expected )
                request = mpi.irecv()

This is certainly a simplified version of the code, but the algorithm and the calls to mpi.irecv are included to show where I'm doing this.

This has run fine without issue for several weeks now. Recently the error below has started to crop up. My only guess is that my job/result sizes are much larger than they used to be, which might be increasing the likelihood that the issue will arise (it is not 100% reproducible, but I can get it to crash in roughly 1 out of 5 application runs).

The error occurs on the line

    if request:

ValueError: Fatal internal unpickling error

I am concerned that I am not using the mpi module packaged with pyMPI correctly; should I be using a different algorithm for dispatching the jobs to worker processes? I'm just not sure what is causing this, since I have made no changes to the Manager/Worker code module I developed and the only difference is the larger job/result messages (around 500+ characters now).

Eamon Millman
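A minimal, runnable sketch of the loop Eamon describes, written against the pyMPI calls shown above (mpi.irecv, mpi.send, mpi.recv, mpi.rank, mpi.size). It is only an illustration of the pattern: the request.message attribute used to read the completed result, the (message, status) tuple returned by mpi.recv, and helper names such as do_work are assumptions, not taken from the thread.

    import mpi

    def do_work(job):
        # Placeholder for the real worker computation.
        return 'result for %s' % job

    def manager(jobs):
        # Rank 0: hand out jobs, poll a non-blocking receive for results.
        results = []
        idle = list(range(1, mpi.size))    # ranks 1..size-1 act as workers
        pending = 0
        request = mpi.irecv()              # non-blocking receive from any source
        while jobs or pending:
            if jobs and idle:
                mpi.send(jobs.pop(), idle.pop())   # dispatch a job to an idle worker
                pending += 1
            if request:                    # true once a message has arrived and been unpickled
                source, result = request.message   # .message is assumed to hold the payload
                results.append(result)
                idle.append(source)
                pending -= 1
                if jobs or pending:
                    request = mpi.irecv()  # post the next receive
        for w in range(1, mpi.size):
            mpi.send(None, w)              # None tells each worker to shut down
        return results

    def worker():
        # Ranks 1..size-1: receive jobs until told to stop, send results to rank 0.
        while True:
            job, status = mpi.recv()       # assumed to return (message, status)
            if job is None:
                break
            mpi.send((mpi.rank, do_work(job)), 0)

    if mpi.rank == 0:
        all_results = manager(['job-%d' % i for i in range(20)])
    else:
        worker()

This needs at least two ranks to make progress, e.g. something like mpirun -np 4 pyMPI script.py (the pyMPI interpreter name and script name depend on the local installation).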
From: Pat M. <pat...@gm...> - 2006-11-04 01:41:31
This one sounds pretty serious... The request object is really in need of some overhaul (it remained basically untouched during the great 2.0 rewrite in 2003). I will see if I can recreate. If not, I can send you a patch that will help shed light on what is happening.

Pat
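A rough stand-alone stress test in the same vein, which might help with recreating the failure: it simply pushes many larger-than-usual pickled strings through the same polled-irecv path the Manager loop uses. As above, request.message is an assumed attribute, and the payload size and round count are arbitrary.

    import mpi

    ROUNDS = 1000
    PAYLOAD = 'x' * 600        # comparable to the ~500+ character results described above

    if mpi.rank == 0:
        received = 0
        request = mpi.irecv()              # same non-blocking receive pattern as the Manager
        while received < ROUNDS:
            if request:                    # the test that raises ValueError in the report
                assert request.message == PAYLOAD
                received += 1
                if received < ROUNDS:
                    request = mpi.irecv()
    elif mpi.rank == 1:
        for i in range(ROUNDS):
            mpi.send(PAYLOAD, 0)

Run it with two ranks (e.g. mpirun -np 2 pyMPI stress_irecv.py, names assumed); if the unpickling error is load-dependent, it may take several runs to show up.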