c-mpi-discuss Mailing List for C-MPI
Status: Pre-Alpha
Brought to you by: jmwozniak
Archive: February 2011 (2 messages)

From: Justin M W. <wo...@mc...> - 2011-02-12 15:06:59
Hello Xi,

I will have to get back on the SiCortex to take a look at this; I will get
back to you early this week. I have run this on a SiCortex, but it has
been a while...

Justin

-- 
Justin M Wozniak

From: Xi D. <xd...@ii...> - 2011-02-11 22:35:05
Hi all,

I followed the quick start on the home page (http://c-mpi.sourceforge.net/).
Everything was fine during the setup and configure steps with
mpich2-1.2.1p1 (my machine is Sicortex072), but I ran into problems during
the make step.

At first it produced this error:

    /home/xduan/mpich2-1.2.1p1-install/bin/mpdroot: open failed for root's mpd conf file
    mpdlistjobs (__init__ 1208): forked process failed; status=255

I then followed the discussion thread at
http://lists.mcs.anl.gov/pipermail/mpich-discuss/2010-March/006782.html
and put an mpd.conf file in both /etc/ and /usr/etc/. I did not set
MPD_USE_ROOT_MPD=1, since I am using a ring started by a regular user. But
then test/mpirpc/test-ping.out.failed reported:

    srun: error: Unable to allocate resources: No partition specified or system default partition

So I realized I needed to run multiple instances instead of just a single
process. I made the ring and used "srun" to start multiple processes on
different machines. However, test/mpirpc/test-ping.out.failed still shows
an error; here is the output:

    hello
    hello
    Using single debug file
    [0] debug_rank: 0
    Fatal error in MPI_Isend: Invalid rank, error stack:
    MPI_Isend(146): MPI_Isend(buf=0x1200dc3c8, count=1, MPI_INT, dest=1, tag=0, MPI_COMM_WORLD, request=0x120115c30) failed
    MPI_Isend(96).: Invalid rank has value 1 but must be nonnegative and less than 1
    Using single debug file
    [0] debug_rank: 0
    Fatal error in MPI_Isend: Invalid rank, error stack:
    MPI_Isend(146): MPI_Isend(buf=0x1200dc3c8, count=1, MPI_INT, dest=1, tag=0, MPI_COMM_WORLD, request=0x120115c30) failed
    MPI_Isend(96).: Invalid rank has value 1 but must be nonnegative and less than 1
    srun: error: task 0-1: Exited with exit code 1
    srun: error: task 1: Exited with exit code 1

I am now stuck on this problem; can anyone offer any ideas? BTW, I also
tried a root-started ring with MPD_USE_ROOT_MPD=1 in both locations, but
it fails the same way.

Looking forward to your responses.

Thanks,
Xi
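
On the mpdroot error above: with MPICH2's mpd (as shipped with
mpich2-1.2.1p1), a ring started by a regular user reads its secret word
from ~/.mpd.conf, which must be readable only by that user, while a
root-started ring reads /etc/mpd.conf. A minimal sketch of the per-user
setup, with a placeholder secret word:

    echo "MPD_SECRETWORD=<pick-a-secret>" > ~/.mpd.conf   # per-user mpd configuration
    chmod 600 ~/.mpd.conf    # mpd refuses to run if the file is readable by others
    mpd &                    # start a one-node ring
    mpdtrace                 # list the hosts currently in the ring

If mpdtrace lists the expected hosts, the user ring is up and mpdlistjobs
should be able to reach it without going through mpdroot.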
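
Regarding the "Invalid rank" trace: MPI is rejecting dest=1 because
MPI_COMM_WORLD has size 1, and both tasks print "[0] debug_rank: 0", which
suggests each srun task started as its own single-process MPI job instead
of as one two-rank job. Below is a minimal sketch of a two-rank ping that
fails the same way under a singleton launch; it is not the actual
test-ping source, just an illustration with a size check that makes the
cause explicit:

    /* ping.c -- illustrative only, not the C-MPI test-ping source */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* "Invalid rank has value 1 but must be nonnegative and less
           than 1" means size == 1 here: dest=1 does not exist, so the
           run came up as independent 1-process jobs. */
        if (size < 2) {
            fprintf(stderr, "need at least 2 ranks, got %d\n", size);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int token = 42;
        if (rank == 0) {
            MPI_Request req;
            MPI_Isend(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", token);
        }

        MPI_Finalize();
        return 0;
    }

If this also reports size 1 when launched with "srun -n 2", that would
point at the launcher/process-manager integration (SLURM vs. the mpd ring)
rather than at the test itself.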