From: Bausch, A. W. <Bau...@mo...> - 2002-08-13 15:01:36
OK, I know MPI can work while openMosix is running, but I'm not sure I'm getting the benefits of the openMosix File System (oMFS) when running MPI. I'm using MPICH and the openMosix 2.4.18-3 release (with DFSA enabled) on a three-node cluster. I'm running a threaded version of the Smith-Waterman algorithm (a more CPU-intensive relative of BLAST), which uses one node as the head node and distributes the work to the other two nodes in my cluster. I have the mfs mount set up so that every node can see the whole file system of every other node through /mfs/#

Here's what's happening: I start things from node 3, and the load on node 3 shoots up. The load on 3 stays up around .75 for a few seconds while the loads on 1 and 2 stay between 0 and .25. What I think is happening here is that the MPI program is copying parts of the database across the cluster. After that, the load on 3 goes down and the loads on 1 and 2 shoot up, which is what's supposed to happen. Then the results are returned to node 3.

The reason I think oMFS isn't being taken advantage of is that the run takes about the same amount of time whether I enter the path to the database as /usr/local/blast/database or as /mfs/here/local/blast/database. Or maybe DFSA is enabled in both cases, and that's why I'm not seeing a speedup?

This is more out of curiosity than anything else; the searches take a lot longer than the data transfer, so the speedup would be marginal at best. Anyone have any thoughts?

Andy B.
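One way to separate the I/O question from the compute question is to time a raw sequential read of the database through each path, instead of timing the whole Smith-Waterman run (where the search dominates, as noted above). A minimal sketch, assuming standard `dd` and a POSIX shell; the paths are just illustrations and should be replaced with the local and /mfs paths on your cluster:

```shell
#!/bin/sh
# compare_read: sequentially read each file given as an argument and
# report dd's throughput line, so the two paths can be compared directly.
compare_read() {
    for p in "$@"; do
        echo "reading $p"
        # dd prints its transfer statistics on stderr; keep the last line
        dd if="$p" of=/dev/null bs=1048576 2>&1 | tail -n 1
    done
}

# Example invocation (hypothetical paths from the post -- substitute your own):
#   compare_read /usr/local/blast/database /mfs/here/local/blast/database
```

If both reads finish in about the same time, the equal overall run times say nothing about oMFS one way or the other; if the /mfs read is much slower, the database is probably being pulled over the network on each access rather than served locally via DFSA.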