I have built foam-extend 3.2 on 4 different Linux clusters with the system MPI, and I have been having problems running in parallel. The cases use simpleFoam. I can run in serial on these systems, but when I move to running in parallel the behaviour ranges from the jobs not starting at all to exiting in the middle of the runs.
I have compiled on these machines with the following compilers and system MPIs:
Penguin Beowulf cluster with GCC 4.5.1 and system MPICH2
SGI ICE X cluster with GCC and SGI MPT as the system MPI
SGI ICE X cluster with Intel 16 and SGI MPT as the system MPI
IBM iDataPlex with the Intel compiler and IBM PE as the system MPI
On the Penguin Beowulf cluster, jobs start, run for a few hundred to a thousand or so iterations, and then exit with MPI_Allreduce errors. I am trying to run for about 10000 iterations on all of my cases. The solution looks good at the point of exit in terms of residuals, flow field, and forces.
On the SGI with GCC and SGI MPT, the cases start and exit after a few hundred iterations. The solution again looks good at the point of exit in residuals, flow field, and forces.
On the SGI with Intel 16 and SGI MPT, a parallel run will not get past the point of reading the turbulence model information.
On the IBM with Intel and IBM PE, it starts, runs fewer than a hundred iterations, and exits.
When compiling on the different machines, I have set WM_MPLIB to MPICH on the Penguin cluster, to SGI MPT on the SGI systems, and to MPI on the IBM. I have set WM_COMPILER to Gcc or Icc depending on which compiler I am using. I have modified settings.sh and settings.csh (my account uses tcsh on these systems) to account for the system locations of the MPIs, and I have also modified the appropriate mplibMPI{x} files in wmake/rules to pick up the system information.

I have three test cases, ranging from 2.2 million cells to 5.3 million cells to 52 million cells. I have also built OpenFOAM-1.6-ext with the same compiler and MPI settings on the same machines, and I have had no problems running it in parallel.
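In case it helps, this is roughly how I point the build at the system MPI on the Penguin cluster before sourcing the foam-extend environment (tcsh). The MPI install path and the foam-extend location below are placeholders for this post rather than the actual paths on the machine; the same idea applies on the other clusters with their MPIs:

Code:
# Compiler and MPI selection, set before sourcing the foam-extend environment
setenv WM_COMPILER Gcc
setenv WM_MPLIB MPICH

# System MPICH2 install (placeholder path); the wmake mplibMPICH rules
# use MPI_ARCH_PATH for the MPI include and library paths
setenv MPI_ARCH_PATH /opt/mpich2
setenv PATH ${MPI_ARCH_PATH}/bin:${PATH}
setenv LD_LIBRARY_PATH ${MPI_ARCH_PATH}/lib:${LD_LIBRARY_PATH}

source $HOME/foam/foam-extend-3.2/etc/cshrc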
Dory