MPI - Dept. of Physics - Carnegie Mellon University

Physics Department Computing Facility:  MPI

The MPI libraries are installed on the Linux computers in /usr/user/mpich. The implementation is MPICH; user documentation can be found at the Argonne National Laboratory web site. To use it, set your path to include /usr/user/mpich/{g77,pgf90}/bin. Use mpif77 or mpif90 to compile Fortran programs and mpicc to compile C programs. To execute a program on # processors (where # is an integer), enter
mpirun -np # program < input > output &
Your program must be in your path on every computer it will run on (e.g. in ~/bin on the Andrew system). Due to license restrictions, mpif90 is available only on euler.phys, though the resulting executable can be run on any computer.

Configuration files for MPI are set up so that jobs started on stokes.phys or yang.phys can use up to 10 processors, jobs started on fermat.phys or euler.phys can use up to 6 processors, and jobs started on brillouin.phys can use up to 2 processors. MPI communication carries significant overhead, so it is usually more effective to write your program without MPI and simply run multiple copies of the single-processor version. Alternatively, you may wish to use OpenMP shared-memory processing. Do not run parallel programs without careful performance tuning and benchmark results demonstrating that high parallel efficiency is achieved.

As with any background job, the program must be "niced" to level 15. Unfortunately, nicing does not appear to be supported by mpirun. If the program is written in C, the option "-mpinice 15" can be given to MPI_Init. Otherwise, there seems to be no alternative to logging into each computer the program is running on and renicing each process. Please study our CPU usage guidelines.