Message Passing Interface
The Message Passing Interface (MPI) is a standardized interface (API) for message passing between the processes of a parallel program. It is the de facto standard for communication among the nodes running a parallel program on a distributed-memory system. MPI implementations consist of a library of routines that can be called from Fortran, C, C++ and Ada programs. The advantage of MPI over older message-passing libraries is that it is both portable (MPI has been implemented for almost every distributed-memory architecture) and fast (each implementation is optimized for the hardware on which it runs). MPI also forms the foundation of the message-passing support in Microsoft's cluster server products. It is often compared with PVM, with which it was at one stage merged to form PVMMPI.
Example program
Here is "Hello World" in MPI. Actually we send a "hello" message to each processor, manipulate it trivially, send the results back to the main processor, and print the messages out.
/* test of MPI */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
  char idstr[32];
  char buff[128];
  int numprocs;
  int myid;
  int i;
  MPI_Status stat;

  MPI_Init(&argc, &argv);                    /* initialize the MPI environment */
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);  /* how many processes are running? */
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);      /* which one am I? */

  if (myid == 0) {
    /* The main process greets every other process ... */
    printf("We have %d processors\n", numprocs);
    for (i = 1; i < numprocs; i++) {
      sprintf(buff, "Hello %d! ", i);
      MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    /* ... then collects and prints their replies. */
    for (i = 1; i < numprocs; i++) {
      MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  } else {
    /* Every other process receives its greeting, appends its
       identity, and sends the message back to process 0. */
    MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, "Processor %d ", myid);
    strcat(buff, idstr);
    strcat(buff, "reporting for duty\n");
    MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }

  MPI_Finalize();                            /* shut down MPI */
  return 0;
}
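The same round trip can also be expressed with MPI's collective operations instead of explicit send/receive loops. The following is a minimal sketch, not part of the original example, using the standard MPI_Gather routine; the fixed 64-byte message slots are an illustrative assumption.

/* Sketch: the report-gathering pattern written with a collective
   operation.  MPI_Gather is part of the MPI standard; the fixed-size
   message slots are an assumption made for this illustration. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SLOT 64   /* bytes reserved per process's message (assumed size) */

int main(int argc, char *argv[])
{
  char msg[SLOT];
  char *all = NULL;
  int numprocs, myid, i;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  /* Every process, including process 0, prepares its own report. */
  sprintf(msg, "Processor %d reporting for duty", myid);

  /* Process 0 provides a buffer with one slot per process. */
  if (myid == 0)
    all = malloc((size_t)numprocs * SLOT);

  /* Each process contributes one slot; process 0 receives them all. */
  MPI_Gather(msg, SLOT, MPI_CHAR, all, SLOT, MPI_CHAR, 0, MPI_COMM_WORLD);

  if (myid == 0) {
    for (i = 0; i < numprocs; i++)
      printf("%s\n", &all[i * SLOT]);
    free(all);
  }

  MPI_Finalize();
  return 0;
}

With a typical implementation such as MPICH or Open MPI, either program would be compiled with the mpicc wrapper and launched with mpirun or mpiexec, specifying the number of processes to start.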
See also
- Open MPI
- LAM/MPI
- OpenMP
- MPICH
- Unified Parallel C
- Occam programming language
- Linda (coordination language)
- Parallel Virtual Machine
- Calculus of Communicating Systems
- Calculus of Broadcasting Systems
- Actor model
External links
- MPI specification
- MPI DMOZ category
- Open MPI web site
- LAM/MPI web site
- MPICH
- SCore MPI
- Scali MPI
- HP-MPI
- MVAPICH: MPI over InfiniBand
- Parawiki page for MPI
- Global Arrays
- PVM/MPI Users' Group Meeting (2006 edition)
- MPI Samples
- Manage MPI jobs with Moab
- MPICH over Myrinet (GM, classic driver)
- MPICH over Myrinet (MX, next-gen driver)
- Parallel Programming with MatlabMPI
- MPI Tutorial
- Parallel Programming with MPI
References
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.