Message Passing Interface

MPI is not sanctioned by any major standards body; nevertheless, it has become a [[de facto standard|''de facto'' standard]] for [[communication]] among processes that model a [[parallel programming|parallel program]] running on a [[distributed memory]] system. Distributed-memory supercomputers, such as computer clusters, often run such programs.
 
The principal MPI-1 model has no [[shared memory]] concept, and MPI-2 has only a limited [[distributed shared memory]] concept. Nonetheless, MPI programs are regularly run on shared memory computers, and both [[MPICH]] and [[Open MPI]] can use shared memory for message transfer if it is available.<ref>[http://knem.gforge.inria.fr/ KNEM: High-Performance Intra-Node MPI Communication] "MPICH2 (since release 1.1.1) uses KNEM in the DMA LMT to improve large message performance within a single node. Open MPI also includes KNEM support in its SM BTL component since release 1.5. Additionally, NetPIPE includes a KNEM backend since version 3.7.2."</ref><ref>{{cite web|url=https://www.open-mpi.org/faq/?category=sm|title=FAQ: Tuning the run-time characteristics of MPI sm communications|website=www.open-mpi.org}}</ref> Designing programs around the MPI model (in contrast to explicit [[Shared memory (interprocess communication)|shared memory]] models) has advantages when running on [[Non-Uniform Memory Access|NUMA]] architectures, since MPI encourages [[locality of reference|memory locality]]. Explicit shared memory programming was introduced in MPI-3.<ref>[https://software.intel.com/en-us/articles/an-introduction-to-mpi-3-shared-memory-programming?language=en An Introduction to MPI-3 Shared Memory Programming] "The MPI-3 standard introduces another approach to hybrid programming that uses the new MPI Shared Memory (SHM) model"</ref><ref>[http://insidehpc.com/2016/01/shared-memory-mpi-3-0/ Shared Memory and MPI 3.0] "Various benchmarks can be run to determine which method is best for a particular application, whether using MPI + OpenMP or the MPI SHM extensions. On a fairly simple test case, speedups over a base version that used point to point communication were up to 5X, depending on the message."</ref><ref>[http://www.caam.rice.edu/~mk51/presentations/SIAMPP2016_4.pdf Using MPI-3 Shared Memory As a Multicore Programming System] (PDF presentation slides)</ref>
 
Although MPI belongs in layers 5 and higher of the [[OSI Reference Model]], implementations may cover most layers, with [[Internet socket|sockets]] and [[Transmission Control Protocol]] (TCP) typically used at the transport layer.