Message Passing Interface

== Overview ==
MPI is a [[communication protocol]] for programming<ref>{{cite book |first=Frank |last=Nielsen |title=Introduction to HPC with MPI for Data Science |year=2016 |publisher=Springer |isbn=978-3-319-21903-5 |pages=195–211 |chapter=2. Introduction to MPI: The Message-Passing Interface |url=https://franknielsen.github.io/HPC4DS/index.html |chapter-url=https://www.researchgate.net/publication/314626214 }}</ref> [[parallel computers]]. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation."<ref>{{harvnb |Gropp |Lusk |Skjellum |1996 |p=3 }}</ref> MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in [[high-performance computing]] today.<ref>{{cite book |pages=105 |first1=Sayantan |last1=Sur |first2=Matthew J. |last2=Koop |first3=Dhabaleswar K. |last3=Panda |title=Proceedings of the 2006 ACM/IEEE conference on Supercomputing - SC '06 |chapter=MPI and communication---High-performance and scalable MPI over InfiniBand with reduced memory usage: An in-depth performance analysis |date=11 November 2006 |publisher=ACM |doi=10.1145/1188455.1188565 |isbn=978-0769527000 |s2cid=818662}}</ref>
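The two communication styles mentioned above can be sketched in a minimal C program (an illustration only, not taken from the cited sources; it assumes an installed MPI implementation such as MPICH or Open MPI, and a launch with at least two processes):

```c
/* Minimal sketch of MPI point-to-point and collective communication.
 * Compile with mpicc and run with, e.g.:  mpirun -np 2 ./a.out        */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank     */

    if (rank == 0) {
        value = 42;
        /* point-to-point: send one int to rank 1 with message tag 0   */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    /* collective: rank 0 broadcasts its value to every other process  */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();                          /* shut down the runtime   */
    return 0;
}
```

`MPI_Send`/`MPI_Recv` illustrate point-to-point transfer between two named ranks, while `MPI_Bcast` illustrates a collective operation in which every process in the communicator participates.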
 
MPI is not sanctioned by any major standards body; nevertheless, it has become a [[de facto standard|''de facto'' standard]] for [[communication]] among processes that model a [[parallel programming|parallel program]] running on a [[distributed memory]] system. Actual distributed-memory supercomputers, such as [[computer cluster]]s, often run such programs.