==History==
The message passing interface effort began in the summer of 1991, when a small group of researchers started discussions at a mountain retreat in Austria. Out of that discussion came a Workshop on Standards for Message Passing in a Distributed Memory Environment, held on April 29–30, 1992 in [[Williamsburg, Virginia]].<ref>{{cite report |id= ORNL/TM-12147 |osti= 10170156 |author= Walker DW |date= August 1992 |title= Standards for message-passing in a distributed memory environment |url= https://technicalreports.ornl.gov/1992/3445603661204.pdf |institution= Oak Ridge National Lab., TN (United States), Center for Research on Parallel Computing (CRPC) |pages= 25 |access-date= 2019-08-18 }}</ref> Attendees at Williamsburg discussed the basic features essential to a standard message-passing interface and established a working group to continue the standardization process. [[Jack Dongarra]], [[Tony Hey]], and David W. Walker put forward a preliminary draft proposal, "MPI1", in November 1992. That same month, the MPI working group met in Minneapolis and decided to place the standardization process on a more formal footing. The working group then met every six weeks throughout the first nine months of 1993. The draft MPI standard was presented at the Supercomputing '93 conference in November 1993.<ref>{{cite conference |title= MPI: A Message Passing Interface |author= The MPI Forum, CORPORATE |date= November 15–19, 1993 |conference= Supercomputing '93 |conference-url= http://supercomputing.org/ |book-title= Proceedings of the 1993 ACM/IEEE conference on Supercomputing |publisher= ACM |___location= Portland, Oregon, USA |pages= 878–883 |isbn= 0-8186-4340-4 |doi= 10.1145/169627.169855 }}</ref> After a period of public comment, which resulted in some changes, version 1.0 of MPI was released in June 1994.
These meetings and the email discussion together constituted the MPI Forum, membership of which has been open to all members of the [[high-performance computing]] community.
The MPI effort involved about 80 people from 40 organizations, mainly in the United States and Europe. Most of the major vendors of [[concurrent computer]]s were involved in the MPI effort, collaborating with researchers from universities, government laboratories, and [[Private industry|industry]].
* <code>disp</code> contains the byte displacement of each block,
* <code>type</code> contains the types of the elements in each block,
* <code>newtype</code> (an output) contains the new derived type created by this function.
The <code>disp</code> (displacements) array is needed for [[data structure alignment]], since the compiler may pad the variables in a class or data structure. The safest way to find the distance between different fields is to obtain their addresses in memory. This is done with <code>MPI_Get_address</code>, which normally yields the same value as C's <code>&</code> operator, though that might not hold when dealing with [[memory segmentation]].<ref>{{cite web|url=http://www.mpich.org/static/docs/v3.1/www3/MPI_Get_address.html|title=MPI_Get_address|website=www.mpich.org}}</ref>
* [[Open MPI]] (not to be confused with [[OpenMP]]) was formed by the merger of FT-MPI, LA-MPI, [[LAM/MPI]], and PACX-MPI, and is found in many [[TOP500]] [[supercomputer]]s.
Many other efforts are derivatives of MPICH, LAM, and other works, including, but not limited to, commercial implementations from hardware and software vendors.
While the specifications mandate a C and Fortran interface, the language used to implement MPI is not constrained to match the language or languages it seeks to support at runtime. Most implementations combine C, C++ and assembly language, and target C, C++, and Fortran programmers. Bindings are available for many other languages, including Perl, Python, R, Ruby, Java, and [[Control Language|CL]] (see [[#Language bindings]]).
==Language bindings==
[[Language binding|Bindings]] are libraries that extend MPI support to other languages by wrapping an existing MPI implementation such as MPICH or Open MPI.
===Common Language Infrastructure===
* Firuziaan, Mohammad; Nommensen, O. (2002) ''Parallel Processing via MPI & OpenMP'', Linux Enterprise, 10/2002
* Vanneschi, Marco (1999) ''Parallel paradigms for scientific computing'' In Proceedings of the European School on Computational Chemistry (1999, Perugia, Italy), number 75 in ''[https://books.google.com/books?&id=zMqVdFgVnrgC Lecture Notes in Chemistry]'', pages 170–183. Springer, 2000
* Bala, Bruck, Cypher, Elustondo, A Ho, CT Ho, Kipnis, Snir (1995) "[https://ieeexplore.ieee.org/abstract/document/342126/ A portable and tunable collective communication library for scalable parallel computers]" in ''IEEE Transactions on Parallel and Distributed Systems'', vol. 6, no. 2, pp.
{{div col end}}
* {{Official website|https://www.mpi-forum.org/}}
*[https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf Official MPI-3.1 standard] ([https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/mpi31-report.htm unofficial HTML version])
* [http://polaris.cs.uiuc.edu/~padua/cs320/mpi/tutorial.pdf Tutorial on MPI: The Message-Passing Interface]
* [http://moss.csc.ncsu.edu/~mueller/cluster/mpi.guide.pdf A User's Guide to MPI]