{{Anchor|VERSIONS}}
At present, the standard has several versions: version 1.3 (commonly abbreviated ''MPI-1''), which emphasizes message passing and has a static runtime environment, MPI-2.2 (MPI-2), which includes new features such as parallel I/O, dynamic process management and remote memory operations,<ref name="Gropp99adv-pp4-5">{{harvnb|Gropp|Lusk|Skjellum|1999b|pp=4–5}}</ref> and MPI-3.1 (MPI-3), which includes extensions to the collective operations with non-blocking versions and extensions to the one-sided operations.<ref name="MPI_3.1">[http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf MPI: A Message-Passing Interface Standard, Version 3.1], Message Passing Interface Forum, June 4, 2015. Retrieved 2015-06-16.</ref>
MPI-2's LIS specifies over 500 functions and provides language bindings for ISO [[C (programming language)|C]], ISO [[C++]], and [[Fortran 90]]. Object interoperability was also added to allow easier mixed-language message-passing programming. A side-effect of standardizing MPI-2, completed in 1996, was clarifying the MPI-1 standard, creating MPI-1.2.
===I/O===
The parallel I/O feature is sometimes called MPI-IO,<ref name="Gropp99adv-pp5-6">{{harvnb |Gropp |Lusk |Skjellum |1999b |pp=5–6 }}</ref> and refers to a set of functions designed to abstract I/O management on distributed systems to MPI, and to allow files to be easily accessed in a patterned way using the existing derived datatype functionality.
The little research that has been done on this feature indicates that it may not be trivial to get high performance gains by using MPI-IO. For example, an implementation of sparse [[Matrix multiplication|matrix-vector multiplications]] using the MPI I/O library shows only minor performance gains, and even these results are inconclusive.
It was not until the idea of collective I/O<ref>{{cite web|title=Data Sieving and Collective I/O in ROMIO|url=http://www.mcs.anl.gov/~thakur/papers/romio-coll.pdf|publisher=IEEE|date=Feb 1999}}</ref> was implemented into MPI-IO that MPI-IO started to reach widespread adoption. Collective I/O substantially boosts applications' I/O bandwidth by having processes collectively transform many small, noncontiguous I/O operations into large, contiguous ones, thereby reducing [[Record locking|locking]] and disk-seek overhead. Owing to these performance benefits, MPI-IO also became the underlying I/O layer for many state-of-the-art I/O libraries, such as [[HDF5]] and [[NetCDF|Parallel NetCDF]]. Its popularity also triggered research on collective I/O optimizations, such as layout-aware I/O<ref>{{cite book|chapter=LACIO: A New Collective I/O Strategy for Parallel I/O Systems|publisher=IEEE|date=Sep 2011|doi=10.1109/IPDPS.2011.79|isbn=978-1-61284-372-8|citeseerx=10.1.1.699.8972|title=2011 IEEE International Parallel & Distributed Processing Symposium|last1=Chen|first1=Yong|last2=Sun|first2=Xian-He|last3=Thakur|first3=Rajeev|last4=Roth|first4=Philip C.|last5=Gropp|first5=William D.|pages=794–804|s2cid=7110094}}</ref> and cross-file aggregation.<ref>{{cite journal|author1=Teng Wang|author2=Kevin Vasko|author3=Zhuo Liu|author4=Hui Chen|author5=Weikuan Yu|title=Enhance parallel input/output with cross-bundle aggregation|journal=The International Journal of High Performance Computing Applications|volume=30|issue=2|pages=241–256|date=2016|doi=10.1177/1094342015618017|s2cid=12067366}}</ref><ref>{{cite book|chapter=BPAR: A Bundle-Based Parallel Aggregation Framework for Decoupled I/O Execution|publisher=IEEE|date=Nov 2014|doi=10.1109/DISCS.2014.6|isbn=978-1-4673-6750-9|title=2014 International Workshop on Data Intensive Scalable Computing Systems|last1=Wang|first1=Teng|last2=Vasko|first2=Kevin|last3=Liu|first3=Zhuo|last4=Chen|first4=Hui|last5=Yu|first5=Weikuan|pages=25–32|s2cid=2402391}}</ref>
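The coalescing at the heart of collective I/O can be sketched in plain Python, independent of any MPI library. This is an illustration of the idea only; the function names are invented for the sketch and are not part of any MPI API. Each rank's pending writes are modeled as (offset, bytes) pairs:

```python
# Sketch of the two-phase idea behind collective I/O: many small,
# noncontiguous per-rank pieces are gathered, sorted by file offset,
# and coalesced into a few large contiguous writes.
# Names are illustrative, not an MPI API.

def independent_writes(per_rank_pieces):
    # Without collective I/O: one small write call per piece.
    return [piece for pieces in per_rank_pieces for piece in pieces]

def collective_writes(per_rank_pieces):
    # With collective I/O: gather all pieces, sort by offset, and merge
    # adjacent ones, so the file system sees few large contiguous writes.
    pieces = sorted(p for pieces in per_rank_pieces for p in pieces)
    merged = []
    for offset, data in pieces:
        if merged and merged[-1][0] + len(merged[-1][1]) == offset:
            merged[-1] = (merged[-1][0], merged[-1][1] + data)
        else:
            merged.append((offset, data))
    return merged

# Two ranks whose one-byte pieces interleave in the file:
ranks = [[(0, b"a"), (2, b"c")], [(1, b"b"), (3, b"d")]]
```

With this input, `independent_writes(ranks)` issues four one-byte operations, while `collective_writes(ranks)` coalesces them into a single four-byte write at offset 0, which is the source of the locking and seek-overhead reduction described above.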
===Common Language Infrastructure===
The two managed [[Common Language Infrastructure]] [[.NET Framework|.NET]] implementations are Pure Mpi.NET<ref>[http://www.purempi.net Pure Mpi.NET]</ref> and MPI.NET,<ref>{{cite web|url=http://www.osl.iu.edu/research/mpi.net/|title=MPI.NET: High-Performance C# Library for Message Passing|website=www.osl.iu.edu}}</ref> a research effort at [[Indiana University]] licensed under a [[BSD]]-style license. It is compatible with [[Mono (software)|Mono]], and can make full use of underlying low-latency MPI network fabrics.
===Java===
===Python===
Actively maintained MPI wrappers for [[Python (programming language)|Python]] include mpi4py.
Discontinued developments include pyMPI, pypar,<ref>{{cite web|url=https://code.google.com/p/pypar/|title=Google Code Archive - Long-term storage for Google Code Project Hosting.|website=code.google.com}}</ref> and MYMPI.
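Python wrappers such as mpi4py expose MPI's point-to-point style through a communicator's send and recv methods. A standard-library-only toy can mimic that shape without requiring an MPI installation; here threads stand in for ranks, and every name (`ToyComm`, `worker`) is illustrative rather than part of mpi4py:

```python
import threading
import queue

class ToyComm:
    """Toy stand-in for an MPI communicator: one inbox queue per rank.

    Illustrative only. Real MPI wrappers drive separate processes
    through an underlying MPI library; this sketch only mirrors the
    blocking send/recv call shape."""

    def __init__(self, size):
        self.size = size
        self._inboxes = [queue.Queue() for _ in range(size)]

    def send(self, obj, dest):
        # Deliver a Python object to the destination rank's inbox.
        self._inboxes[dest].put(obj)

    def recv(self, rank):
        # Block until a message arrives for this rank.
        return self._inboxes[rank].get()

def worker(comm, rank, results):
    # Rank 0 greets every other rank; the others wait for the message.
    if rank == 0:
        for dest in range(1, comm.size):
            comm.send(f"hello from rank 0 of {comm.size}", dest)
    else:
        results[rank] = comm.recv(rank)

comm = ToyComm(3)
results = {}
threads = [threading.Thread(target=worker, args=(comm, r, results))
           for r in range(comm.size)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, each non-zero "rank" holds the greeting in `results`, mirroring the rank-0-broadcasts-by-loop pattern common in small MPI programs.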
===R===
# MPI-2 implementations include I/O and dynamic process management, and the size of the middleware is substantially larger. Most sites that use batch scheduling systems cannot support dynamic process management. MPI-2's parallel I/O is well accepted.{{Citation needed|date=January 2011}}
# Many MPI-1.2 programs were developed before MPI-2. Portability concerns initially slowed adoption, although wider support has lessened this.
# Many MPI-1.2 applications use only a subset of that standard and have no real need for MPI-2 functionality.
==Future==
Some aspects of MPI's future appear solid; others less so. The MPI Forum reconvened in 2007 to clarify some MPI-2 issues and explore developments for a possible MPI-3, which resulted in versions MPI-3.0 (September 2012)<ref>https://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf {{Bare URL PDF|date=July 2025}}</ref> and MPI-3.1 (June 2015).<ref>https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf {{Bare URL PDF|date=July 2025}}</ref> The development continued with the approval of MPI-4.0 on June 9, 2021,<ref>https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf {{Bare URL PDF|date=July 2025}}</ref> and most recently, MPI-4.1 was approved on November 2, 2023.<ref>https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report.pdf {{Bare URL PDF|date=July 2025}}</ref>
Architectures are changing, with greater internal concurrency ([[Multi-core processor|multi-core]]), better fine-grained concurrency control (threading, affinity), and more levels of [[memory hierarchy]]. [[Multithreading (computer architecture)|Multithreaded]] programs can take advantage of these developments more easily than single-threaded applications. This has already yielded separate, complementary standards for [[symmetric multiprocessing]], namely [[OpenMP]]. MPI-2 defines how standard-conforming implementations should deal with multithreaded issues, but does not require that implementations be multithreaded, or even thread-safe. MPI-3 adds the ability to use shared-memory parallelism within a node. Implementations of MPI such as Adaptive MPI, Hybrid MPI, Fine-Grained MPI, and MPC offer extensions to the MPI standard that address these challenges.
|title=A High-Performance, Portable Implementation of the MPI Message Passing Interface
|journal=Parallel Computing |volume=22 |issue=6 |pages=789–828 |doi=10.1016/0167-8191(96)00024-5 }}
* Pacheco, Peter S. (1997) ''[https://books.google.com/books?&id=tCVkM1z2aOoC Parallel Programming with MPI]''. 500 pp. Morgan Kaufmann {{ISBN|1-55860-339-5}}. [http://www.cs.usfca.edu/mpi/ Book website].
* ''MPI—The Complete Reference'' series:
** Snir, Marc; Otto, Steve W.; Huss-Lederman, Steven; Walker, David W.; Dongarra, Jack J. (1995) ''[http://www.netlib.org/utk/papers/mpi-book/mpi-book.html MPI: The Complete Reference]''. MIT Press Cambridge, MA, USA. {{ISBN|0-262-69215-5}}