Message Passing Interface: Difference between revisions

The parallel I/O feature is sometimes called MPI-IO,<ref name="Gropp99adv-pp5-6">{{harvnb |Gropp |Lusk |Skjelling |1999b |pp=5–6 }}</ref> and refers to a set of functions designed to abstract I/O management on distributed systems to MPI, and allow files to be easily accessed in a patterned way using the existing derived datatype functionality.
 
The little research that has been done on this feature suggests that obtaining high performance gains with MPI-IO is not trivial. For example, an implementation of sparse [[Matrix multiplication|matrix–vector multiplications]] using the MPI I/O library shows only minor performance gains overall, though these results are inconclusive.<ref>{{Cite web|url=http://marcovan.hulten.org/report.pdf|title=Sparse matrix-vector multiplications using the MPI I/O library}}</ref> MPI-IO did not reach widespread adoption until collective I/O<ref>{{cite web|title=Data Sieving and Collective I/O in ROMIO|url=http://www.mcs.anl.gov/~thakur/papers/romio-coll.pdf|publisher=IEEE|date=Feb 1999}}</ref> was implemented in it. Collective I/O substantially boosts applications' I/O bandwidth by having processes collectively transform many small, noncontiguous I/O operations into large, contiguous ones, thereby reducing [[Record locking|locking]] and disk-seek overhead. Owing to these performance benefits, MPI-IO also became the underlying I/O layer for many state-of-the-art I/O libraries, such as [[HDF5]] and [[NetCDF|Parallel NetCDF]]. Its popularity also triggered research on collective I/O optimizations, such as layout-aware I/O<ref>{{cite book|chapter=LACIO: A New Collective I/O Strategy for Parallel I/O Systems|publisher=IEEE|date=Sep 2011|doi=10.1109/IPDPS.2011.79|isbn=978-1-61284-372-8|citeseerx=10.1.1.699.8972|title=2011 IEEE International Parallel & Distributed Processing Symposium|last1=Chen|first1=Yong|last2=Sun|first2=Xian-He|last3=Thakur|first3=Rajeev|last4=Roth|first4=Philip C.|last5=Gropp|first5=William D.|pages=794–804|s2cid=7110094}}</ref> and cross-file aggregation.<ref>{{cite journal|author1=Teng Wang|author2=Kevin Vasko|author3=Zhuo Liu|author4=Hui Chen|author5=Weikuan Yu|title=Enhance parallel input/output with cross-bundle aggregation|journal=The International Journal of High Performance Computing Applications|volume=30|issue=2|pages=241–256|date=2016|doi=10.1177/1094342015618017|s2cid=12067366}}</ref><ref>{{cite book|chapter=BPAR: A Bundle-Based Parallel Aggregation Framework for Decoupled I/O Execution|publisher=IEEE|date=Nov 2014|doi=10.1109/DISCS.2014.6|isbn=978-1-4673-6750-9|title=2014 International Workshop on Data Intensive Scalable Computing Systems|last1=Wang|first1=Teng|last2=Vasko|first2=Kevin|last3=Liu|first3=Zhuo|last4=Chen|first4=Hui|last5=Yu|first5=Weikuan|pages=25–32|s2cid=2402391}}</ref>
 
===Common Language Infrastructure===
The two managed [[Common Language Infrastructure]] [[.NET Framework|.NET]] implementations are Pure Mpi.NET<ref>{{Cite web|url=http://www.purempi.net/|title=Pure Mpi.NET}}</ref> and MPI.NET,<ref>{{cite web|url=http://www.osl.iu.edu/research/mpi.net/|title=MPI.NET: High-Performance C# Library for Message Passing|website=www.osl.iu.edu}}</ref> a research effort at [[Indiana University]] licensed under a [[BSD]]-style license. It is compatible with [[Mono (software)|Mono]], and can make full use of underlying low-latency MPI network fabrics.
 
===Java===
 
===Python===
Actively maintained MPI wrappers for [[Python (programming language)|Python]] include: mpi4py,<ref>{{Cite web|url=https://mpi4py.readthedocs.io/en/stable/|title=MPI for Python — MPI for Python 4.1.0 documentation|website=mpi4py.readthedocs.io}}</ref> numba-mpi<ref>{{Cite web|url=https://pypi.org/project/numba-mpi/|title=numba-mpi|website=pypi.org}}</ref> and mpi4jax.<ref>{{Cite web|url=https://mpi4jax.readthedocs.io/en/latest/|title=mpi4jax — mpi4jax documentation|website=mpi4jax.readthedocs.io}}</ref>
 
Discontinued developments include: pyMPI, pypar,<ref>{{cite web|url=https://code.google.com/p/pypar/|title=Google Code Archive - Long-term storage for Google Code Project Hosting.|website=code.google.com}}</ref> MYMPI<ref>Now part of [https://sourceforge.net/projects/pydusa/ Pydusa]</ref> and the MPI submodule in [[ScientificPython]].
 
===R===
 
==Future==
Some aspects of MPI's future appear solid; others less so. The MPI Forum reconvened in 2007 to clarify some MPI-2 issues and explore developments for a possible MPI-3, which resulted in versions MPI-3.0 (September 2012)<ref>{{Cite web| title=MPI: A Message-Passing Interface Standard Version 3.0 - Message Passing Interface Forum | url=https://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf | archive-url=https://web.archive.org/web/20130319193248/http://www.mpi-forum.org:80/docs/mpi-3.0/mpi30-report.pdf | archive-date=2013-03-19}}</ref> and MPI-3.1 (June 2015).<ref>{{Cite web| title=MPI: A Message-Passing Interface Standard Version 3.1 - Message Passing Interface Forum | url=https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf | archive-url=https://web.archive.org/web/20150706095015/http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf | archive-date=2015-07-06}}</ref> The development continued with the approval of MPI-4.0 on June 9, 2021,<ref>{{Cite web| title=MPI: A Message-Passing Interface Standard Version 4.0 - Message Passing Interface Forum | url=https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf | archive-url=https://web.archive.org/web/20210628174829/https://www.mpi-forum.org/docs/mpi-4.0/mpi40-report.pdf | archive-date=2021-06-28}}</ref> and most recently, MPI-4.1 was approved on November 2, 2023.<ref>{{Cite web| title=MPI: A Message-Passing Interface Standard Version 4.1 - Message Passing Interface Forum | url=https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report.pdf | archive-url=https://web.archive.org/web/20231115151248/https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report.pdf | archive-date=2023-11-15}}</ref>
 
Architectures are changing, with greater internal concurrency ([[Multi-core processor|multi-core]]), better fine-grained concurrency control (threading, affinity), and more levels of [[memory hierarchy]]. [[Multithreading (computer architecture)|Multithreaded]] programs can take advantage of these developments more easily than single-threaded applications. This has already yielded separate, complementary standards for [[symmetric multiprocessing]], namely [[OpenMP]]. MPI-2 defines how standard-conforming implementations should deal with multithreaded issues, but does not require that implementations be multithreaded, or even thread-safe. MPI-3 adds the ability to use shared-memory parallelism within a node. Implementations of MPI such as Adaptive MPI, Hybrid MPI, Fine-Grained MPI, MPC and others offer extensions to the MPI standard that address different challenges in MPI.
|title=A High-Performance, Portable Implementation of the MPI Message Passing Interface
|journal=Parallel Computing |volume=22 |issue=6 |pages=789–828 |doi=10.1016/0167-8191(96)00024-5 }}
* Pacheco, Peter S. (1997) ''[https://books.google.com/books?&id=tCVkM1z2aOoC Parallel Programming with MPI]''. 500 pp. Morgan Kaufmann {{ISBN|1-55860-339-5}}.
* ''MPI—The Complete Reference'' series:
** Snir, Marc; Otto, Steve W.; Huss-Lederman, Steven; Walker, David W.; Dongarra, Jack J. (1995) ''[http://www.netlib.org/utk/papers/mpi-book/mpi-book.html MPI: The Complete Reference]''. MIT Press Cambridge, MA, USA. {{ISBN|0-262-69215-5}}