Message Passing Interface
{{expand}}
The '''Message Passing Interface''' ('''MPI''') is a [[computer]] communications [[protocol (computing)|protocol]]. It is a ''[[de facto]]'' [[standardization|standard]] for [[communication]] among the nodes running a [[parallel programming|parallel program]] on a [[Distributed memory|distributed memory system]]. MPI implementations consist of a library of routines that can be called from [[Fortran]], [[C programming language|C]], [[C++]] and [[Ada programming language|Ada]] programs. The advantage of MPI over older message-passing libraries is that it is both portable (because MPI has been implemented for almost every distributed-memory architecture) and fast (because each [[implementation]] is optimized for the [[hardware]] on which it runs). MPI is often compared with [[Parallel Virtual Machine|PVM]], and at one stage the two projects were merged to form PVMMPI.
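MPI programs are written in a style where each process ("rank") exchanges explicit messages rather than sharing memory. As a rough sketch of that programming model only — using no real MPI library, with two threads standing in for two ranks and a queue standing in for the network — the send/receive pattern looks like this:

```python
# Illustrative sketch of the message-passing style that MPI standardizes.
# Real MPI ranks are separate OS processes, often on separate nodes with no
# shared memory; here two threads play the roles of ranks 0 and 1, and a
# Queue plays the role of the interconnect, purely to show the pattern.
import queue
import threading

def rank0(channel):
    # Rank 0 sends a message (analogous to MPI's blocking send).
    channel.put("hello from rank 0")

def rank1(channel, received):
    # Rank 1 blocks until a message arrives (analogous to a blocking receive).
    received.append(channel.get())

channel = queue.Queue()
received = []
t0 = threading.Thread(target=rank0, args=(channel,))
t1 = threading.Thread(target=rank1, args=(channel, received))
t1.start()   # receiver starts first and blocks on the empty channel
t0.start()
t0.join()
t1.join()
print(received[0])
```

A real MPI program expresses the same pattern through library calls such as a blocking send on one rank matched by a blocking receive on another, with the MPI implementation handling delivery across the network.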
 
==Implementations==
 
===[[Python (programming language)|Python]]===
There are at least three known attempts to implement MPI for Python: [http://datamining.anu.edu.au/~ole/pypar/ PyPar], [http://pympi.sourceforge.net/ PyMPI], and [http://starship.python.net/~hinsen/ScientificPython/ the MPI submodule of ScientificPython]. PyPar (and possibly ScientificPython's submodule as well) is designed to work like an ordinary module, loaded with nothing more than an import statement, and covers a subset of the MPI specification. PyMPI, by contrast, is a ''variant Python interpreter'' that implements more of the specification and works automatically with compiled code that needs to make MPI calls. [https://geodoc.uchicago.edu/climatewiki/DiscussPythonMPI Source]
 
===[[OCaml]]===
The [http://cristal.inria.fr/~xleroy/software.html#ocamlmpi OCamlMPI module] implements a large subset of MPI functions and is in active use in scientific computing. As one indication of its maturity, [http://caml.inria.fr/pub/ml-archives/caml-list/2003/07/155910c4eeb09e684f02ea4ae342873b.en.html it was reported on caml-list] that an eleven-thousand-line OCaml program was "MPI-ified" using the module, requiring only about 500 additional lines of code and slight restructuring, and that it ran with excellent results on up to 170 nodes of a supercomputer.
 
==Example program==