MPI provides parallel hardware vendors with a clearly defined base set of routines that can be efficiently implemented. As a result, hardware vendors can build upon this collection of standard [[Low-level programming language|low-level]] routines to create [[High-level programming language|higher-level]] routines for the distributed-memory communication environment supplied with their [[parallel machine]]s. MPI provides a simple-to-use portable interface for the basic user, yet one powerful enough to allow programmers to use the high-performance message passing operations available on advanced machines.
In an effort to create a universal standard for message passing, researchers did not base it on any single system but incorporated the most useful features of several, including those designed by IBM, [[Intel]], [[nCUBE]], [[Parallel Virtual Machine|PVM]], Express, P4 and PARMACS. The message-passing paradigm is attractive because of its wide portability: it can be used for communication on distributed-memory and shared-memory multiprocessors, on networks of workstations, and on combinations of these elements. The paradigm applies in many settings, independent of network speed or memory architecture.
Support for MPI meetings came in part from [[DARPA]] and from the U.S. [[National Science Foundation]] (NSF) under grant ASC-9310330, NSF Science and Technology Center Cooperative agreement number CCR-8809615, and from the [[European Commission]] through Esprit Project P6643. The [[University of Tennessee]] also made financial contributions to the MPI Forum.