The MPI interface is meant to provide essential virtual topology, [[synchronization]], and communication functionality between a set of processes (that have been mapped to nodes/servers/computer instances) in a language-independent way, with language-specific syntax (bindings), plus a few language-specific features. MPI programs always work with processes, but programmers commonly refer to the processes as processors. Typically, for maximum performance, each [[CPU]] (or [[multi-core (computing)|core]] in a multi-core machine) will be assigned just a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called mpirun or mpiexec.
MPI library functions include, but are not limited to, point-to-point rendezvous-type send/receive operations; choosing between a [[Cartesian coordinate system|Cartesian]] or [[Graph (data structure)|graph]]-like logical process topology; exchanging data between process pairs (send/receive operations); combining partial results of computations (gather and reduce operations); and synchronizing nodes (barrier operation). They also provide network-related information, such as the number of processes in the computing session, the identity of the processor a process is mapped to, and the processes reachable as neighbors in a logical topology. Point-to-point operations come in [[synchronization (computer science)|synchronous]], [[asynchronous i/o|asynchronous]], buffered, and ''ready'' forms, allowing both relatively stronger and weaker [[semantics]] for the synchronization aspects of a rendezvous send.
MPI-1 and MPI-2 both permit implementations that overlap communication and computation, but the degree of overlap actually achieved varies between implementations. MPI also specifies ''[[thread safe]]'' interfaces, which have [[cohesion (computer science)|cohesion]] and [[coupling (computer science)|coupling]] strategies that help avoid hidden state within the interface. It is relatively easy to write multithreaded point-to-point MPI code, and some implementations support such code. [[Multithreading (computer architecture)|Multithreaded]] collective communication is best accomplished with multiple copies of communicators, as described below.