Parallel programming model

A parallel programming model is a concept that bridges the gap between hardware and programming languages, allowing parallel algorithms to be expressed and applications to be matched with the underlying parallel systems. A programming model can be implemented in several forms, such as libraries invoked from traditional sequential languages, language extensions, or completely new execution models.

The value of a programming model is judged by how simply a range of problems can be expressed in it and how efficiently they execute on a range of architectures. Consensus on a programming model is important because it allows software written in it to be ported between different architectures. The von Neumann model has served this role for sequential architectures.

Main classifications and paradigms

Process interaction

Shared memory

In a shared memory model, parallel tasks share a global address space which they read and write to asynchronously. This requires protection mechanisms such as locks and semaphores to control concurrent access. Shared memory can be emulated on distributed-memory systems, but non-uniform memory access (NUMA) times can come into play.
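
As an illustration only (not part of the original article text), the following Go sketch shows several tasks updating a variable in a shared address space, with a mutex acting as the lock that controls concurrent access.

package main

import (
    "fmt"
    "sync"
)

func main() {
    var counter int    // shared state in a single address space
    var mu sync.Mutex  // lock protecting concurrent access
    var wg sync.WaitGroup

    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                mu.Lock() // serialise writes to the shared variable
                counter++
                mu.Unlock()
            }
        }()
    }
    wg.Wait()
    fmt.Println(counter) // always 4000, because access is protected
}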

Message passing

In a message passing model, parallel tasks exchange data by passing messages to one another. These communications can be asynchronous or synchronous. The Communicating Sequential Processes (CSP) formalisation of message passing employed communication channels to 'connect' processes, and led to a number of important languages such as Joyce, occam and Erlang.
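
Go's channels follow the CSP style of connecting processes; the sketch below is an illustrative assumption rather than material from the article, showing two tasks exchanging data by message passing instead of shared state.

package main

import "fmt"

// producer sends values over a channel instead of writing to shared memory.
func producer(out chan<- int) {
    for i := 0; i < 5; i++ {
        out <- i // send blocks until the receiver is ready (synchronous)
    }
    close(out)
}

func main() {
    ch := make(chan int) // unbuffered: communication is synchronous
    go producer(ch)
    for v := range ch { // receive messages until the channel is closed
        fmt.Println(v)
    }
}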

Implicit

In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. This is most common with ___domain-specific languages, where the concurrency within a problem can be more precisely prescribed.

Problem decomposition

Task parallelism

A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
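
As a hedged sketch (example code assumed, not taken from the article), the Go fragment below runs two behaviourally distinct tasks concurrently, each reporting its result over a channel.

package main

import (
    "fmt"
    "strings"
)

func main() {
    sums := make(chan int)
    upper := make(chan string)

    // Two behaviourally distinct tasks: one sums numbers, one transforms text.
    go func() {
        total := 0
        for i := 1; i <= 100; i++ {
            total += i
        }
        sums <- total
    }()
    go func() {
        upper <- strings.ToUpper("task parallelism")
    }()

    fmt.Println(<-sums, <-upper) // 5050 TASK PARALLELISM
}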

Data parallelism

A data-parallel model focuses on performing operations on a data set, which is usually regularly structured in an array. A set of tasks will operate on this data, but independently on separate partitions. In a shared memory system, the data will be accessible to all, but in a distributed-memory system it will be divided between memories and worked on locally. Data parallelism is usually classified as SIMD/SPMD.
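
The following Go sketch is illustrative only (the data and the number of partitions are assumptions): the same operation is applied independently to separate partitions of an array.

package main

import (
    "fmt"
    "sync"
)

func main() {
    data := make([]int, 8)
    for i := range data {
        data[i] = i
    }

    const parts = 4
    chunk := len(data) / parts
    var wg sync.WaitGroup

    // Each task applies the same operation to its own partition of the array.
    for p := 0; p < parts; p++ {
        wg.Add(1)
        go func(lo, hi int) {
            defer wg.Done()
            for i := lo; i < hi; i++ {
                data[i] *= data[i] // square each element in place
            }
        }(p*chunk, (p+1)*chunk)
    }
    wg.Wait()
    fmt.Println(data) // [0 1 4 9 16 25 36 49]
}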

Implicit

As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer, as the compiler and/or the runtime is responsible for it.

Example parallel programming models

Models

Libraries

Languages

Unsorted

Other research-level models are:

See also
