A parallel programming model is a concept that bridges the gap between hardware and programming languages, allowing parallel algorithms to be expressed and applications to be matched with the underlying parallel systems. A programming model can be implemented in several forms, such as libraries invoked from traditional sequential languages, language extensions, or completely new execution models.
The value of a programming model is judged by how simply a range of problems can be expressed in it and how efficiently they execute on a range of architectures. Consensus on a programming model is important because it makes software expressed within it portable between different architectures. The von Neumann model has served this role for sequential architectures.
Main classifications and paradigms
Process interaction
Shared memory
In a shared memory model, parallel tasks share a global address space which they read and write to asynchronously. This requires protection mechanisms such as locks and semaphores to control concurrent access. Shared memory can be emulated on distributed-memory systems, but non-uniform memory access (NUMA) times can come into play.
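A minimal sketch of the shared memory model, using Python's standard `threading` module (the names `worker` and `counter` are illustrative): several threads read and write one shared variable, and a lock protects it from concurrent access.

```python
import threading

counter = 0                    # shared state in a global address space
lock = threading.Lock()        # protection mechanism for concurrent access

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:             # without the lock, increments could be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                 # 4 threads x 1000 increments = 4000
```

The read-modify-write of `counter` is the critical section; removing the lock makes the final value nondeterministic on a true parallel runtime.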
Message passing
In a message passing model, parallel tasks exchange data by passing messages to one another. These communications can be asynchronous or synchronous. The Communicating Sequential Processes (CSP) formalisation of message passing employed communication channels to 'connect' processes, and led to a number of important languages such as Joyce, occam and Erlang.
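A small sketch of message passing in Python, using a `queue.Queue` to play the role of a CSP-style channel between a producer and a consumer thread (the sentinel `None` marking end-of-stream is an illustrative convention, not part of any standard):

```python
import threading
import queue

channel = queue.Queue()        # acts as a channel connecting two processes

def producer():
    for i in range(5):
        channel.put(i)         # send a message
    channel.put(None)          # sentinel: no more messages

def consumer(results):
    while True:
        msg = channel.get()    # receive; blocks until a message arrives
        if msg is None:
            break
        results.append(msg * 2)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)                 # [0, 2, 4, 6, 8]
```

Note that the two tasks share no variables directly; all data flows through the channel, which is the defining property of the model.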
Implicit
In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. This is most common with ___domain-specific languages, where the concurrency inherent in a problem can be more readily determined.
Problem decomposition
Task parallelism
A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
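A sketch of task parallelism with Python's `concurrent.futures` (the two task functions are illustrative): two behaviourally distinct tasks run concurrently on different data, in the MPMD style described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Two distinct tasks: different code, different data (MPMD style).
def parse(text):
    return text.split()

def checksum(data):
    return sum(data) % 256

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(parse, "a b c")       # task 1
    f2 = pool.submit(checksum, [10, 20, 30])  # task 2
    words, digest = f1.result(), f2.result()

print(words, digest)           # ['a', 'b', 'c'] 60
```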
Data parallelism
A data-parallel model focuses on performing operations on a data set, which is usually regularly structured in an array. A set of tasks will operate on this data, but independently on separate partitions. In a shared memory system, the data will be accessible to all, but in a distributed-memory system it will be divided between memories and worked on locally. Data parallelism is usually classified as SIMD/SPMD.
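A sketch of the data-parallel model in Python (the partitioning scheme and `partial_sum` function are illustrative): the same operation is applied independently to disjoint partitions of one array, in the SPMD style, and the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # The same program runs on every partition (SPMD).
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]    # four disjoint partitions

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)                   # equals sum(x*x for x in range(1000))
```

Because the partitions are disjoint, the workers need no synchronisation beyond the final reduction, which is what makes the model easy to scale.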
Implicit
As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer, as the compiler and/or runtime is responsible for it.
Example parallel programming models
Models
- Algorithmic Skeletons
- Components
- Distributed Objects
- Remote Method Invocation
- Workflows
Libraries
Languages
- Ada
- Ateji PX
- C*
- Cilk
- Charm++
- Partitioned global address space languages:
- Co-array Fortran,
- Unified Parallel C,
- Titanium
- High Performance Fortran
- Haskell
- Occam
- Event-driven programming & Hardware Description Languages:
- Ease
- Erlang
- Linda coordination language
- Oz
- CUDA
- OpenCL
- Jacket
- NESL
- Scala
Unsorted
- OpenMP
- Global Arrays
- Intel Ct
- Pervasive DataRush
- ProActive
- Parallel Random Access Machine
- Stream processing
- Structural Object Programming Model (SOPM)
- Pipelining
- ZPL
See also
References
- H. Shan and J. Pal Singh. A comparison of MPI, SHMEM, and Cache-Coherent Shared Address Space Programming Models on a Tightly-Coupled Multiprocessor. International Journal of Parallel Programming, 29(3), 2001.
- H. Shan and J. Pal Singh. Comparison of Three Programming Models for Adaptive Applications on the Origin 2000. Journal of Parallel and Distributed Computing, 62:241–266, 2002.
- About structured parallel programming: Davide Pasetto and Marco Vanneschi. Machine independent Analytical models for cost evaluation of template-based programs, University of Pisa, 1996
External links
- Developing Parallel Programs — A Discussion of Popular Models (Oracle White Paper September 2010)
- Designing and Building Parallel Programs (Section 1.3, 'A Parallel Programming Model')
- Introduction to Parallel Computing (Section 'Parallel Programming Models')