In computer software, a '''parallel programming model''' is a model for writing [[parallel program]]s which can be compiled and executed. The value of a programming model can be judged on its generality (how well a range of different problems can be expressed for a variety of different architectures) and its performance (how efficiently the resulting programs execute). The implementation of a programming model can take several forms, such as libraries invoked from traditional [[sequential programming|sequential]] languages, language extensions, or complete new execution models.
Consensus around each programming model is important, as it enables software expressed within it to be portable between different architectures.
==Main classifications==
Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.
===Process interaction===
Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but it can also be implicit.
====Shared memory====
{{Main|Shared memory}}
Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. In this model, parallel tasks share a global address space which they read and write to asynchronously. This requires protection mechanisms such as locks, semaphores and monitors to control concurrent access. Shared memory can be emulated on physically distributed-memory systems, but [[non-uniform memory access]] (NUMA) times can come into play. Memory may also be shared between different sections of the same program; for example, a for loop can create a thread for each iteration, all of which update a shared variable in parallel.
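An illustrative sketch of the for-loop example above, using C++ threads and a mutex; the thread count and the <code>counter</code> variable are arbitrary choices for the example:
<syntaxhighlight lang="cpp">
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    int counter = 0;  // variable shared by all threads
    std::mutex m;     // lock protecting concurrent access

    std::vector<std::thread> threads;
    for (int i = 0; i < 8; ++i) {
        // one thread per loop iteration, all updating the same variable
        threads.emplace_back([&counter, &m] {
            std::lock_guard<std::mutex> guard(m);  // acquire the lock
            ++counter;                             // safe shared update
        });
    }
    for (auto& t : threads) t.join();
    std::cout << counter << '\n';  // prints 8
}
</syntaxhighlight>
Without the mutex, the concurrent increments would race; the lock serialises access to the shared variable, which is exactly the protection mechanism the model requires.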
====Message passing====
In a message-passing model, parallel processes exchange data by sending messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready.
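C++ has no built-in message-passing facility, so the following sketch hand-rolls a small buffered channel (the <code>Channel</code> class and its <code>send</code>/<code>receive</code> operations are invented for illustration) to show asynchronous message passing between two threads:
<syntaxhighlight lang="cpp">
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A minimal channel: senders enqueue, receivers block until data arrives.
template <typename T>
class Channel {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(T msg) {
        { std::lock_guard<std::mutex> lock(m); q.push(std::move(msg)); }
        cv.notify_one();  // wake a waiting receiver
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });  // block until a message exists
        T msg = std::move(q.front());
        q.pop();
        return msg;
    }
};

int main() {
    Channel<int> ch;
    std::thread producer([&ch] { for (int i = 0; i < 3; ++i) ch.send(i); });
    std::thread consumer([&ch] { for (int i = 0; i < 3; ++i) std::cout << ch.receive() << '\n'; });
    producer.join();
    consumer.join();
}
</syntaxhighlight>
Because the queue buffers messages, the sender never waits for the receiver; a synchronous (rendezvous-style) channel would instead block <code>send</code> until a matching <code>receive</code> occurs.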
====Implicit====
{{Main|Implicit parallelism}}
In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. This is most common with [[domain-specific language]]s, where the concurrency within a problem can be more prescribed.
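One illustration is the C++17 parallel algorithms library, where a single execution-policy argument delegates the parallelism to the runtime (compiler and standard-library support for <code>std::execution::par</code> is assumed; on GCC this typically requires linking against TBB):
<syntaxhighlight lang="cpp">
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.0);
    // The execution policy asks the runtime to parallelise the reduction;
    // no threads, locks, or messages appear in the source code.
    double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
    std::cout << sum << '\n';  // prints 1e+06
}
</syntaxhighlight>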
===Problem decomposition===
{{Flynn's Taxonomy}}
A parallel program is composed of simultaneously executing processes. Problem decomposition relates to the way in which these processes are formulated. This classification may also be referred to as [[algorithmic skeleton]]s or parallel programming paradigms.
====Task parallelism====
{{Main|Task parallelism}}
A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as [[MIMD]]/[[Flynn's taxonomy#MPMD|MPMD]] or [[MISD]].
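A sketch of task parallelism in C++, in which two behaviourally distinct tasks (a numeric sum and a string concatenation, chosen arbitrarily for the example) run concurrently via <code>std::async</code>:
<syntaxhighlight lang="cpp">
#include <future>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

int main() {
    std::vector<int> data{3, 1, 4, 1, 5, 9, 2, 6};

    // Two behaviourally distinct tasks run concurrently on the same input.
    auto sum = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), data.end(), 0);
    });
    auto text = std::async(std::launch::async, [&] {
        std::string s;
        for (int x : data) s += std::to_string(x);
        return s;
    });

    std::cout << "sum = "  << sum.get()  << '\n';  // 31
    std::cout << "text = " << text.get() << '\n';  // 31415926
}
</syntaxhighlight>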
====Data parallelism====
{{Main|Data parallelism}}
A data-parallel model focuses on performing operations on a data set which is usually regularly structured in an array. A set of tasks will operate on this data, but independently on separate partitions. In a shared memory system, the data will be accessible to all, but in a distributed-memory system it will be divided between memories and worked on locally. Data parallelism is usually classified as [[MIMD]]/[[SPMD]] or [[SIMD]].
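A sketch of data parallelism in C++ on a shared-memory system: the same summation is applied to every partition of an array, with each thread working independently on its own slice (the partition count of four is arbitrary):
<syntaxhighlight lang="cpp">
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 1);  // fill with 1..1000

    const int nthreads = 4;
    std::vector<long long> partial(nthreads, 0);
    std::vector<std::thread> threads;

    // Same operation on every partition; each thread owns one slice.
    for (int t = 0; t < nthreads; ++t) {
        threads.emplace_back([&, t] {
            size_t begin = data.size() * t / nthreads;
            size_t end   = data.size() * (t + 1) / nthreads;
            for (size_t i = begin; i < end; ++i) partial[t] += data[i];
        });
    }
    for (auto& th : threads) th.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << total << '\n';  // prints 500500
}
</syntaxhighlight>
Each thread writes only to its own entry of <code>partial</code>, so no locking is needed; the per-partition results are combined sequentially at the end.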
====Categories of parallel programming systems====
Such systems can be divided into two categories.{{Citation needed|date=April 2014}} Systems in the first category are characterized by the isolation of the abstract design space seen by the programmer from the parallel, distributed implementation. In this category, all processes are presented with equal access to some kind of shared memory space; in its loosest form, any process may attempt to access any item at any time.
The second category considers machines in which the two levels are closer together, and in particular those in which the programmer's world includes explicit parallelism. This category discards shared-memory-based cooperation in favour of some form of explicit message passing.
==Example parallel programming models==
* [[Algorithmic skeleton|Algorithmic Skeletons]]
* Components
==References==
{{Reflist}}
==Further reading==
* H. Shan and J. Pal Singh. Comparison of Three Programming Models for Adaptive Applications on the Origin 2000. Journal of Parallel and Distributed Computing, 62:241–266, 2002.
* About [[structured parallel programming]]: Davide Pasetto and [[Marco Vanneschi]]. ''[http://portal.acm.org/citation.cfm?id=898142&coll=&dl=GUIDE&CFID=15151515&CFTOKEN=6184618 Machine independent Analytical models for cost evaluation of template-based programs]'', [[University of Pisa]], 1996
* Murray I. Cole. ''Algorithmic Skeletons: Structured Management of Parallel Computation''