Parallel programming model

{{Under construction|notready=true}}
{{Prose|date=October 2008}}
A '''parallel programming model''' is a concept that bridges the gap between hardware and programming languages, providing an abstraction in which to express [[parallel algorithm]]s and match applications with the underlying parallel systems. It encloses the areas of [[Application software|applications]], [[programming language]]s, [[compiler]]s, [[library (computer science)|libraries]], [[communications system]]s, and [[parallel I/O]]. Because [[automatic parallelization]] remains difficult, programmers must choose a suitable model, or a mixture of models, to develop parallel applications for a particular platform.

The implementation of a programming model can take several forms: libraries invoked from traditional sequential languages, language extensions, or complete new execution models. Models are also roughly categorised by the two kinds of systems they target: [[Shared memory|shared-memory]] and [[Distributed memory|distributed-memory]] systems, though the lines between them are increasingly blurred.
 
Consensus on a programming model is important as it enables software expressed within it to be transportable between different architectures. The worth of a programming model is judged by how simply it is able to express a range of problems.
 
==Main classifications and paradigms==
 
===Process interaction===
 
====Shared memory====
{{Main|Shared memory}}
 
In a shared memory model, parallel tasks share a global address space which they read and write to asynchronously. This requires protection mechanisms such as locks and semaphores to control concurrent access. Shared memory can be emulated on distributed-memory systems, but non-uniform memory access (NUMA) times can come into play.
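As a minimal illustrative sketch (the names here are hypothetical, and Python's standard-library threads are just one of many shared-memory APIs), a lock protecting a shared counter might look like:

```python
import threading

counter = 0              # shared state in the global address space
lock = threading.Lock()  # protection mechanism for concurrent access

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # only one task may update the counter at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now 40000; without the lock, interleaved updates could lose increments
```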
 
====Message passing====
{{Main|Message passing}}
 
In a message passing model, parallel tasks exchange data through passing messages to one another. These communications can be asynchronous or synchronous. The Communicating Sequential Processes (CSP) formalisation of
message-passing employed communication channels to 'connect' processes, and led to a number of important languages such as Joyce, occam and Erlang.
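A minimal sketch of the message-passing style, using a thread-safe queue as the channel between two tasks (the names are illustrative; real message-passing systems such as MPI or Erlang processes differ in detail):

```python
import threading
import queue

channel = queue.Queue()  # communication channel between two tasks
results = []

def producer():
    for i in range(5):
        channel.put(i)   # send a message (asynchronous: put returns immediately)
    channel.put(None)    # sentinel marking the end of the stream

def consumer():
    while True:
        msg = channel.get()  # receive a message (blocks until one arrives)
        if msg is None:
            break
        results.append(msg * msg)

sender = threading.Thread(target=producer)
receiver = threading.Thread(target=consumer)
sender.start(); receiver.start()
sender.join(); receiver.join()
# results == [0, 1, 4, 9, 16]
```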
 
====Implicit====
{{Main|Implicit parallelism}}
 
In an implicit model, no process interaction is visible to the programmer, instead the compiler and/or runtime is responsible for performing it. This is most common with ___domain-specific languages where the concurrency within a problem can be more prescribed.
 
===Problem decomposition===
 
====Task parallelism====
{{Main|Task parallelism}}
 
A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. It is usually classified as MIMD/MPMD or MISD.
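A small sketch of the task-parallel style in Python, running two behaviourally distinct tasks concurrently over the same input (the function names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Two behaviourally distinct tasks applied to the same data
def summarise(values):
    return sum(values)

def extremes(values):
    return min(values), max(values)

data = [3, 1, 4, 1, 5, 9]
with ThreadPoolExecutor() as pool:
    total = pool.submit(summarise, data)   # task 1
    bounds = pool.submit(extremes, data)   # task 2

total_result = total.result()    # 23
bounds_result = bounds.result()  # (1, 9)
```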
 
====Data parallelism====
{{Main|Data parallelism}}
 
A data-parallel model focuses on performing operations on a data set, which is usually regularly structured in an array. A set of tasks operates on this data, but independently on separate partitions. In a shared-memory system the data is accessible to all tasks, whereas in a distributed-memory system it is divided between memories and worked on locally. Data parallelism is usually classified as SIMD/SPMD.
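A minimal data-parallel sketch: the same operation is applied independently to separate partitions of an array (the names are illustrative, and a thread pool stands in for whatever workers the system provides):

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    # the same operation, applied independently to one partition
    return [x * x for x in chunk]

data = list(range(8))
chunks = [data[:4], data[4:]]  # partition the array between two workers

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(square_chunk, chunks))

result = [y for part in partials for y in part]
# result == [0, 1, 4, 9, 16, 25, 36, 49]
```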
 
====Implicit====
{{Main|Implicit parallelism}}
 
As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer: the compiler and/or the runtime is responsible for the decomposition.
 
==Example parallel programming models==
 
==See also==
* [[Automatic parallelization]]
* [[Bridging model]]
* [[Concurrency (Computer science)|Concurrency]]
* [[Degree of parallelism]]
* [[Partitioned global address space]]