{{Under construction|notready=true}}
A '''parallel programming model''' is a concept that enables the expression of parallel programs which can be compiled and executed. The value of a programming model is usually judged on its generality: how well a range of different problems can be expressed, and how well the resulting programs execute on a range of different architectures. A programming model can be implemented in several forms, such as libraries invoked from traditional sequential languages, language extensions, or completely new execution models.
Consensus on a particular programming model is important because it allows software written in that model to be ported between different architectures. The [[von Neumann model]] has facilitated this for sequential architectures by providing an efficient ''bridge'' between hardware and software: high-level languages can be efficiently compiled to it, and it can be efficiently implemented in hardware.<ref name="Valiant1990">Leslie G. Valiant, "A bridging model for parallel computation", ''Communications of the ACM'', volume 33, issue 8, August 1990, pages 103–111.</ref>
==Main classifications and paradigms==
== Example parallel programming models==
{{Create-list|date=February 2010}}
===Models===
* [[Algorithmic skeleton|Algorithmic Skeletons]]
* Components
* Remote Method Invocation
* Workflows
===Libraries===
* [[POSIX Threads]]
* [[Message Passing Interface|MPI]]
* [[Intel Threading Building Blocks|TBB]]
* [[Kernel for Adaptative, Asynchronous Parallel and Interactive programming|KAAPI]]
===Languages===
* [[Ada (programming language)|Ada]]
* [[Ateji PX]]
* [[Scala (programming language)|Scala]]
===Unsorted===
* [[OpenMP]]
* [[Global Arrays]]
* IBM’s [[X10 (programming language)|X10]]
==See also==
* [[Bridging model]]
* [[Concurrency (Computer science)|Concurrency]]
* [[Partitioned global address space]]
==References==
{{Reflist}}
==Further reading==
* H. Shan and J. Pal Singh. A comparison of MPI, SHMEM, and Cache-Coherent Shared Address Space Programming Models on a Tightly-Coupled Multiprocessor. International Journal of Parallel Programming, 29(3), 2001.
* H. Shan and J. Pal Singh. Comparison of Three Programming Models for Adaptive Applications on the Origin 2000. Journal of Parallel and Distributed Computing, 62:241–266, 2002.