==Classification of parallel programming models==
Parallel programming models are classified broadly into two areas: process interaction and problem decomposition.<ref>John E. Savage, Models of Computation: Exploring the Power of Computing, 2008, Chapter 7 (Parallel Computation), http://cs.brown.edu/~jes/book/</ref><ref>Ian Foster, Designing and Building Parallel Programs, 1995, Section 1.3, "A Parallel Programming Model", http://www.mcs.anl.gov/~itf/dbpp/text/node9.html</ref><ref>Blaise Barney, Introduction to Parallel Computing, "Models", 2015, Lawrence Livermore National Laboratory, https://computing.llnl.gov/tutorials/parallel_comp/#Models</ref>
===Process interaction===
Process interaction refers to the mechanisms by which parallel processes communicate with one another. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).

====Shared memory====
{{main|Shared memory (interprocess communication)}}
Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to [[race condition|race conditions]], and mechanisms such as [[Lock (computer science)|locks]], [[Semaphore (programming)|semaphores]] and [[Monitor (synchronization)|monitors]] can be used to avoid these. Conventional [[multi-core processor|multi-core processors]] directly support shared memory, which many parallel programming languages and libraries, such as [[Cilk (programming language)|Cilk]], [[OpenMP]] and [[Threading Building Blocks]], are designed to exploit.
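As an illustrative sketch (not drawn from the sources cited above), the following C++ fragment uses OpenMP, one of the libraries named here, to let several threads update one variable in the shared address space; the <code>atomic</code> directive stands in for the synchronization mechanisms just described, and removing it would reintroduce a race condition.

<syntaxhighlight lang="cpp">
#include <cstdio>

int main() {
    long counter = 0;  // lives in the address space shared by all threads

    // Each thread increments the same counter. Without the atomic
    // directive the unsynchronized updates would race and the final
    // value would be unpredictable.
    #pragma omp parallel for
    for (long i = 0; i < 1000000; ++i) {
        #pragma omp atomic
        ++counter;
    }

    std::printf("counter = %ld\n", counter);  // 1000000 when synchronized
    return 0;
}
</syntaxhighlight>

Compiled with OpenMP support enabled (for example <code>g++ -fopenmp</code>), every thread operates on the same <code>counter</code> rather than on a private copy.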
====Message passing====
{{main|Message passing}}
In a message-passing model, parallel processes exchange data by passing messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The [[communicating sequential processes]] (CSP) formalisation of message passing uses synchronous communication channels to connect processes, and led to languages such as [[Occam (programming language)|Occam]], [[Limbo (programming language)|Limbo]] and [[Go (programming language)|Go]]. In contrast, the [[actor model]] uses asynchronous message passing and has been employed in the design of languages such as [[D (programming language)|D]], [[Scala (programming language)|Scala]] and SALSA.
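As a hedged illustration (the text above does not prescribe any particular interface), the following C++ sketch uses the [[Message Passing Interface]] (MPI) to move a single integer from one process to another; the two processes share no address space, so the data travels only inside the message.

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // which process am I?

    if (rank == 0) {
        int value = 42;
        // Process 0 sends the value to process 1.
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value = 0;
        // Process 1 blocks until the matching message arrives.
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Launched under an MPI runtime (for example <code>mpirun -np 2</code>), each process runs the same program and chooses its role from its rank.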
====Implicit interaction====
{{main|Implicit parallelism}}
In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. Two examples of implicit parallelism are [[___domain-specific language|___domain-specific languages]], where the concurrency within high-level operations is prescribed, and [[functional programming|functional programming languages]], because the absence of [[Side effect (computer science)|side effects]] allows non-dependent functions to be executed in parallel.<ref name="ParFuncProg">Hammond, Kevin. Parallel functional programming: An introduction. In International Symposium on Parallel Symbolic Computation, p. 46. 1994.</ref> However, this kind of parallelism is difficult to manage,<ref>McBurney, D. L., and M. Ronan Sleep. "Transputer-based experiments with the ZAPP architecture." PARLE Parallel Architectures and Languages Europe. Springer Berlin Heidelberg, 1987.</ref> and functional languages such as [[Concurrent Haskell]] and [[Concurrent ML]] provide features to manage parallelism explicitly.
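For illustration only, and staying with C++ rather than the functional languages named above, the following sketch uses the C++17 parallel algorithms as a rough analogue of the implicit model: the programmer states a single high-level operation (here, a sum of squares), and the implementation decides how, or whether, to divide the work among threads and how those threads interact.

<syntaxhighlight lang="cpp">
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1000000, 1.0);

    // One high-level operation; the partitioning of the range and any
    // inter-thread coordination are left entirely to the implementation.
    double sum = std::transform_reduce(std::execution::par,
                                       v.begin(), v.end(), 0.0,
                                       [](double a, double b) { return a + b; },  // reduce
                                       [](double x) { return x * x; });           // transform

    std::printf("sum of squares = %f\n", sum);
    return 0;
}
</syntaxhighlight>

Depending on the standard-library implementation, an additional threading back end (such as Threading Building Blocks) may need to be linked for the parallel execution policy to take effect.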
===Problem decomposition===