{{main|Message passing}}
In a message-passing model, parallel processes exchange data by passing messages to one another. These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The [[Communicating sequential processes]] (CSP) formalisation of message passing uses synchronous communication channels to connect processes, and led to influential languages such as [[Occam (programming language)|Occam]], [[Limbo (programming language)|Limbo]] and [[Go (programming language)|Go]]. In contrast, the [[actor model]] uses asynchronous message passing and has been employed in the design of languages such as [[D (programming language)|D]], [[Scala (programming language)|Scala]] and SALSA.
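The synchronous/asynchronous distinction can be sketched with Go channels, whose CSP heritage the paragraph notes: an unbuffered channel gives a synchronous rendezvous, while a buffered channel lets a send complete before any receiver is ready (a minimal illustration, not a full treatment of either model).

```go
package main

import "fmt"

// demo contrasts synchronous and asynchronous message passing with channels.
func demo() (string, string) {
	// Unbuffered channel: CSP-style synchronous communication.
	// The send blocks until a receiver is ready, so it must run in
	// another goroutine here.
	syncCh := make(chan string)
	go func() { syncCh <- "hello" }()
	fromSync := <-syncCh

	// Buffered channel: asynchronous up to its capacity. The send
	// completes even though no receiver is waiting yet.
	asyncCh := make(chan string, 1)
	asyncCh <- "world" // does not block
	fromAsync := <-asyncCh

	return fromSync, fromAsync
}

func main() {
	a, b := demo()
	fmt.Println(a, b)
}
```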
====Partitioned global address space====
{{main|Partitioned global address space}}
Partitioned Global Address Space (PGAS) models provide a middle ground between shared memory and message passing. PGAS provides a global memory address space abstraction that is logically partitioned, with a portion local to each process. Parallel processes communicate by asynchronously performing operations (e.g. reads and writes) on the global address space, in a manner reminiscent of shared memory models. However, by semantically partitioning the global address space into portions, each with affinity to a particular process, PGAS models allow programmers to exploit [[locality of reference]] and enable efficient implementation on [[distributed memory]] parallel computers. PGAS is offered by many parallel programming languages and libraries, such as [[Fortran 2008]], [[Chapel (programming language)|Chapel]], [http://upcxx.lbl.gov UPC++], and [[SHMEM]].
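The partitioning-with-affinity idea can be sketched in ordinary shared-memory Go (a toy model only, not a real PGAS runtime such as UPC++ or SHMEM): one global array is logically divided so that each "process", here a goroutine, owns one contiguous slice, and an ownership function distinguishes cheap local accesses from accesses that, on a distributed-memory machine, would trigger communication.

```go
package main

import (
	"fmt"
	"sync"
)

const (
	numProcs     = 4
	elemsPerProc = 8
)

// global plays the role of the logically partitioned address space.
var global [numProcs * elemsPerProc]int

// owner maps a global index to the process whose partition holds it.
// In a real PGAS system this affinity information is what lets the
// runtime decide between a local load and a remote operation.
func owner(i int) int { return i / elemsPerProc }

func fill() {
	var wg sync.WaitGroup
	for p := 0; p < numProcs; p++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			// Each process writes only its own partition: purely
			// local accesses, the cheap case PGAS encourages.
			for i := p * elemsPerProc; i < (p+1)*elemsPerProc; i++ {
				global[i] = p
			}
		}(p)
	}
	wg.Wait()
}

func main() {
	fill()
	// A "remote" read: index 20 lies in another process's partition,
	// which on distributed memory would require communication.
	fmt.Println("index 20 owned by process", owner(20), "value", global[20])
}
```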
====Implicit interaction====
| Data
| [[Cilk (programming language)|Cilk]], [[CUDA]], [[OpenMP]], [[Threading Building Blocks]], [[XMTC]]
|-
| [[SPMD]] [[Partitioned global address space|PGAS]]
| Partitioned global address space
| Data
| [[Fortran 2008]], [[Unified Parallel C]], [http://upcxx.lbl.gov UPC++], [[SHMEM]]
|-
| Global-view [[Task parallelism]]
| Partitioned global address space
| Task
| [[Chapel (programming language)|Chapel]], [[X10 (programming language)|X10]]
|}