Explicit parallelism

{{short description |Parallelism expressed within computations}}
{{Use American English |date=February 2024}}
{{Use mdy dates |date=February 2024}}
{{More citations needed |date=February 2024}}

In [[computer programming]], '''explicit parallelism''' is the representation of concurrent computations using primitives in the form of operators, function calls, or special-purpose directives.<ref name="pra11" /> Most parallel primitives are related to process synchronization, communication, or process partitioning.<ref name="dij68" /> As these primitives rarely contribute to carrying out the intended computation of the program, but rather structure it, their computational cost is often regarded as [[parallelization overhead]].
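
For illustration, the following minimal sketch in Java (the class and variable names are illustrative and not drawn from the cited sources) spells out all three kinds of primitive: a thread is the explicit unit of partitioning, a blocking queue carries communication, and <code>join</code> provides synchronization.

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: the programmer writes the parallel structure
// (partitioning, communication, synchronization) as explicit primitives.
public class ExplicitPrimitives {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);

        // Partitioning: the producer is explicitly placed in its own thread.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                try {
                    channel.put(i * i);   // Communication: send a value.
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        for (int i = 0; i < 10; i++) {
            System.out.println(channel.take());  // Communication: receive.
        }
        producer.join();  // Synchronization: wait for the producer to finish.
    }
}
</syntaxhighlight>

None of these calls compute the squares themselves; they only organize the computation, which is why their cost counts as overhead.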
 
The advantage of explicit [[parallel programming]] is increased programmer control over the computation: a skilled parallel programmer can exploit it to produce efficient code for a given target environment. However, programming with explicit parallelism is often difficult, especially for non-specialists, because of the extra work and skill involved in planning the task division and synchronization of concurrent processes.
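
The following sketch in Java illustrates that extra work (the partitioning scheme shown is one simple choice among many, not a prescribed method): the programmer explicitly divides an index range among worker threads and joins them before combining the partial results.

<syntaxhighlight lang="java">
// Illustrative sketch of explicit task division: each thread sums an
// explicit slice of the array, and the main thread synchronizes (join)
// before combining the per-thread partial results.
public class ExplicitSum {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);

        int nThreads = Runtime.getRuntime().availableProcessors();
        long[] partial = new long[nThreads];
        Thread[] workers = new Thread[nThreads];

        int chunk = (data.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            final int lo = t * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            // Task division: this thread owns the slice [lo, hi).
            workers[t] = new Thread(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                partial[id] = s;  // Each thread writes only its own slot.
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();     // Synchronization before reading results.
            total += partial[t];
        }
        System.out.println(total); // Prints 1000000.
    }
}
</syntaxhighlight>

Getting the slice boundaries, the joins, and the result combination right is entirely the programmer's responsibility; a mistake in any of them produces a wrong answer or a race condition.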
 
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known as [[implicit parallelism]].
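
As a rough contrast, the summation above can be written so that the runtime, rather than the programmer, handles the partitioning and synchronization. The example is only an approximation of implicit parallelism, since Java's parallel streams still require the programmer to request parallel execution:

<syntaxhighlight lang="java">
import java.util.stream.IntStream;

// Illustrative contrast: no threads, slices, or joins appear in the code;
// the stream runtime partitions the range and combines the partial sums.
public class RuntimeManagedSum {
    public static void main(String[] args) {
        long total = IntStream.range(0, 1_000_000)
                              .parallel()         // Request parallelism.
                              .mapToLong(i -> 1)
                              .sum();
        System.out.println(total);                // Prints 1000000.
    }
}
</syntaxhighlight>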
 
==Programming languages that support explicit parallelism==
Some of the programming languages that support explicit parallelism are:
*[[Ada (programming language)|Ada]]
*[[Ease programming language|Ease]]
*[[Erlang (programming language)|Erlang]]
*[[Java (programming language)|Java]]
*[[JavaSpaces]]
*[[Message Passing Interface]]
*[[Occam (programming language)|Occam]]
*[[Parallel Virtual Machine]]
 
== References ==
{{reflist |refs=
<ref name="dij68">{{cite journal |title=The structure of the "THE"-multiprogramming system|date=1968-05-01 |first=Edsger W. |last=Dijkstra |author-link=Edsger W. Dijkstra |journal=Communications of the ACM |volume=11 |issue=5 |pages=341–346 |doi=10.1145/363095.363143 }}</ref>
 
<ref name="pra11">{{cite conference |title=Parallel programming: design of an overview class |date=June 2011 |first=Christoph |last=von Praun |conference=Proceedings of the 2011 ACM SIGPLAN X10 Workshop |number=2 |pages=1–6 |doi=10.1145/2212736.2212738 }}</ref>
}}
 
{{Parallel Computing}}
{{DEFAULTSORT:Explicit Parallelism}}
[[Category:Parallel computing]]
 
{{compu-sci-stub}}