In computer programming, '''explicit parallelism''' is the representation
of concurrent computation in program code by means of primitives
in the form of special-purpose directives or function calls. Most parallel primitives are related to process synchronization, communication, or task partitioning. As they seldom contribute to carrying out the
intended computation of the program, their computational cost is often counted
as [[parallelization overhead]].
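As a minimal sketch of such primitives (assuming Python's standard <code>threading</code> module), the lock below is an explicit synchronization primitive: it does no useful computation itself, and the time spent acquiring and releasing it is parallelization overhead.

```python
import threading

counter = 0
lock = threading.Lock()  # explicit synchronization primitive

def worker(n):
    global counter
    for _ in range(n):
        # explicit critical section: serializes access to the shared counter
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # explicit synchronization: wait for all workers to finish

print(counter)  # 40000
```

Without the lock, the increments from different threads could interleave and the final count would be unpredictable; with it, the result is deterministic at the cost of serialized access.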
With explicit parallelism, the programmer retains full
control over the parallel execution of the code. A skilled
parallel programmer takes advantage of explicit parallelism to produce
very efficient code. However, programming with explicit parallelism is often difficult, especially for
non-computing professionals, because of the extra work involved in planning
the task division and synchronization.
In some instances, explicit parallelism may be avoided by using an optimizing compiler that automatically extracts the parallelism inherent in a computation (see [[implicit parallelism]]).
== Programming with explicit parallelism ==
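The following sketch (assuming Python's standard <code>concurrent.futures</code> module) illustrates explicit task partitioning: the programmer, not the compiler, decides how the work is divided among workers and how the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # each task computes its partial result independently
    return sum(chunk)

data = list(range(100_000))
n_tasks = 4
# explicit task partitioning: the programmer decides how the data is split
chunks = [data[i::n_tasks] for i in range(n_tasks)]

with ThreadPoolExecutor(max_workers=n_tasks) as executor:
    # distributing chunks and gathering results are explicit operations
    partials = list(executor.map(partial_sum, chunks))

total = sum(partials)
print(total)  # 4999950000
```

Note that the partitioning, the worker pool size, and the final reduction all appear explicitly in the program text; in an implicitly parallel system these decisions would be left to the compiler or runtime.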