{{short description |Parallelism expressed within computations}}
{{Use American English |date=February 2024}}
{{Use mdy dates |date=February 2024}}
{{More citations needed |date=February 2024}}
In [[computer programming]], '''explicit parallelism''' is the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives.<ref name="pra11" /> Most parallel primitives are related to process synchronization, communication and process partitioning.<ref name="dij68" /> As these primitives rarely contribute to carrying out the intended computation of the program, but rather structure it, their computational cost is often considered overhead.
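The primitives described above can be illustrated with a minimal sketch in Java (one of the languages listed below). The example is not from the article; the class and method names are illustrative. It sums an array by explicitly partitioning the work into slices (process partitioning) and waiting on each worker with <code>Thread.join()</code> (synchronization):

```java
// Illustrative sketch: explicit parallelism with Java threads.
// Partitioning and synchronization are spelled out by the programmer.
public class ExplicitSum {
    public static long parallelSum(long[] data, int nThreads) throws InterruptedException {
        long[] partial = new long[nThreads];
        Thread[] workers = new Thread[nThreads];
        int chunk = (data.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            final int lo = t * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            // Process partitioning: each thread is handed an explicit slice.
            workers[t] = new Thread(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                partial[id] = s; // slices do not overlap, so no locking is needed
            });
            workers[t].start();
        }
        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join(); // explicit synchronization primitive
            total += partial[t];
        }
        return total;
    }
}
```

Note that the thread creation, slicing arithmetic and <code>join()</code> calls do no summing themselves; they only structure the computation, which is why such primitives are accounted as overhead.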
 
The advantage of explicit [[parallel programming]] is increased programmer control over the computation. A skilled parallel programmer can exploit explicit parallelism to produce efficient code for a given target computing environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill it requires.
 
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known as [[implicit parallelism]].
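For contrast, a hedged sketch of the implicit style in the same language (again illustrative, not from the article): with Java's parallel streams the programmer only requests parallel execution, while partitioning, scheduling and synchronization are handled by the runtime's fork/join pool rather than written out by hand:

```java
import java.util.stream.LongStream;

// Illustrative sketch: the runtime, not the programmer, decides how to
// partition the range and synchronize the partial results.
public class ImplicitSum {
    public static long sum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```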
 
==Programming languages that support explicit parallelism==
Programming languages and libraries that support explicit parallelism include:
*[[Ada programming language|Ada]]
*[[Ease programming language|Ease]]
*[[Erlang (programming language)|Erlang]]
*[[Java (programming language)|Java]]
*[[JavaSpaces]]
*[[Message Passing Interface]]
*[[Occam (programming language)|Occam]]
*[[Parallel Virtual Machine]]
 
== References ==
{{reflist |refs=
<ref name="dij68">{{cite journal |title=The structure of the "THE"-multiprogramming system|date=1968-05-01 |first=Edsger W. |last=Dijkstra |author-link=Edsger W. Dijkstra |journal=Communications of the ACM |volume=11 |issue=5 |pages=341–346 |doi=10.1145/363095.363143 }}</ref>
 
<ref name="pra11">{{cite conference |title=Parallel programming: design of an overview class |date=June 2011 |first=Christoph |last=von Praun |conference=Proceedings of the 2011 ACM SIGPLAN X10 Workshop |number=2 |pages=1–6 |doi=10.1145/2212736.2212738 }}</ref>
}}
 
{{Parallel Computing}}
{{DEFAULTSORT:Explicit Parallelism}}
[[Category:Parallel computing]]
 
{{compu-sci-stub}}