Loop-level parallelism

* DOPIPE Parallelism
 
Each implementation varies slightly in how threads synchronize, if at all. In addition, parallel tasks must be mapped to processes. These tasks can be allocated either statically or dynamically. Research has shown that some dynamic allocation algorithms achieve better load balancing than static allocation.<ref>{{cite journal|last1=Kavi|first1=Krishna|title=Parallelization of DOALL and DOACROSS Loops-a Survey|url=https://www.researchgate.net/publication/220662641_Parallelization_of_DOALL_and_DOACROSS_Loops-a_Survey}}</ref>
 
The process of parallelizing a sequential program can be broken down into the following discrete steps.<ref name="Solihin" /> Each of the concrete loop-parallelization techniques below implicitly performs these steps.
=== DOACROSS parallelism ===
 
DOACROSS parallelism exists where iterations of a loop are parallelized by extracting calculations that can be performed independently and running them simultaneously.<ref>{{citation|last1=Unnikrishnan|first1=Priya|title=Euro-Par 2012 Parallel Processing|volume=7484|pages=219–231|doi=10.1007/978-3-642-32820-6_23|series=Lecture Notes in Computer Science|year=2012|isbn=978-3-642-32819-0|chapter=A Practical Approach to DOACROSS Parallelization|s2cid=18571258|chapter-url=https://semanticscholar.org/paper/0885cd07bc4affd8f433bd3b4ee56012101ae09a|doi-access=free}}</ref>
 
Synchronization is used to enforce the loop-carried dependences between iterations.
== See also ==
* [[Data parallelism]]
* [[DOACROSS parallelism]]
* [[Task parallelism]]
* Parallelism using different memory models, such as [[Shared memory|shared memory]], [[Distributed memory|distributed memory]], and [[Message Passing Interface|message passing]]