'''Loop-level parallelism''' is a form of [[parallelism (computing)|parallelism]] in [[software programming]] that is concerned with extracting parallel tasks from [[Control flow#Loops|loops]]. The opportunity for loop-level parallelism often arises in programs where data is stored in [[random access]] [[data structures]]. Where a sequential program will iterate over the data structure and operate on indices one at a time, a program exploiting loop-level parallelism will use multiple [[thread (computing)|threads]] or [[process (computing)|processes]] which operate on some or all of the indices at the same time. Such parallelism provides a [[speedup]] to the overall execution time of the program, typically in line with [[Amdahl's law]].
 
== Description ==
 
For simple loops, where each iteration is independent of the others, loop-level parallelism can be [[embarrassingly parallel]], as parallelizing only requires assigning a process to handle each iteration. However, many algorithms are designed to run sequentially, and fail when parallel processes [[race condition|race]] due to dependence within the code. Sequential algorithms are sometimes applicable to parallel contexts with slight modification. Usually, though, they require [[synchronization (computer science)|process synchronization]]. Synchronization can be either implicit, via [[message passing]], or explicit, via synchronization primitives like [[Semaphore (programming)|semaphores]].
 
== Example ==
Consider the following code operating on a list <code>L</code> of length <code>n</code>.
 
{{sxhl|lang=c|1=
for (int i = 0; i < n; ++i) {
S1: L[i] += 10;
}
}}
 
Each iteration of the loop takes the value from the current index of <code>L</code>, and increments it by 10. If statement <code>S1</code> takes <code>T</code> time to execute, then the loop takes time <code>n * T</code> to execute sequentially, ignoring time taken by loop constructs. Now, consider a system with <code>p</code> processors where <code>p > n</code>. If <code>n</code> threads run in parallel, the time to execute all <code>n</code> steps is reduced to <code>T</code>.
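For illustration only (the example above does not assume any particular API, and the function wrapper and its name are illustrative), such an independent loop can be expressed with an [[OpenMP]] work-sharing directive, which divides the iterations of the loop among the available threads:

{{sxhl|lang=c|1=
void add_ten(int *L, int n)
{
    /* Every iteration is independent, so the iterations can be
       divided among the threads of the parallel region. */
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        L[i] += 10;
    }
}
}}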
 
Less simple cases produce inconsistent, i.e. [[serializability|non-serializable]] outcomes. Consider the following loop operating on the same list <code>L</code>.
 
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
S1: L[i] = L[i-1] + 10;
}
}}
 
Each iteration sets the current index to be the value of the previous index plus ten. When run sequentially, each iteration is guaranteed that the previous iteration already has the correct value. With multiple threads, [[process scheduling]] and other considerations prevent the execution order from guaranteeing that an iteration will execute only after its dependence is met; it may very well happen before, leading to unexpected results. Serializability can be restored by adding synchronization to preserve the dependence on previous iterations.
 
== Dependencies in code ==
 
Transforming a sequential program into a parallel version requires that [[Data dependency|dependencies]] within the code are preserved. There are several types of dependences that can be found within code.<ref name="Solihin">{{cite book|last1=Solihin|first1=Yan|title=Fundamentals of Parallel Architecture|date=2016|publisher=CRC Press|___location=Boca Raton, FL|isbn=978-1-4822-1118-4}}</ref><ref>{{cite book|last1=Goff|first1=Gina|chapter=Practical dependence testing|title=Proceedings of the ACM SIGPLAN 1991 conference on Programming language design and implementation - PLDI '91|year=1991|pages=15–29|doi=10.1145/113445.113448|isbn=0897914287|s2cid=2357293}}</ref>
 
{| class="wikitable"
|-
! Type
! Notation
! Description
|-
| True (flow) dependence
| <code>S1 ->T S2</code>
| S1 writes to a ___location that S2 later reads.
|-
| Anti dependence
| <code>S1 ->A S2</code>
| S1 reads from a ___location that S2 later writes to.
|-
| Output dependence
| <code>S1 ->O S2</code>
| S1 and S2 both write to the same ___location.
|-
| Input dependence
| <code>S1 ->I S2</code>
| S1 and S2 both read from the same ___location.
|}
 
In order to preserve the sequential behaviour of a loop when run in parallel, true dependence must be preserved. Anti-dependence and output dependence can be dealt with by giving each process or thread its own copy of variables (known as privatization); true dependence, however, requires process synchronization.<ref name="Solihin" />
 
=== Example of true dependence ===
{{sxhl|lang=c|1=
S1: int a, b;
S2: a = 2;
S3: b = a + 40;
}}
<code>S2 ->T S3</code>, meaning that there is a true dependence from S2 to S3 because S2 writes to the variable <code>a</code>, which S3 later reads from.
 
=== Example of anti-dependence ===
{{sxhl|lang=c|1=
S1: int a, b = 40;
S2: a = b - 38;
S3: b = -1;
}}
<code>S2 ->A S3</code>, meaning that there is an anti-dependence from S2 to S3 because S2 reads from the variable <code>b</code> before S3 writes to it.
 
=== Example of output dependence ===
{{sxhl|lang=c|1=
S1: int a, b = 40;
S2: a = b - 38;
S3: a = 2;
}}
<code>S2 ->O S3</code>, meaning that there is an output dependence from S2 to S3 because both write to the variable <code>a</code>.
 
=== Example of input dependence ===
{{sxhl|lang=c|1=
S1: int a, b, c = 2;
S2: a = c - 1;
S3: b = c + 1;
}}
<code>S2 ->I S3</code>, meaning that there is an input dependence from S2 to S3 because S2 and S3 both read from the variable <code>c</code>.
 
=== Loop-carried vs. loop-independent dependence ===
Dependences in a loop can be classified by whether they hold between statements in different iterations or only between statements within a single iteration:

* Loop-carried dependence
* Loop-independent dependence
 
In loop-independent dependence, loops have dependences within individual iterations, but do not have dependences between iterations. Each iteration may be treated as a block and performed in parallel without other synchronization efforts.
 
In the following example code used for swapping the values of two arrays of length n, there is a loop-independent dependence of <code>S1[i] ->T S3[i]</code>.
{{sxhl|lang=c|1=
for (int i = 0; i < n; ++i) {
S1: tmp = a[i];
S2: a[i] = b[i];
S3: b[i] = tmp;
}
}}
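Note that the scalar <code>tmp</code> is shared between iterations as written. As an illustrative sketch (assuming OpenMP, which the example does not specify), privatizing <code>tmp</code>, as described in the section on dependences above, removes the anti- and output dependences it would otherwise introduce and lets every iteration run in parallel:

{{sxhl|lang=c|1=
int tmp;
/* private(tmp) gives every thread its own copy of tmp, removing the
   anti- and output dependences the shared scalar would create. */
#pragma omp parallel for private(tmp)
for (int i = 0; i < n; ++i) {
    tmp = a[i];
    a[i] = b[i];
    b[i] = tmp;
}
}}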
In loop-carried dependence, statements in an iteration of a loop depend on statements in another iteration of the loop. Loop-carried dependence uses a modified version of the dependence notation seen earlier. The following denotes that a statement carries a true dependence to a later iteration, meaning that one iteration of the loop writes to a ___location read by a subsequent iteration.
 
Example of loop-carried dependence where <code>S1[i] ->T S1[i + 1]</code>, where <code>i</code> indicates the current iteration, and <code>i + 1</code> indicates the next iteration.
 
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
S1: a[i] = a[i - 1] + 1;
}
}}
 
=== Loop-carried dependence graph ===
A loop-carried dependence graph graphically shows the loop-carried dependences between iterations. Each iteration is listed as a node on the graph, and directed edges show the true, anti, and output dependences between each iteration.
 
== Types of loop-level parallelism ==
 
Loops are one of the most common sources of parallelism found within code, and there are a variety of methodologies for parallelizing them, including:

* DISTRIBUTED loop
* DOALL parallelism
* DOACROSS parallelism
* DOPIPE parallelism
* HELIX <ref name="Parallelism in DOACROSS Loops">{{cite web|last1=Murphy|first1=Niall|title=Discovering and exploiting parallelism in DOACROSS loops|url=https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-882.pdf|website=University of Cambridge|accessdate=10 September 2016}}</ref>
 
Each implementation varies slightly in how threads synchronize, if at all. In addition, parallel tasks must somehow be mapped to a process. These tasks can either be allocated statically or dynamically. Research has shown that load-balancing can be better achieved through some dynamic allocation algorithms than when done statically.<ref>{{cite journal|last1=Kavi|first1=Krishna|title=Parallelization of DOALL and DOACROSS Loops-a Survey|url=https://www.researchgate.net/publication/220662641_Parallelization_of_DOALL_and_DOACROSS_Loops-a_Survey|accessdate=13 September 2016}}</ref>
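For example, OpenMP exposes the choice between static and dynamic allocation of iterations through its <code>schedule</code> clause. The following is an illustrative sketch only; <code>expensive_work</code> is a hypothetical per-iteration task, and the cited survey does not refer to OpenMP:

{{sxhl|lang=c|1=
/* Static allocation: iterations are divided among threads before the
   loop starts, with no scheduling overhead at run time. */
#pragma omp parallel for schedule(static)
for (int i = 0; i < n; ++i) {
    a[i] = expensive_work(i);   /* hypothetical per-iteration task */
}

/* Dynamic allocation: each thread grabs the next chunk of 4 iterations
   when it finishes its previous chunk, improving load balance when
   iteration costs are uneven. */
#pragma omp parallel for schedule(dynamic, 4)
for (int i = 0; i < n; ++i) {
    a[i] = expensive_work(i);
}
}}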
 
The process of parallelizing a sequential program can be broken down into the following discrete steps.<ref name="Solihin" /> Each of the concrete loop-parallelization techniques below implicitly performs them.
 
{| class="wikitable"
|-
! Type
! Description
|-
| Decomposition
| The program is broken down into tasks, the smallest exploitable unit of concurrency.
|-
| Assignment
| Tasks are assigned to processes.
|-
| Orchestration
| Data access, communication, and synchronization of processes.
|-
| Mapping
| Processes are bound to processors.
|}
 
=== DISTRIBUTED loop ===
When a loop has a loop-carried dependence, one way to parallelize it is to distribute the loop into several different loops. Statements that are not dependent on each other are separated so that these distributed loops can be executed in parallel. For example, consider the following code.
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
S1: a[i] = a[i-1] + b[i];
S2: c[i] += d[i];
}
}}
The loop has a loop-carried dependence <code>S1[i] ->T S1[i+1]</code>, but S1 and S2 have no dependence on each other, so we can rewrite the code as follows.
{{sxhl|lang=c|1=
loop1: for (int i = 1; i < n; ++i) {
S1: a[i] = a[i-1] + b[i];
}
loop2: for (int i = 1; i < n; ++i) {
S2: c[i] += d[i];
}
}}
Note that now loop1 and loop2 can be executed in parallel. Instead of a single instruction being performed in parallel on different data as in data-level parallelism, here different loops perform different tasks on different data. Let the execution times of S1 and S2 be <math>T_{S_1}</math> and <math>T_{S_2}</math>. Then the execution time for the sequential form of the above code is <math>n*(T_{S_1}+T_{S_2})</math>; splitting the two statements into two separate loops that run concurrently reduces this to <math>\max(n*T_{S_1}, n*T_{S_2})</math>. This type of parallelism is known as either function or task parallelism.
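As a sketch of how the two distributed loops might actually be run concurrently (assuming OpenMP sections; the article's example does not prescribe a mechanism), each loop can be placed in its own section of a parallel region:

{{sxhl|lang=c|1=
#pragma omp parallel sections
{
    #pragma omp section
    {   /* loop1: carries the dependence on a[i-1], so it stays sequential */
        for (int i = 1; i < n; ++i) {
            a[i] = a[i-1] + b[i];
        }
    }
    #pragma omp section
    {   /* loop2: independent of loop1, runs concurrently with it */
        for (int i = 1; i < n; ++i) {
            c[i] += d[i];
        }
    }
}
}}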
 
=== DOALL parallelism ===
DOALL parallelism exists when statements within a loop can be executed independently (situations where there is no loop-carried dependence).<ref name="Solihin" /> For example, the following code does not read from the array <code>a</code>, and does not update the arrays <code>b, c</code>. No iterations have a dependence on any other iteration.
 
{{sxhl|lang=c|1=
for (int i = 0; i < n; ++i) {
S1: a[i] = b[i] + c[i];
}
}}
 
Suppose the time of one execution of S1 is <math>T_{S_1}</math>; then the execution time for the sequential form of the above code is <math>n*T_{S_1}</math>. Because DOALL parallelism exists when all iterations are independent, speed-up may be achieved by executing all iterations in parallel, which gives an execution time of <math>T_{S_1}</math>, the time taken for one iteration in sequential execution.
 
The following example, using simplified pseudocode, shows how a loop might be parallelized to execute each iteration independently.
{{sxhl|lang=c|1=
begin_parallelism();
for (int i = 0; i < n; ++i) {
S1: a[i] = b[i] + c[i];
end_parallelism();
}
block();
}}
 
=== DOACROSS parallelism ===
 
DOACROSS parallelism exists where iterations of a loop are parallelized by extracting calculations that can be performed independently and running them simultaneously.<ref>{{citation|last1=Unnikrishnan|first1=Priya|chapter=A Practical Approach to DOACROSS Parallelization|title=Euro-Par 2012 Parallel Processing|series=Lecture Notes in Computer Science|volume=7484|year=2012|pages=219–231|doi=10.1007/978-3-642-32820-6_23|isbn=978-3-642-32819-0|s2cid=18571258|doi-access=free}}</ref><ref name="Solihin" /> DOALL and DOACROSS parallelism are also forms of [[Data parallelism|data parallelism]], since different parallel tasks perform the same set of computations on different data.
 
Synchronization exists to enforce loop-carried dependence.
 
Consider the following, synchronous loop with dependence <code>S1[i] ->T S1[i + 1]</code>.
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
a[i] = a[i - 1] + b[i] + 1;
}
}}
 
Each loop iteration performs two actions:
 
* Calculate <code>a[i - 1] + b[i] + 1</code>
* Assign the value to <code>a[i]</code>
 
Calculating the value <code>a[i - 1] + b[i] + 1</code>, and then performing the assignment, can be decomposed into two lines (statements S1 and S2):
 
{{sxhl|lang=c|1=
S1: int tmp = b[i] + 1;
S2: a[i] = a[i - 1] + tmp;
}}
 
The first line, <code>int tmp = b[i] + 1;</code>, has no loop-carried dependence. The loop can then be parallelized by computing the temp value in parallel, and then synchronizing the assignment to <code>a[i]</code>.
 
{{sxhl|lang=c|1=
post(0);
for (int i = 1; i < n; ++i) {

S1: int tmp = b[i] + 1;
wait(i - 1);

S2: a[i] = a[i - 1] + tmp;
post(i);
}
}}
Suppose the execution times of S1 and S2 are <math>T_{S_1}</math> and <math>T_{S_2}</math>; then the execution time for the sequential form of the above code is <math>n*(T_{S_1}+T_{S_2})</math>. Because DOACROSS parallelism exists, speed-up may be achieved by executing iterations in a pipelined fashion, which gives an execution time of <math>T_{S_1} + n*T_{S_2}</math>.
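The <code>post</code> and <code>wait</code> operations above are abstract synchronization primitives. As an illustrative sketch only (the names <code>wait_for</code> and <code>doacross</code>, and the use of OpenMP and C11 atomics, are assumptions rather than part of the original example), they could be realized with a shared atomic counter recording the highest iteration whose value of <code>a</code> has been written:

{{sxhl|lang=c|1=
#include <stdatomic.h>

atomic_int done;   /* highest iteration i for which a[i] has been written */

void post(int i)     { atomic_store(&done, i); }
void wait_for(int i) { while (atomic_load(&done) < i) { /* spin */ } }

void doacross(int *a, int *b, int n)
{
    atomic_store(&done, 0);               /* post(0): a[0] is already valid */
    #pragma omp parallel for schedule(static, 1)
    for (int i = 1; i < n; ++i) {
        int tmp = b[i] + 1;               /* S1: no loop-carried dependence */
        wait_for(i - 1);                  /* block until a[i-1] is ready    */
        a[i] = a[i - 1] + tmp;            /* S2: serialized part            */
        post(i);                          /* publish a[i] to iteration i+1  */
    }
}
}}

Because each iteration posts only after waiting for its predecessor, the counter increases monotonically, so the S2 assignments execute in order while the S1 computations overlap.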
 
=== DOPIPE parallelism ===
DOPIPE Parallelism implements pipelined parallelism for loop-carried dependence where a loop iteration is distributed over multiple, synchronized loops.<ref name="Solihin" /> The goal of DOPIPE is to act like an assembly line, where one stage is started as soon as there is sufficient data available for it from the previous stage.<ref>{{cite web|title=DoPipe: An Effective Approach to Parallelize Simulation|url=https://software.intel.com/sites/default/files/m/a/a/7/d/6/12758-MC_Forum_Zangbinyu_dopipe.pdf|website=Intel|accessdate=13 September 2016}}</ref>
 
Consider the following, synchronous code with dependence <code>S1[i] ->T S1[i + 1]</code>.
 
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
S1: a[i] = a[i - 1] + b[i];
S2: c[i] += a[i];
}
}}
 
S1 must be executed sequentially, but S2 has no loop-carried dependence. S2 could be executed in parallel using DOALL Parallelism after performing all calculations needed by S1 in series. However, the speedup is limited if this is done. A better approach is to parallelize such that the S2 corresponding to each S1 executes when said S1 is finished.
 
Implementing pipelined parallelism results in the following set of loops, where the second loop may execute for an index as soon as the first loop has finished its corresponding index.
 
{{sxhl|lang=c|1=
for (int i = 1; i < n; ++i) {
S1: a[i] = a[i - 1] + b[i];
post(i);
}

for (int i = 1; i < n; ++i) {
wait(i);
S2: c[i] += a[i];
}
}}
Suppose the execution times of S1 and S2 are <math>T_{S_1}</math> and <math>T_{S_2}</math>; then the execution time for the sequential form of the above code is <math>n*(T_{S_1}+T_{S_2})</math>. Because DOPIPE parallelism exists, speed-up may be achieved by executing iterations in a pipelined fashion, which gives an execution time of <math>n*T_{S_1} + (n/p)*T_{S_2}</math>, where {{mvar|p}} is the number of processors working in parallel.
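As with DOACROSS, the <code>post</code>/<code>wait</code> pair above is abstract. A minimal sketch of the two pipeline stages (assuming OpenMP sections and a C11 atomic counter; the function name <code>dopipe</code> is illustrative and not part of the original example) could look as follows:

{{sxhl|lang=c|1=
#include <stdatomic.h>

atomic_int ready;   /* highest index i for which a[i] has been produced */

void dopipe(int *a, int *b, int *c, int n)
{
    atomic_store(&ready, 0);
    #pragma omp parallel sections
    {
        #pragma omp section
        {   /* producer stage: runs S1 sequentially and publishes each a[i] */
            for (int i = 1; i < n; ++i) {
                a[i] = a[i - 1] + b[i];
                atomic_store(&ready, i);          /* post(i) */
            }
        }
        #pragma omp section
        {   /* consumer stage: starts S2 for index i as soon as a[i] exists */
            for (int i = 1; i < n; ++i) {
                while (atomic_load(&ready) < i) { /* wait(i) */ }
                c[i] += a[i];
            }
        }
    }
}
}}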
 
 
== See also ==
* [[Data parallelism]]
* [[DOACROSS parallelism]]
* [[Task parallelism]]
* Parallelism using different memory models, such as [[Shared memory|shared memory]], [[Distributed memory|distributed memory]], and [[Message Passing Interface|message passing]]
 
== References ==