Automatic parallelization, also called auto parallelization or autoparallelization, refers to the use of a modern optimizing [[compiler]] (a parallelizing compiler) to convert sequential code into multi-threaded or vectorized code so that multiple processors can work on it simultaneously.
== Compiler Analysis ==
The compiler usually conducts two passes of analysis before actual parallelization in order to determine the following:
* Is it safe to parallelize the loop?
* Is it worthwhile to parallelize it?
The first pass addresses safety: it performs a dependence analysis of the loop to determine whether each iteration can be executed independently of the results of the others.
The second pass attempts to justify the parallelization effort by comparing the theoretical execution time of the code after parallelization to the code's sequential execution time. Somewhat counterintuitively, code does not always benefit from parallel execution: the extra overhead associated with using multiple processors can eat into the potential speedup of parallelized code.
== A Brief Example of Auto-Parallelization ==
For example, code 1 below can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array z is correct for any execution order of the iterations.
<pre>
!code 1 (loop body is illustrative; each iteration touches only element i)
do i = 1, n
   z(i) = x(i) + y(i)
enddo
</pre>
On the other hand, code 2 below cannot be auto-parallelized, because the value of z(i) depends on the result of the previous iteration, z(i-1).
<pre>
!code 2 (loop body is illustrative; each iteration reads the previous result)
do i = 2, n
   z(i) = z(i-1) * 2
enddo
</pre>
This does not mean that the code cannot be parallelized. However, current parallelizing compilers are not capable of exposing such parallelism automatically, and it is questionable whether this code would benefit from parallelization in the first place.