Automatic parallelization: Difference between revisions

The [[Fortran]] code below can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array <code>z</code> will be correct regardless of the execution order of the other iterations.
<pre>
do i = 1, n
  z(i) = x(i) + y(i)
enddo
</pre>
On the other hand, the following code cannot be auto-parallelized as written, because the value of <code>z(i)</code> depends on the result of the previous iteration, <code>z(i-1)</code>.
<pre>
do i = 2, n
  z(i) = z(i-1)*2
enddo
</pre>
 
<pre>
do i = 2, n
  z(i) = z(1)*2**(i-1)
enddo
</pre><!-- Yes, it would be more efficient to use bit-shifting, but let's keep it simple. -->
 
However, current parallelizing compilers are not usually capable of extracting this parallelism automatically, and it is questionable whether this code would benefit from parallelization in the first place. <!-- Really? That seems doubtful. Maybe we should have an example of tricky-to-parallelize code like this, and an example of something actually impossible to parallelize? -->
 
==Difficulties==
Automatic parallelization by compilers or tools is difficult for the following reasons: