Automatic parallelization, also called auto parallelization or autoparallelization, refers to the use of a modern optimizing [[compiler]] (a parallelizing compiler) to convert sequential code into multi-threaded or vectorized code (or both) in order to use several processors simultaneously in a [[shared-memory multiprocessor]] (SMP) machine. Its goal is to relieve programmers from the tedious and error-prone process of manual parallelization.
 
The main focus of automatic parallelization is the [[loop]]s in a program, because in general most of the execution time is spent inside loops. An auto-parallelizing compiler tries to split a loop up so that its iterations can be executed on separate processors concurrently.
 
The compiler usually conducts two passes of analysis before actual parallelization:
* Is it safe to parallelize the loop?
* Is it worthwhile to parallelize it?
 
The first phase largely involves [[data dependence analysis]] of the loop to decide whether each iteration can be executed independently of the others. The second phase tries to justify the parallelization effort by comparing the estimated execution time of the parallelized code with that of the original sequential code, because parallelization introduces extra overhead.
 
For example, code 1 can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array z is correct regardless of the execution order of the iterations.