'''Automatic parallelization''' (also known as '''auto parallelization''' or '''autoparallelization''') refers to the use of an optimizing, parallelizing [[compiler]] to convert sequential [[source code|code]] into [[multi-threaded]] or vectorized code (or both) in order to use multiple processors simultaneously in a shared-memory [[multiprocessor]] (SMP) machine. The goal of automatic parallelization is to relieve programmers of the tedious and error-prone manual parallelization process. Despite decades of progress, fully automatic parallelization of sequential programs by compilers remains a grand challenge, owing to the complex [[program analysis]] required and to factors, such as the range of the input data, that are unknown at compile time.
 
The programming control structures on which automatic parallelization places the most focus are [[Control flow#Loops|loop]]s, because, in general, most of a program's [[execution time]] is spent inside some form of loop. An automatically parallelizing compiler tries to split up a loop so that its [[iterations]] can be executed concurrently on separate [[processors]].