Each implementation varies slightly in how threads work together to manage data. In addition, parallel tasks must be mapped to processors, and this allocation can be done either statically or dynamically. Research has shown that some dynamic allocation algorithms achieve better load balancing than static allocation.<ref>{{cite journal|last1=Kavi|first1=Krishna|title=Parallelization of DOALL and DOACROSS Loops-a Survey|accessdate=13 September 2016|url=https://www.researchgate.net/publication/220662641_Parallelization_of_DOALL_and_DOACROSS_Loops-a_Survey}}</ref>
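The difference between static and dynamic allocation can be illustrated with OpenMP scheduling clauses. The following is a minimal sketch, not drawn from the cited survey; the arrays, the chunk size, and the amount of work per iteration are assumptions chosen only to contrast the two policies.
<syntaxhighlight lang="c">
#include <omp.h>
#include <math.h>

#define N 100000

/* Illustrative only: a[] and b[] are assumed input arrays. */
void scale_arrays(double *a, double *b)
{
    /* Static allocation: the N iterations are split into fixed chunks
       before the loop runs, one chunk per thread. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = sqrt(a[i]);

    /* Dynamic allocation: idle threads grab the next chunk of 64
       iterations at run time, which balances the load when the cost
       of an iteration varies. */
    #pragma omp parallel for schedule(dynamic, 64)
    for (int i = 0; i < N; i++)
        b[i] = (i % 2 == 0) ? sqrt(b[i]) : b[i];
}
</syntaxhighlight>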
The process of parallelizing a sequential program can be broken down into four discrete steps, summarized in the table below and illustrated by the sketch that follows it.<ref name="Solihin" />
{| class="wikitable"
|-
! Step
! Description
|-
| Decomposition
| The program is broken down into tasks, the smallest exploitable units of concurrency.
|-
| Assignment
| Tasks are assigned to processes.
|-
| Orchestration
| Data access, communication, and synchronization of processes are organized.
|-
| Mapping
| Processes are bound to processors.
|}
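As a hypothetical illustration of the four steps, the following POSIX-threads sketch parallelizes a simple loop; the array, the chunk size, and the thread count are assumptions of this example rather than part of the cited text.
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double a[N];

/* Decomposition: the loop is broken into NTHREADS contiguous chunks,
   each chunk being one task. */
typedef struct { int start, end; } task_t;

/* Assignment: each task is handed to one worker thread, which works
   on its own chunk without touching the others' data. */
static void *worker(void *arg)
{
    task_t *t = (task_t *)arg;
    for (int i = t->start; i < t->end; i++)
        a[i] = a[i] * 2.0;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    task_t tasks[NTHREADS];
    int chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        tasks[t].start = t * chunk;
        tasks[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, worker, &tasks[t]);
    }

    /* Orchestration: joining synchronizes the workers before the
       result is used.  Mapping of threads to processors is left to
       the operating system scheduler. */
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);

    printf("a[0] = %f\n", a[0]);
    return 0;
}
</syntaxhighlight>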
=== DOALL parallelism ===
</syntaxhighlight>
Note that loop1 and loop2 can now be executed in parallel. Instead of a single instruction being performed in parallel on different data, as in data-level parallelism, here different loops perform different tasks on different data. This type of parallelism is called functional or task parallelism.
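A minimal sketch of this kind of task parallelism, using OpenMP sections, is shown below; the arrays and the work done inside loop1 and loop2 are placeholders assumed for the example, not the code from the listing above.
<syntaxhighlight lang="c">
#include <omp.h>

#define N 1000

void run_loops(double a[N], double b[N], double c[N], double d[N])
{
    #pragma omp parallel sections
    {
        /* loop1: one task, operating on its own data */
        #pragma omp section
        for (int i = 0; i < N; i++)
            a[i] = a[i] + b[i];

        /* loop2: a different task, executed concurrently with loop1 */
        #pragma omp section
        for (int i = 0; i < N; i++)
            c[i] = c[i] * d[i];
    }
}
</syntaxhighlight>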
== References ==