{{More footnotes|date=May 2011}}
'''Task parallelism''' (also known as '''function parallelism''' and '''control parallelism''') is a form of [[parallelization]] of [[computer code]] across multiple [[Central processing unit|processor]]s in [[parallel computing]] environments. Task parallelism focuses on distributing [[Task (computing)|tasks]]—concurrently performed by [[Process (computing)|processes]] or [[Thread (computing)|threads]]—across different processors. In contrast to [[data parallelism]], which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.<ref>{{cite news|last1=Reinders|first1=James|title=Understanding task and data parallelism {{!}} ZDNet|url=http://www.zdnet.com/article/understanding-task-and-data-parallelism-3039289129/|accessdate=8 May 2017|work=ZDNet|date=10 September 2007|language=en}}</ref> A common type of task parallelism is [[Pipeline (computing)|pipelining]], which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
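As an illustrative sketch (not drawn from the cited sources), the following C++ program expresses task parallelism with <code>std::thread</code>: two different tasks run concurrently on the same data. The task bodies and data here are arbitrary examples.

<syntaxhighlight lang="cpp">
// Task parallelism: two *different* tasks operate concurrently
// on the *same* data.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::vector<int> data{3, 1, 4, 1, 5, 9, 2, 6};

    long long sum = 0;
    int maximum = 0;

    // Task A: compute the sum of the data.
    std::thread task_a([&] {
        sum = std::accumulate(data.begin(), data.end(), 0LL);
    });
    // Task B: find the maximum of the same data.
    std::thread task_b([&] {
        maximum = *std::max_element(data.begin(), data.end());
    });

    task_a.join();
    task_b.join();
    std::cout << "sum = " << sum << ", max = " << maximum << '\n';
}
</syntaxhighlight>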
==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a [[workflow]].<ref>{{cite book|last1=Quinn|first1=Michael J.|title=Parallel Programming in C with MPI and OpenMP|date=2007|publisher=Tata McGraw-Hill Pub.|___location=New Delhi|isbn=978-0070582019|edition=Tata McGraw-Hill}}</ref>
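The workflow-style communication described above can be sketched as a two-stage pipeline. The queue-based design below is an illustrative assumption rather than an implementation from the cited text: one thread produces values and a second thread consumes them, with data handed from stage to stage through a shared queue.

<syntaxhighlight lang="cpp">
// Two-stage pipeline: stage 1 and stage 2 run as separate threads
// and communicate by passing data through a shared queue.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> channel;     // data passed from stage 1 to stage 2
std::mutex m;
std::condition_variable cv;
bool done = false;

void stage1() {              // first task: produce squares
    for (int i = 1; i <= 5; ++i) {
        std::lock_guard<std::mutex> lock(m);
        channel.push(i * i);
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
}

void stage2() {              // second task: consume and print
    while (true) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !channel.empty() || done; });
        if (channel.empty() && done) break;
        int value = channel.front();
        channel.pop();
        lock.unlock();
        std::cout << value << '\n';
    }
}

int main() {
    std::thread t1(stage1), t2(stage2);
    t1.join();
    t2.join();
}
</syntaxhighlight>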