Task parallelism

{{More footnotes|date=May 2011}}
'''Task parallelism''' (also known as '''function parallelism''' and '''control parallelism''') is a form of [[parallelization]] of [[computer code]] across multiple [[Central processing unit|processor]]s in [[parallel computing]] environments. Task parallelism focuses on distributing [[Task (computing)|tasks]]—concurrently performed by [[Process (computing)|processes]] or [[Thread (computing)|threads]]—across different processors. In contrast to [[data parallelism]], which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.<ref>{{cite news|last1=Reinders|first1=James|title=Understanding task and data parallelism {{!}} ZDNet|url=http://www.zdnet.com/article/understanding-task-and-data-parallelism-3039289129/|accessdate=8 May 2017|work=ZDNet|date=10 September 2007|language=en}}</ref> A common type of task parallelism is [[Pipeline (computing)|pipelining]], which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
 
==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a [[workflow]].<ref>{{cite book|last1=Quinn|first1=Michael J.|title=Parallel programming in C with MPI and openMP|date=2007|publisher=Tata McGraw-Hill Pub.|___location=New Delhi|isbn=0070582017|edition=Tata McGraw-Hill ed.}}</ref>
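
For illustration only, the following minimal sketch in Java (the stage bodies, value counts, and queue size are arbitrary assumptions, not taken from any source cited here) shows two threads forming such a workflow, with the first task passing its results to the second through a shared queue:

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class WorkflowExample {
    public static void main(String[] args) throws InterruptedException {
        // Shared queue through which the first task passes data to the second.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        // First task: produce some values and hand them on.
        Thread stage1 = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                try {
                    queue.put(i * i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        // Second task: receive each value and process it.
        Thread stage2 = new Thread(() -> {
            for (int i = 1; i <= 5; i++) {
                try {
                    System.out.println("received " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        stage1.start();
        stage2.start();
        stage1.join();
        stage2.join();
    }
}
</syntaxhighlight>

Here the two stages run concurrently, and the queue both transfers the data and synchronizes the hand-off between them.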
 
As a simple example, if we are running code on a 2-processor system ([[CPU]]s "a" & "b") in a [[wikt:parallel|parallel]] environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the [[Run time (program lifecycle phase)|run time]] of the execution. The tasks can be assigned using [[Conditional (programming)|conditional statement]]s as described below.
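
A minimal sketch of this idea in Java (the task bodies are placeholders; the operating system, rather than an explicit CPU assignment, decides which processor runs each thread) might look like:

<syntaxhighlight lang="java">
public class TaskParallelExample {
    // Placeholder tasks "A" and "B"; in a real program these would be unrelated pieces of work.
    static void taskA() {
        System.out.println("Task A on " + Thread.currentThread().getName());
    }

    static void taskB() {
        System.out.println("Task B on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        // Each task gets its own thread; on a 2-processor machine the scheduler
        // can run them simultaneously on CPUs "a" and "b".
        Thread a = new Thread(TaskParallelExample::taskA);
        Thread b = new Thread(TaskParallelExample::taskB);
        a.start();
        b.start();
        a.join();   // wait for both tasks to finish
        b.join();
    }
}
</syntaxhighlight>

Because tasks "A" and "B" are independent, starting them on separate threads lets a 2-processor machine execute them at the same time instead of one after the other.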
 
Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data ([[data parallelism]]). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.<ref>{{cite web|last1=Hicks|first1=Michael|title=Concurrency Basics|url=http://www.cs.umd.edu/class/fall2013/cmsc433/lectures/concurrency-basics.pdf|website=University of Maryland: Department of Computer Science|accessdate=8 May 2017}}</ref>
 
 
 
==Language support==
 
===Task-parallel languages===
Examples of (fine-grained) task-parallel languages can be found in the realm of [[Hardware Description Language]]s like [[Verilog]] and [[VHDL]]. These can also be considered to represent a "code static" software paradigm, in which the program has a static structure and the data changes, as opposed to a "data static" model, in which the data does not change (or changes slowly) while the processing (the applied methods) changes, as in a database search.{{clarify|date=October 2010}}
 
===General-purpose languages===
Task parallelism can be supported in general-purpose languages through either built-in facilities or libraries; a brief sketch using one of these is shown after the list. Notable examples include:
* C++ (Intel): [[Threading Building Blocks]]
* Java: [[Java concurrency]]
* .NET: [[Task Parallel Library]]
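
As an illustrative sketch (the submitted tasks are placeholders, not drawn from any source cited here), the [[Java concurrency]] utilities listed above can express task parallelism by handing independent tasks to a thread pool:

<syntaxhighlight lang="java">
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorExample {
    public static void main(String[] args) throws Exception {
        // A thread pool sized to the available processors; each submitted task
        // may run on a different processor.
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Two unrelated tasks submitted for concurrent execution.
        Future<Integer> sum = pool.submit(() -> 1 + 2 + 3);
        Future<String>  msg = pool.submit(() -> "task B finished");

        System.out.println(sum.get());  // blocks until the first task completes
        System.out.println(msg.get());  // blocks until the second task completes
        pool.shutdown();
    }
}
</syntaxhighlight>

Each call to <code>submit</code> names a separate task; the pool is free to run them on different processors, and the <code>Future</code> objects collect their results.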
 
==See also==
*[[Parallel programming model]]
 
==References==
{{Reflist}}
 
 
{{Parallel Computing}}