{{short description|Form of parallelization of computer code}}
{{More footnotes|date=May 2011}}
'''Task parallelism''' (also known as '''function parallelism''' and '''control parallelism''') is a form of [[parallelization]] of [[Source code|computer code]] across multiple [[Central processing unit|processor]]s in [[parallel computing]] environments. Task parallelism focuses on distributing [[Task (computing)|tasks]]—concurrently performed by [[Process (computing)|processes]] or [[Thread (computing)|threads]]—across different processors. In contrast to [[data parallelism]], which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.<ref>{{cite news|last1=Reinders|first1=James|title=Understanding task and data parallelism}}</ref>
==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a [[workflow]].<ref>{{cite book|last1=Quinn|first1=Michael J.|title=Parallel programming in C with MPI and openMP|date=2007|publisher=Tata McGraw-Hill Pub.|___location=New Delhi|isbn=978-0070582019|edition=Tata McGraw-Hill}}</ref>
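As a minimal sketch of this workflow style of communication (an illustration, not drawn from the cited sources), the following [[Go (programming language)|Go]] program runs two tasks concurrently and passes data from the first to the second through a channel; the stage name <code>produce</code> is arbitrary:

<syntaxhighlight lang="go">
package main

import "fmt"

// produce is an arbitrary first stage: it computes values and
// passes each result to the next stage over a channel.
func produce(out chan<- int) {
	for i := 1; i <= 3; i++ {
		out <- i * i
	}
	close(out) // signal that this workflow stage is finished
}

func main() {
	ch := make(chan int)
	go produce(ch) // first task runs in its own thread of execution
	// The second task (the main goroutine) consumes the data as it
	// arrives, so the two tasks overlap in time.
	for v := range ch {
		fmt.Println("received", v)
	}
}
</syntaxhighlight>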
As a simple example, if a system is running code on a 2-processor system ([[CPU]]s "a" & "b") in a [[wikt:parallel|parallel]] environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the [[Run time (program lifecycle phase)|run time]] of the execution. The tasks can be assigned using [[Conditional (programming)|conditional statement]]s as described below.
Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data ([[data parallelism]]). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.<ref>{{cite web|last1=Hicks|first1=Michael|title=Concurrency Basics|url=http://www.cs.umd.edu/class/fall2013/cmsc433/lectures/concurrency-basics.pdf|website=University of Maryland: Department of Computer Science}}</ref>
'''Thread-level parallelism''' ('''TLP''') is the [[Parallel computing|parallelism]] inherent in an application that runs multiple [[Thread (computer science)|threads]] at once. This type of parallelism is found largely in applications written for commercial [[Server (computing)|server]]s, such as databases. By running many threads at once, these applications tolerate the high amounts of I/O and memory-system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work.
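A minimal sketch of this latency hiding (the 100&nbsp;ms sleep below stands in for a disk or memory access; it is not taken from the cited sources): several threads wait concurrently, so the total elapsed time is close to one delay rather than their sum.

<syntaxhighlight lang="go">
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	start := time.Now()
	var wg sync.WaitGroup
	// Launch four threads (goroutines) that each block on a
	// simulated I/O delay; while one waits, the others run.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // stand-in for I/O latency
			fmt.Printf("request %d done\n", id)
		}(i)
	}
	wg.Wait()
	// Elapsed time is roughly 100 ms rather than 400 ms because
	// the four waits overlap.
	fmt.Println("elapsed:", time.Since(start))
}
</syntaxhighlight>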
==Example==
The pseudocode below illustrates task parallelism:
 program:
 ...
 if CPU = "a" then
    do task "A"
 else if CPU = "b" then
    do task "B"
 end if
 ...
The goal of the program is to do some net total task ("A+B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows.
*In an [[SPMD]] (single program, multiple data) system, both [[CPU]]s will execute the code.
*In a parallel environment, both will have access to the same data.
*The "if" clause differentiates between the CPUs. CPU "a" will read true on the "if" and CPU "b" will read true on the "else if", thus having their own task.
==Language support==
Task parallelism can be supported in general-purpose languages by either built-in facilities or libraries. Notable examples include:
* Ada: Tasks (built-in)
* C++ (Intel): [[Threading Building Blocks]]
* C++ (Intel): [[Cilk Plus]]
* C++ (Open Source/Apache 2.0): [[RaftLib]]
* C, C++, Objective-C, Swift (Apple): [[Grand Central Dispatch]]
* D: [[Task (computing)|tasks]] and [[Fiber (computer science)|fibers]]
* Delphi (System.Threading.TParallel)
* Go: [[goroutine]]s
* Java: [[Java concurrency]]
* .NET: [[Task Parallel Library]]
Examples of fine-grained task-parallel languages can be found in the realm of [[Hardware Description Language]]s like [[Verilog]] and [[VHDL]].