{{More footnotes|date=May 2011}}
{{short description|Form of parallelization of computer code}}
'''Task parallelism''' (also known as '''function parallelism''' and '''control parallelism''') is a form of [[parallelization]] of [[Source code|computer code]] across multiple [[Central processing unit|processor]]s in [[parallel computing]] environments. Task parallelism focuses on distributing [[Task (computing)|tasks]]—concurrently performed by [[Process (computing)|processes]] or [[Thread (computing)|threads]]—across different processors. In contrast to [[data parallelism]], which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.<ref>{{cite news|last1=Reinders|first1=James|title=Understanding task and data parallelism}}</ref>
==Description==
As a simple example, consider code running on a two-processor system ([[CPU]]s "a" & "b") in a [[wikt:parallel|parallel]] environment, where we wish to perform tasks "A" and "B". It is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the [[Run time (program lifecycle phase)|run time]] of the execution. The tasks can be assigned using [[Conditional (programming)|conditional statement]]s as described below.
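A minimal sketch of this scenario in C++, using the standard <code>std::thread</code> facility; the task bodies and the "a"/"b" labels are illustrative assumptions, and the operating system decides on which processor each thread actually runs:

<syntaxhighlight lang="cpp">
#include <iostream>
#include <thread>

// Hypothetical task bodies; the names are placeholders.
void taskA() { std::cout << "task A\n"; }
void taskB() { std::cout << "task B\n"; }

// Both threads run the same function; a conditional statement on the
// CPU/thread label decides which task each one performs.
void worker(char cpu) {
    if (cpu == 'a')
        taskA();
    else if (cpu == 'b')
        taskB();
}

int main() {
    std::thread a(worker, 'a');  // may be scheduled on one processor...
    std::thread b(worker, 'b');  // ...and this on another, simultaneously
    a.join();
    b.join();
}
</syntaxhighlight>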
Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e., threads), as opposed to the data ([[data parallelism]]). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.<ref>{{cite web|last1=Hicks|first1=Michael|title=Concurrency Basics|url=http://www.cs.umd.edu/class/fall2013/cmsc433/lectures/concurrency-basics.pdf|website=University of Maryland: Department of Computer Science}}</ref>
'''Thread-level parallelism''' ('''TLP''') is the [[Parallel computing|parallelism]] inherent in an application that runs multiple [[Thread (computer science)|threads]] at once. This type of parallelism is found largely in applications written for commercial [[Server (computing)|server]]s such as databases. By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory-system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work.
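A minimal sketch of this latency-hiding effect, assuming a 100 ms sleep as a stand-in for a blocking disk or network access; the compute-bound work overlaps the simulated I/O wait, so the total elapsed time is roughly the longer of the two rather than their sum:

<syntaxhighlight lang="cpp">
#include <chrono>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Simulated blocking I/O: the thread sleeps, as it would while waiting
// for a disk or network request to complete.
void ioBoundTask() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

// CPU-bound work that can proceed while the other thread is blocked.
long long computeBoundTask() {
    std::vector<long long> v(1'000'000);
    std::iota(v.begin(), v.end(), 0LL);
    return std::accumulate(v.begin(), v.end(), 0LL);
}

int main() {
    auto start = std::chrono::steady_clock::now();
    std::thread io(ioBoundTask);
    long long sum = computeBoundTask();  // useful work overlaps the I/O wait
    io.join();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::cout << "sum=" << sum << ", elapsed=" << ms
              << " ms\n";  // roughly max(I/O, compute), not their sum
}
</syntaxhighlight>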
==Language support==
Task parallelism can be supported in general-purpose languages by either built-in facilities or libraries; a short library-based sketch follows the list below. Notable examples include:
* Ada: Tasks (built-in)
* C++ (Intel): [[Threading Building Blocks]]
* C++ (Intel): [[Cilk Plus]]
* C++ (Open Source/Apache 2.0): [[RaftLib]]
* C, C++, Objective-C, Swift (Apple): [[Grand Central Dispatch]]
* D: [[Task (computing)|tasks]] and [[Fiber (computer science)|fibers]]
* Delphi (System.Threading.TParallel)
* Go: [[goroutine]]s
* Java: [[Java concurrency]]
* .NET: [[Task Parallel Library]]
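As one illustration from the list above, a minimal sketch using [[Threading Building Blocks]] (assuming the oneTBB headers are installed); <code>tbb::parallel_invoke</code> runs its arguments as tasks that the scheduler may execute concurrently on different cores:

<syntaxhighlight lang="cpp">
#include <iostream>
#include <tbb/parallel_invoke.h>

int main() {
    // Each lambda is an independent task; the TBB runtime decides
    // how to map the tasks onto the available processors.
    tbb::parallel_invoke(
        [] { std::cout << "task A\n"; },
        [] { std::cout << "task B\n"; });
    return 0;
}
</syntaxhighlight>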
Examples of fine-grained task-parallel languages can be found in the realm of [[Hardware Description Language]]s such as [[Verilog]] and [[VHDL]].