{{More footnotes|date=May 2011}}
{{short description|Form of parallelization of computer code}}
'''Task parallelism''' (also known as '''function parallelism''' and '''control parallelism''') is a form of [[parallelization]] of [[Source code|computer code]] across multiple [[Central processing unit|processor]]s in [[parallel computing]] environments. Task parallelism focuses on distributing [[Task (computing)|tasks]]—concurrently performed by [[Process (computing)|processes]] or [[Thread (computing)|threads]]—across different processors. In contrast to [[data parallelism]] which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.<ref>{{cite news|last1=Reinders|first1=James|title=Understanding task and data parallelism|url=https://www.zdnet.com/article/understanding-task-and-data-parallelism/|access-date=8 May 2017|work=ZDNet|date=10 September 2007|language=en}}</ref> A common type of task parallelism is [[Pipeline (computing)|pipelining]], which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
 
==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a [[workflow]].<ref>{{cite book|last1=Quinn|first1=Michael J.|title=Parallel programming in C with MPI and openMP|date=2007|publisher=Tata McGraw-Hill Pub.|___location=New Delhi|isbn=978-0070582019|edition=Tata McGraw-Hill}}</ref>
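
Such a workflow can be sketched as two threads connected by a queue, so that one thread's output becomes the next thread's input. The following minimal Java sketch (the class name, stage bodies and the choice of a blocking queue are illustrative, not taken from a particular source) forms a two-stage pipeline:

<syntaxhighlight lang="java">
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Minimal two-stage pipeline: one thread produces values, another consumes them. */
public class PipelineExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        int end = -1; // sentinel value marking the end of the stream

        // Stage 1: compute squares and pass them downstream.
        Thread stage1 = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i * i);
                }
                queue.put(end);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Stage 2: consume values as they arrive and print them.
        Thread stage2 = new Thread(() -> {
            try {
                int value;
                while ((value = queue.take()) != end) {
                    System.out.println("received " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        stage1.start();
        stage2.start();
        stage1.join();
        stage2.join();
    }
}
</syntaxhighlight>

Each stage only needs to agree with its neighbour on the element type passed through the queue; a longer pipeline simply adds more threads and more queues.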
 
As a simple example, if a system is running code on a 2-processor system ([[CPU]]s "a" & "b") in a [[wikt:parallel|parallel]] environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the [[Run time (program lifecycle phase)|run time]] of the execution. The tasks can be assigned using [[Conditional (computer programming)|conditional statement]]s as described below.
 
Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data ([[data parallelism]]). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.<ref>{{cite web|last1=Hicks|first1=Michael|title=Concurrency Basics|url=http://www.cs.umd.edu/class/fall2013/cmsc433/lectures/concurrency-basics.pdf|website=University of Maryland: Department of Computer Science|access-date=8 May 2017}}.</ref>
 
'''Thread-level parallelism''' ('''TLP''') is the [[Parallel computing|parallelism]] inherent in an application that runs multiple [[Thread (computer science)|threads]] at once. This type of parallelism is found largely in applications written for commercial [[Server (computing)|server]]s such as databases. By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory system latency their workloads can incur; while one thread is delayed waiting for a memory or disk access, other threads can do useful work.
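
The latency-hiding effect can be illustrated with two ordinary threads, one of which blocks on (simulated) I/O while the other keeps a processor busy. The class and task bodies below are invented for the illustration; <code>Thread.sleep</code> merely stands in for a real disk or network stall:

<syntaxhighlight lang="java">
/** Sketch: one thread waits on simulated I/O while another does CPU work. */
public class LatencyHidingExample {
    public static void main(String[] args) throws InterruptedException {
        // Stands in for a request that spends most of its time waiting on I/O.
        Thread ioBound = new Thread(() -> {
            try {
                Thread.sleep(100); // simulated blocking I/O
                System.out.println("I/O-bound task finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Pure computation that can proceed while the other thread waits.
        Thread cpuBound = new Thread(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i;
            }
            System.out.println("CPU-bound task finished, sum = " + sum);
        });

        ioBound.start();
        cpuBound.start();
        ioBound.join();
        cpuBound.join();
    }
}
</syntaxhighlight>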
 
The exploitation of thread-level parallelism has also begun to make inroads into the desktop market with the advent of [[multi-core]] microprocessors. This has occurred because, for various reasons, it has become increasingly impractical to increase either the clock speed or instructions per clock of a single core. If this trend continues, new applications will have to be designed to utilize multiple threads in order to benefit from the increase in potential computing power. This contrasts with previous microprocessor innovations in which existing code was automatically sped up by running it on a newer/faster computer.
 
==Example==
 program:
 ...
 if CPU = "a" then
    do task "A"
 else if CPU = "b" then
    do task "B"
 end if
 ...
The goal of the program is to do some net total task ("A+B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows.
 
*In an [[SPMD]] (single program, multiple data) system, both [[CPU]]s will execute the code.
*In a parallel environment, both will have access to the same data.
*The "if" clause differentiates between the CPU'sCPUs. CPU "a" will read true on the "if" and CPU "b" will read true on the "else if", thus having their own task.
*Now, both CPU's execute separate code blocks simultaneously, performing different tasks simultaneously.
 
This concept can now be generalized to any number of processors.
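
In a threaded programming model the same idea is usually expressed without inspecting processor identifiers at all: each task becomes its own unit of work and the runtime schedules those units onto the available processors. A minimal Java sketch of the two-task example (the task bodies and names are placeholders):

<syntaxhighlight lang="java">
/** Sketch: two independent tasks run in parallel on plain threads. */
public class TaskParallelExample {
    static void taskA() { System.out.println("task A done"); }
    static void taskB() { System.out.println("task B done"); }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(TaskParallelExample::taskA); // plays the role of CPU "a"
        Thread b = new Thread(TaskParallelExample::taskB); // plays the role of CPU "b"
        a.start();
        b.start();
        a.join(); // wait until the net total task ("A+B") is complete
        b.join();
    }
}
</syntaxhighlight>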
 
==Language support==
Task parallelism can be supported in general-purpose languages by either built-in facilities or libraries. Notable examples include:
 
* Ada: Tasks (built-in)
* C++ (Intel): [[Threading Building Blocks]]
* C++ (Intel): [[Cilk Plus]]
* C++ (Open Source/Apache 2.0): [[RaftLib]]
* C, C++, Objective-C, Swift (Apple): [[Grand Central Dispatch]]
* D: [[Task (computing)|tasks]] and [[Fiber (computer science)|fibers]]
* Delphi (System.Threading.TParallel)
* Go: [[goroutine]]s
* Java: [[Java concurrency]]
* .NET: [[Task Parallel Library]]
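
For instance, with the standard Java concurrency utilities listed above, independent tasks can be handed to a thread pool and run in parallel; the pool size and task bodies below are arbitrary illustrative choices:

<syntaxhighlight lang="java">
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch of library-based task parallelism using a thread pool. */
public class ExecutorExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Two unrelated tasks submitted to the pool; they may run on
        // different processors at the same time.
        Future<Integer> sum = pool.submit(() -> 1 + 2 + 3);
        Future<String> text = pool.submit(() -> "hello".toUpperCase());

        System.out.println(sum.get());   // blocks until the first task is done
        System.out.println(text.get());  // blocks until the second task is done
        pool.shutdown();
    }
}
</syntaxhighlight>
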
Examples of fine-grained task-parallel languages can be found in the realm of [[Hardware Description Language]]s like [[Verilog]] and [[VHDL]].
 
==See also==
*[[Algorithmic skeleton]]
*[[Data parallelism]]
*[[Fork–join model]]
*[[Parallel programming model]]
 
==References==
{{Reflist}}
 
 
{{Parallel Computing}}
{{DEFAULTSORT:Task Parallelism}}
[[Category:Parallel computing]]
[[Category:Threads (computing)]]
 
[[ar:توازي على مستوى المهام]]
[[it:Parallelismo a livello di thread]]
[[ja:タスク並列性]]
[[simple:Task parallelism]]
[[zh:任务并行]]