==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a [[workflow]].<ref>{{cite book|last1=Quinn|first1=Michael J.|title=Parallel Programming in C with MPI and OpenMP|date=2007|publisher=Tata McGraw-Hill Pub.|___location=New Delhi|isbn=978-0070582019|edition=Tata McGraw-Hill}}</ref>
As a simple example, consider a program running on a 2-processor system ([[CPU]]s "a" and "b") in a [[wikt:parallel|parallel]] environment, with two tasks "A" and "B" to perform. It is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the [[Run time (program lifecycle phase)|run time]] of the execution. The tasks can be assigned using [[Conditional (programming)|conditional statement]]s, as in the sketch below.
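The following is a minimal sketch in C, assuming an [[Message Passing Interface|MPI]] environment launched with two processes; the functions <code>task_A</code> and <code>task_B</code> are hypothetical placeholders standing in for the real work. Each process inspects its own rank and uses a conditional statement to branch to a different task, so the two tasks execute simultaneously on different processors.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <mpi.h>

/* Hypothetical placeholder tasks; a real program would do useful work here. */
static void task_A(void) { printf("Task A running\n"); }
static void task_B(void) { printf("Task B running\n"); }

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Assign a different task to each process with a conditional statement:
       rank 0 plays the role of CPU "a", rank 1 the role of CPU "b". */
    if (rank == 0) {
        task_A();
    } else if (rank == 1) {
        task_B();
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Compiled with an MPI compiler wrapper and launched with two processes (for example, <code>mpicc example.c && mpirun -np 2 ./a.out</code>), rank 0 performs task "A" while rank 1 performs task "B" at the same time.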