Map (parallel pattern)
When applying the map pattern, one formulates an ''elemental function'' that captures the operation to be performed on a single data item representing part of the problem, then applies this elemental function in one or more [[Thread (computing)|threads of execution]], [[hyperthread]]s, [[SIMD lanes]], or on [[distributed computing|multiple computers]].
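As a minimal sketch of the idea in C (the function names here are illustrative, not drawn from any particular library), an elemental function operates on one item, and the map applies it to every element of a collection:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Elemental function: operates on a single data item. */
static double square(double x) {
    return x * x;
}

/* Map: apply the elemental function to every element.
   Because iterations share no state, each one may be assigned to a
   separate thread, SIMD lane, or machine. */
void map_square(const double *in, double *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        out[i] = square(in[i]);
    }
}
</syntaxhighlight>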
 
Some parallel programming systems, such as [[OpenMP]] and [[Cilk]], have language support for the map pattern in the form of a '''parallel for loop''';<ref>{{cite web |title=Compilers and More: The Past, Present and Future of Parallel Loops |first=Michael |last=Wolfe |date=6 April 2015 |website=HPCwire |url=http://www.hpcwire.com/2015/04/06/compilers-and-more-the-past-present-and-future-of-parallel-loops/}}</ref> languages such as [[OpenCL]] and [[CUDA]] support elemental functions (as "[[Compute kernel|kernels]]") at the language level. The map pattern is typically combined with other parallel design patterns. For example, map combined with category reduction gives the [[MapReduce]] pattern.<ref name="spp">{{cite book |author1=Michael McCool |author2=James Reinders |author3=Arch Robison |title=Structured Parallel Programming: Patterns for Efficient Computation |isbn= 978-0124159938|publisher=Elsevier |year=2013}}</ref>{{rp|106–107}}
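For instance, the serial loop sketched above can be expressed with OpenMP's parallel for construct (a minimal sketch; scheduling clauses and error handling are omitted, and the code must be compiled with OpenMP support, e.g. <code>gcc -fopenmp</code>):

<syntaxhighlight lang="c">
#include <stddef.h>

static double square(double x) {
    return x * x;
}

/* OpenMP expresses the map pattern as a parallel for loop:
   the runtime distributes the independent iterations across threads. */
void map_square_parallel(const double *in, double *out, size_t n) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++) {
        out[i] = square(in[i]);
    }
}
</syntaxhighlight>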
 
==See also==