Map (parallel pattern)
'''Map''' is an [[Programming idiom|idiom]] in [[parallel computing]] where a simple operation is applied to all elements of a sequence, potentially in parallel.<ref>{{cite conference |last1=Samadi |first1=Mehrzad |first2=Davoud Anoushe |last2=Jamshidi |first3=Janghaeng |last3=Lee |first4=Scott |last4=Mahlke |title=Paraprox: Pattern-based approximation for data parallel applications |conference=Proc. 19th Int'l Conf. on Architectural support for programming languages and operating systems |url=http://cccp.eecs.umich.edu/papers/samadi-asplos14.pdf |doi=10.1145/2541940.2541948 |year=2014}}</ref> It is used to solve [[embarrassingly parallel]] problems: problems that can be decomposed into independent subtasks, requiring no communication or synchronization between the subtasks except a [[Fork–join model|join]] or [[Barrier (computer science)|barrier]] at the end.
 
When applying the map pattern, one formulates an ''elemental function'' that captures the operation to be performed on a data item that represents a part of the problem, then applies this elemental function in one or more [[Thread (computing)|threads of execution]], [[hyperthread]]s, [[SIMD lanes]] or on [[distributed computing|multiple computers]].
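For illustration, the pattern described above can be sketched in Python using the standard-library <code>concurrent.futures</code> module; the <code>square</code> elemental function and the choice of four worker threads are illustrative, not part of any particular implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """Elemental function: transforms one data item, independent of all others."""
    return x * x

data = list(range(8))

# The map pattern applies the elemental function to every element of the
# sequence; the executor distributes the calls across worker threads, and
# collecting the results acts as the implicit join/barrier at the end.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the subtasks are independent, the same elemental function could equally be dispatched to processes, SIMD lanes, or remote machines without changing its definition.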
==See also==
* [[Map (higher-order function)]]
* [[Functional programming]]
* [[Algorithmic skeleton]]
 
==References==