Reduce (parallel pattern)
[[MapReduce]] relies heavily on efficient reduction algorithms to process big data sets, even on huge clusters.<ref>{{Cite journal|last=Lämmel|first=Ralf|title=Google's MapReduce programming model — Revisited|url=https://doi.org/10.1016/j.scico.2007.07.001|journal=Science of Computer Programming|volume=70|issue=1|pages=1–30|doi=10.1016/j.scico.2007.07.001|year=2008}}</ref><ref>{{Cite journal|last=Senger|first=Hermes|last2=Gil-Costa|first2=Veronica|last3=Arantes|first3=Luciana|last4=Marcondes|first4=Cesar A. C.|last5=Marín|first5=Mauricio|last6=Sato|first6=Liria M.|last7=da Silva|first7=Fabrício A.B.|date=2016-06-10|title=BSP cost and scalability analysis for MapReduce operations|url=http://onlinelibrary.wiley.com/doi/10.1002/cpe.3628/abstract|journal=Concurrency and Computation: Practice and Experience|language=en|volume=28|issue=8|pages=2503–2527|doi=10.1002/cpe.3628|issn=1532-0634}}</ref>
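The reduction underlying such systems can be illustrated with a minimal Python sketch (not taken from any particular MapReduce implementation): given an associative combining operator, partial results are combined pairwise in rounds, so with enough processors the number of rounds grows logarithmically rather than linearly in the input size.

```python
import operator

def tree_reduce(values, combine=operator.add):
    # Pairwise (tree-shaped) reduction: each round combines neighbouring
    # partial results, halving their number. A serial fold needs n - 1
    # sequential steps; with p >= n/2 processors the rounds here could
    # run in parallel, finishing in O(log n) time.
    values = list(values)
    while len(values) > 1:
        paired = [combine(values[i], values[i + 1])
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:      # an odd leftover carries into the next round
            paired.append(values[-1])
        values = paired
    return values[0]

print(tree_reduce(range(1, 9)))  # 36, the same result as a serial sum
```

Because the rounds only assume associativity of the operator, the same skeleton works for sums, maxima, or concatenations, which is what lets MapReduce apply one reduction machinery to many aggregation tasks.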
 
Some parallel [[Sorting algorithm|sorting]] algorithms use reductions to handle very large data sets.<ref>{{Cite arXiv|last=Axtmann|first=Michael|last2=Bingmann|first2=Timo|last3=Sanders|first3=Peter|last4=Schulz|first4=Christian|date=2014-10-24|title=Practical Massively Parallel Sorting|eprint=1410.6754}}</ref>
 
== References ==