Parallel computing

[[File:IBM Blue Gene P supercomputer.jpg|thumb|300px|IBM's [[Blue Gene|Blue Gene/P]] [[massively parallel]] [[supercomputer]].]]
 
'''Parallel computing''' is a type of [[computing|computation]] in which many calculations or the execution of [[Process (computing)|process]]es are carried out simultaneously.<ref>{{cite book|last=Gottlieb|first=Allan|title=Highly parallel computing|year=1989|publisher=Benjamin/Cummings|___location=Redwood City, Calif.|isbn=0-8053-0177-1|url=http://dl.acm.org/citation.cfm?id=160438|author2=Almasi, George S.}}</ref> Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: [[Bit-level parallelism|bit-level]], [[Instruction-level parallelism|instruction-level]], [[Data parallelism|data]], and [[task parallelism]]. Parallelism has been employed for many years, mainly in [[high performance computing|high-performance computing]], but interest in it has grown lately due to the physical constraints preventing [[frequency scaling]].<ref>S.V. Adve ''et al.'' (November 2008). [http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The [[computer industry]] has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>Asanovic ''et al.'' Old [conventional wisdom]: Power is free, but [[transistor]]s are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[multi-core processor]]s.<ref name="View-Power">Asanovic, Krste ''et al.'' (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance… Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limits."</ref>
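
A minimal sketch (not drawn from the article's sources) of how a large problem can be divided into smaller pieces that run at the same time, here using C with OpenMP as one common approach; the array name, its size, and the choice of OpenMP are illustrative assumptions:

<syntaxhighlight lang="c">
/* Illustrative sketch only: summing a large array by dividing the loop
   iterations among threads (a simple form of data parallelism).
   Array name, size, and the use of OpenMP are assumptions for this example. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;                 /* fill with sample data */

    /* Each thread sums a disjoint chunk of iterations; reduction(+:sum)
       combines the per-thread partial sums. Compile with, e.g.,
       gcc -fopenmp; without OpenMP support the pragma is ignored and
       the loop simply runs serially, producing the same result. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f\n", sum);
    return 0;
}
</syntaxhighlight>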
 
Parallel computing is closely related to [[concurrent computing]]—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by [[time-sharing]] on a single-core CPU).<ref name=waza>"Concurrency is not Parallelism", ''Waza conference'' Jan 11, 2012, [[Rob Pike]] ([http://talks.golang.org/2012/waza.slide slides]) ([http://vimeo.com/49718712 video])</ref><ref>{{cite web