Parallel computing
{{Programming paradigms}}'''Parallel computing''' is a form of [[computing|computation]] in which many calculations are carried out simultaneously,<ref>Almasi, G.S. and A. Gottlieb (1989). [http://portal.acm.org/citation.cfm?id=1011116.1011127 ''Highly Parallel Computing'']. Benjamin-Cummings Publishers, Redwood City, CA.</ref> operating on the principle that large problems can often be divided into smaller ones, which are then solved [[Concurrency (computer science)|concurrently]] ("in parallel"). There are several different forms of parallel computing: [[bit-level parallelism|bit-level]], [[instruction level parallelism|instruction level]], [[data parallelism|data]], and [[task parallelism]]. Parallelism has been employed for many years, mainly in [[high performance computing|high-performance computing]], but interest in it has grown lately due to the physical constraints preventing [[frequency scaling]].<ref>S.V. Adve et al. (November 2008). [http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits – increased clock frequency and smarter but increasingly complex architectures – are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>Asanovic, Krste et al. (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are 'free'."</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[Multi-core (computing)|multicore processor]]s.<ref name="View-Power">Asanovic, Krste et al. (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance&nbsp;... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."</ref>
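The following is a minimal sketch of this divide-and-combine principle, written here with POSIX threads; the array size, thread count, and the <code>sum_chunk</code> helper are illustrative choices, not part of any standard example. A large summation is split into independent chunks whose partial sums are computed concurrently and then combined.

<syntaxhighlight lang="c">
/* Minimal sketch of data parallelism with POSIX threads: a large
 * summation is divided into chunks computed concurrently.
 * N and NTHREADS are arbitrary illustrative values. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];
static double partial[NTHREADS];

static void *sum_chunk(void *arg) {
    long t = (long)arg;
    long lo = t * (N / NTHREADS);
    long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;  /* each thread writes only its own slot, so no conflict */
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t threads[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, sum_chunk, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);   /* wait, then combine partial results */
        total += partial[t];
    }
    printf("sum = %f\n", total);  /* expect 1000000.0 */
    return 0;
}
</syntaxhighlight>

The chunks here are independent, which is what makes the problem easy to parallelize; problems whose subtasks depend on each other require the communication and synchronization discussed below.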
 
Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. [[Multi-core]] and [[Symmetric multiprocessing|multi-processor]] computers have multiple processing elements within a single machine, while [[Computer cluster|clusters]], [[Massive parallel processing|MPPs]], and [[Grid computing|grids]] use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.
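As a small illustration of how software discovers the hardware parallelism available to it, the following sketch queries the number of processing elements currently online. It assumes the <code>_SC_NPROCESSORS_ONLN</code> extension to POSIX <code>sysconf</code>, which is common on Linux and BSD systems but not guaranteed everywhere.

<syntaxhighlight lang="c">
/* Sketch: query the number of processing elements visible to a program.
 * _SC_NPROCESSORS_ONLN is a widespread glibc/BSD extension,
 * not a strict POSIX requirement. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1) {
        perror("sysconf");
        return 1;
    }
    printf("%ld processing elements online\n", n);
    return 0;
}
</syntaxhighlight>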
 
[[Parallel algorithm|Parallel computer programs]] are more difficult to write than sequential ones,<ref>[[David A. Patterson (scientist)|Patterson, David A.]] and [[John L. Hennessy]] (1998). ''Computer Organization and Design'', Second Edition, Morgan Kaufmann Publishers, p.&nbsp;715. ISBN 1558604286.</ref> because concurrency introduces several new classes of potential [[software bug]]s, of which [[race condition]]s are the most common. [[Computer networking|Communication]] and [[Synchronization (computer science)|synchronization]] between the different subtasks are typically among the greatest obstacles to getting good parallel program performance.
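The following sketch (again assuming POSIX threads; the iteration count is arbitrary) shows the classic race condition: two threads increment a shared counter, and each increment is a read-modify-write sequence whose steps can interleave between threads. The mutex serializes those sequences; removing it typically yields a total well below the expected value, and a different one on each run, which is part of what makes such bugs hard to reproduce.

<syntaxhighlight lang="c">
/* Sketch of the most common concurrency bug, a race condition:
 * two threads increment a shared counter. */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);   /* remove this pair to observe lost updates */
        counter++;                   /* read, modify, write: not atomic by itself */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* With the mutex the result is deterministic: 2000000. */
    printf("counter = %ld\n", counter);
    return 0;
}
</syntaxhighlight>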