Massive parallel processing


Massive parallel processing (MPP) is a term used in computer architecture to refer to a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The term massive connotes hundreds if not thousands of such units. Early examples of such systems include the Distributed Array Processor, the Goodyear MPP, the Connection Machine, and the Ultracomputer.

Today's most powerful supercomputers are all MPP systems, such as Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor's Hammer.

In this class of computing, all of the processing elements are connected together so as to act as one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem.
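As an illustrative sketch (not part of the original article), the following minimal C program uses MPI, a common message-passing interface for such machines, to show the typical programming model: every processing element runs the same program on its own slice of the data, and the partial results are combined over the machine's interconnect into one answer. The use of MPI and the summation workload are assumptions made for illustration only; the article does not name a specific programming interface.

    #include <mpi.h>
    #include <stdio.h>

    /* Each processing element (MPI rank) computes a partial sum of its own
       slice of the range 0..999999; the partial sums are then combined into
       a single result, illustrating many processors cooperating on one problem. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this element's index        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of elements    */

        long local = 0, total = 0;
        for (long i = rank; i < 1000000; i += size)
            local += i;                          /* work on this rank's slice   */

        /* Combine all partial sums on rank 0 over the interconnect. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %ld\n", total);

        MPI_Finalize();
        return 0;
    }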

The earliest massively parallel processing systems all used simple serial processors as their individual processing elements, in order to achieve the maximum number of independent units for a given size and cost.

Through the continued advances described by Moore's law, system-on-chip (SoC) implementations of massively parallel architectures are becoming cost-effective, and are finding particular application in high-performance embedded systems such as video compression. Examples include chips from Ambric, picoChip, and Tilera.

See also