Massive parallel processing


Massive parallel processing (MPP) is a term used in computer architecture to refer to a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The term "massive" connotes hundreds, if not thousands, of such units. Early examples of such systems include the Distributed Array Processor, the Goodyear MPP, the Connection Machine, and the Ultracomputer.

For a time, many of the most powerful supercomputers were MPP systems.

In this class of computing, all of the processing elements are connected together to form a single very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem.
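The article does not name a particular programming model, but message passing (for example, MPI) is one common way such tightly coupled processing elements are programmed. The following is a minimal illustrative sketch, not taken from the article: each processing element sums its own slice of a range of integers, and the partial results are combined over the interconnect. The problem chosen and the constant N are assumptions made purely for illustration.

```c
/* Illustrative sketch: a data-parallel sum spread across many
 * processing elements using MPI message passing.
 * Build with an MPI compiler wrapper, e.g. mpicc, and launch with
 * mpirun -np <number of processing elements>. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this element's index      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processing elements */

    /* Each element sums its own slice of the range [0, N).
     * N is an arbitrary value chosen for this example. */
    const long N = 1000000;
    long local = 0;
    for (long i = rank; i < N; i += size)
        local += i;

    /* Combine the partial sums from all elements on rank 0. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %ld\n", total);

    MPI_Finalize();
    return 0;
}
```

In a distributed-computing setting the same decomposition would instead be split across many separate machines communicating over a general-purpose network, whereas on an MPP system all ranks run on processing elements within one machine connected by a dedicated interconnect.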

The earliest massively parallel processing systems all used serial computers as individual processing elements, in order to achieve the maximum number of independent units for a given size and cost.

Through continued advances in line with Moore's law, single-chip implementations of massively parallel processor arrays are becoming cost-effective and are finding particular application in high-performance embedded systems such as video compression. Examples include chips from Ambric, picoChip, and Tilera.

See also