Massive parallel processing (MPP) is a term used in computer architecture to refer to a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The term massive connotes hundreds, if not thousands, of such units. Early examples of such systems include the Distributed Array Processor, the Goodyear MPP, the Connection Machine, and the Ultracomputer.
For many years, many of the world's most powerful supercomputers were MPP systems.
In this class of computing, all of the processing elements are interconnected to act as one very large computer. This is in contrast to distributed computing, in which massive numbers of separate computers are used to solve a single problem.
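Programs for such systems are typically written so that the same code runs on every processing element, each working on its own slice of the data and exchanging results through explicit messages rather than shared memory. The following is a minimal sketch of that style, written here with MPI purely for illustration; MPI is not mentioned in this article and is only one of several programming models used on such machines.

```c
/* Illustrative sketch: each processing element sums part of 1..1,000,000
 * and the partial sums are combined by explicit message passing.
 * Assumes an MPI implementation (e.g., MPICH or Open MPI) is installed;
 * compile with mpicc and run with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this element's identity      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of elements     */

    /* Each processing element sums every size-th integer, starting at
     * its own rank, so the work is divided with no shared memory. */
    long long local = 0;
    for (long long i = rank + 1; i <= 1000000; i += size)
        local += i;

    /* Combine the partial sums on element 0 via explicit communication. */
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %lld (computed by %d processing elements)\n", total, size);

    MPI_Finalize();
    return 0;
}
```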
The earliest massively parallel processing systems all used serial computers as individual processing elements, in order to achieve the maximum number of independent units for a given size and cost.
As a result of the advances described by Moore's law, single-chip implementations of massively parallel processor arrays have become cost-effective and have found particular application in high-performance embedded systems such as video compression. Examples include chips from Ambric, picoChip, and Tilera.
See also
- Fifth generation computer systems project
- Massively parallel
- Massively parallel processor array
- Multiprocessing
- Parallel computing
- Process oriented programming
- Shared nothing architecture
- Symmetric multiprocessing