Massively parallel processing


Massive parallelism is a term used in computer architecture, reconfigurable computing, application-specific integrated circuit (ASIC) design, and field-programmable gate array (FPGA) design. It signifies the presence of many independent arithmetic units or entire microprocessors that run in parallel. The term massive connotes hundreds, if not thousands, of such units. Early examples of such systems are the Distributed Array Processor, the Goodyear MPP, and the Connection Machine.
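As a rough illustration of the idea (not tied to any particular machine above), the C fragment below uses OpenMP threads as a stand-in for hardware processing elements: a single element-wise operation is divided among however many units are available, all running at the same time. The array names and sizes are arbitrary.

    /* Illustrative sketch: OpenMP threads stand in for hardware
     * processing elements; the array size and names are arbitrary. */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = (float)i;
            b[i] = 2.0f * i;
        }

        /* Every iteration is independent, so the loop can be split
         * across as many processing elements as are available. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        return 0;
    }

Built without OpenMP support, the same code simply runs on one unit; the essential point of a massively parallel machine is that it can devote hundreds or thousands of units to such a loop at once.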

Today's most powerful supercomputers are all MPP (massively parallel processing) systems, such as the Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor's Hammer.

In this class of computing, all of the processing elements are connected together to form a single very large computer. This is in contrast to distributed computing, where large numbers of separate computers are used to solve a single problem.
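A sketch of the message-passing style typical of MPP systems is given below (in C with MPI; the problem, a simple summation, and all names are purely illustrative): each processing element computes a partial result over its own slice of the data, and the partial results are combined across the machine's interconnect with a single collective operation.

    /* Illustrative sketch of MPP-style message passing: each processing
     * element sums its own share of 1..1,000,000 and the partial sums are
     * combined over the interconnect. All names and sizes are arbitrary. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which processing element am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are there in total? */

        /* Each processing element handles every size-th term. */
        long local = 0;
        for (long i = rank + 1; i <= 1000000; i += size)
            local += i;

        /* Combine the partial sums on element 0 via the interconnect. */
        long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %ld computed by %d processing elements\n", total, size);

        MPI_Finalize();
        return 0;
    }

On an MPP system every rank runs on a node of the same machine, connected by a fast dedicated interconnect, whereas in distributed computing the same pattern would be spread over separate computers linked by a comparatively slow network.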

See also