Massive parallel processing

Massive parallelism is a term used in computer architecture, reconfigurable computing, and application-specific integrated circuit (ASIC) and field-programmable gate array (FPGA) design. It signifies the presence of many independent arithmetic units or processing elements that run in parallel. Early examples of such systems are the Distributed Array Processor, the Goodyear MPP, and the Connection Machine.

Today's most powerful supercomputers are all massively parallel processing (MPP) systems, such as the Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor's Hammer.

In this class of computing, all of the processing elements are connected together to form one very large computer. This is in contrast to distributed computing, where massive numbers of separate computers are used to solve a single problem. A minimal sketch of this programming style follows.
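The following is a minimal illustrative sketch, not part of the original article, of the single-program, multiple-data (SPMD) style typically used on MPP systems: every processing element runs the same program on its own slice of the data, and partial results are combined over the machine's interconnect. It is written against the standard MPI interface as an assumed example of such a programming model; the problem size N is an arbitrary illustrative value.

/* Illustrative SPMD sketch using MPI (an assumption; the article names no
 * specific programming interface). Each processing element sums its own
 * slice of 0..N-1 and the partial sums are combined at rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    const long N = 1000000;              /* illustrative problem size */
    long local_sum = 0, global_sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this processing element's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of elements    */

    /* Each processing element handles a strided portion of the range. */
    for (long i = rank; i < N; i += size)
        local_sum += i;

    /* Combine the partial sums across the interconnect onto rank 0. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 0..%ld = %ld\n", N - 1, global_sum);

    MPI_Finalize();
    return 0;
}

On an MPP machine the same executable would be launched on every processing element; only the rank differs, which is what distinguishes this tightly coupled model from a loosely coupled distributed-computing setup.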

See also