Massive parallel processing

 
In this class of computing, all of the processing elements are connected together to form one very large computer. This is in contrast to [[distributed computing]], where large numbers of separate computers are used to solve a single problem.
 
The earliest massively parallel processing systems all used [[serial computer]]s as individual processing elements, in order to achieve the maximum number of independent units for a given size and cost.
 
Through advances described by [[Moore's Law]], [[System-on-Chip]] (SoC) implementations of massively parallel architectures are becoming cost-effective, and are finding particular use in high-performance [[embedded systems]] applications such as [[video compression]]. Examples include chips from [[Ambric]], [[Picochip|picoChip]], and [[Tilera]].