{{Short description|Computing technique employed to achieve parallelism}}
[[Image:MIMD.svg|right|225px]]
In [[computing]], '''multiple instruction, multiple data''' ('''MIMD''') is a technique employed to achieve parallelism. Machines using MIMD have a number of [[Processor (computing)|processors]] that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data.
MIMD architectures may be used in a number of application areas such as [[computer-aided design]]/[[computer-aided manufacturing]], [[Computer simulation|simulation]], [[Scientific modelling|modeling]], and as [[communication switches]]. MIMD machines can be of either [[Shared memory (interprocess communication)|shared memory]] or [[distributed memory]] categories. These classifications are based on how MIMD processors access memory. Shared memory machines may be of the [[Bus network|bus-based]], extended, or [[hierarchical#Computation and electronics|hierarchical]] type. Distributed memory machines may have [[Grid network|hypercube]] or [[Mesh networking|mesh]] interconnection schemes.
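As a minimal software sketch of this idea (a hypothetical C++ example, not drawn from any particular machine described above), two threads can execute entirely different instruction streams on different data at the same time:

<syntaxhighlight lang="cpp">
// Hypothetical sketch of MIMD-style parallelism in a shared-memory program:
// two threads run different instruction streams on different data,
// asynchronously and independently.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::vector<int> numbers{1, 2, 3, 4, 5};    // data for the first stream
    std::string text = "multiple instruction";  // data for the second stream

    long long sum = 0;
    std::size_t vowels = 0;

    // First instruction stream: numeric reduction over the vector.
    std::thread summer([&] {
        sum = std::accumulate(numbers.begin(), numbers.end(), 0LL);
    });

    // Second instruction stream: count vowels in the string.
    std::thread counter([&] {
        for (char c : text)
            if (std::string("aeiou").find(c) != std::string::npos)
                ++vowels;
    });

    summer.join();
    counter.join();
    std::cout << "sum=" << sum << " vowels=" << vowels << "\n";
}
</syntaxhighlight>

Each thread here stands in for one processor's independent instruction stream; on an MIMD machine the two streams can run on different cores simultaneously.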
An example of an MIMD system is [[Xeon Phi|Intel Xeon Phi]], descended from the [[Larrabee (microarchitecture)|Larrabee]] microarchitecture.<ref>{{Cite web|url=http://perilsofparallel.blogspot.gr/2008/09/larrabee-vs-nvidia-mimd-vs-simd.html|title=The Perils of Parallel: Larrabee vs. Nvidia, MIMD vs. SIMD|date=19 September 2008}}</ref> These processors have multiple processing cores (up to 61 as of 2015) that can execute different instructions on different data.
Most parallel computers, as of 2013, are MIMD systems.<ref>{{cite web|url=http://software.intel.com/en-us/articles/mimd |title=MIMD}}</ref>
==Shared memory model==
== Distributed memory ==
In distributed memory MIMD machines, each processor has its own individual memory. A processor has no direct knowledge of another processor's memory, so for data to be shared it must be passed from one processor to another as a message.
Examples of distributed memory (multiple computers) include [[Massively parallel (computing)|MPP (massively parallel processors)]], [[Computer cluster|COW (clusters of workstations)]] and NUMA ([[non-uniform memory access]]). MPP systems are complex and expensive: many supercomputers coupled by broadband networks, with hypercube and mesh interconnections as typical examples. A COW is the "home-made" version built for a fraction of the price.<ref name=tanenbaum/>
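As a minimal sketch of the message-passing style used on such machines (a hypothetical example written against the MPI interface; the process roles and values are illustrative assumptions), two processes with separate memories share data only through explicit messages:

<syntaxhighlight lang="cpp">
// Hypothetical sketch of distributed-memory MIMD with message passing (MPI).
// Each process owns its own memory; data is shared only by explicit messages.
// Rank 0 and rank 1 execute different code paths on different data.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // Instruction stream of process 0: produce a value and send it.
        int local_result = 42;  // exists only in process 0's memory
        MPI_Send(&local_result, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Instruction stream of process 1: receive the value as a message.
        int received = 0;
        MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::printf("process 1 received %d from process 0\n", received);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Such a program is typically started under an MPI launcher (for example, <code>mpirun -np 2</code>), and every process keeps its own private copy of each variable.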
==See also==
* [[Flynn's taxonomy]]
* [[MapReduce]]
* [[Non-Uniform Memory Access|NUMA]]
* [[Symmetric multiprocessing|SMP]]
* [[SPMD]]
* [[Superscalar]]
* [[Torus interconnect]]
* [[Very long instruction word]]