Multiple instruction, multiple data
{{Short description|Computing technique employed to achieve parallelism}}
{{Flynn's Taxonomy}}
[[Image:MIMD.svg|right|225px]]
In [[computing]], '''multiple instruction, multiple data''' ('''MIMD''') is a technique employed to achieve parallelism. Machines using MIMD have a number of [[processor core]]s that function [[asynchrony (computing)|asynchronously]] and independently. At any time, different processors may be executing different instructions on different pieces of data.

MIMD architectures may be used in a number of application areas such as [[computer-aided design]]/[[computer-aided manufacturing]], [[Computer simulation|simulation]], [[Scientific modelling|modeling]], and as [[communication switches]]. MIMD machines can be of either [[Shared memory (interprocess communication)|shared memory]] or [[distributed memory]] categories. These classifications are based on how MIMD processors access memory. Shared memory machines may be of the [[Bus network|bus-based]], extended, or [[hierarchical#Computation and electronics|hierarchical]] type. Distributed memory machines may have [[Grid network|hypercube]] or [[Mesh networking|mesh]] interconnection schemes.
 
==Examples==
An example of an MIMD system is the [[Xeon Phi|Intel Xeon Phi]], descended from the [[Larrabee (microarchitecture)|Larrabee]] microarchitecture.<ref>{{Cite web|url=http://perilsofparallel.blogspot.gr/2008/09/larrabee-vs-nvidia-mimd-vs-simd.html|title=The Perils of Parallel: Larrabee vs. Nvidia, MIMD vs. SIMD|date=19 September 2008}}</ref> These processors have multiple processing cores (up to 61 as of 2015) that can execute different instructions on different data.
 
Most parallel computers, as of 2013, are MIMD systems.<ref>{{cite web|url=http://software.intel.com/en-us/articles/mimd |title=MIMD &#124; Intel® Developer Zone |access-date=2013-10-16 |url-status=dead |archive-url=https://web.archive.org/web/20131016215430/http://software.intel.com/en-us/articles/mimd |archive-date=2013-10-16}}</ref>
 
==Shared memory model==
In the shared memory model, the processors are all connected to a "globally available" memory, via either [[software]] or hardware means. The [[operating system]] usually maintains its [[memory coherence]].<ref name="Ibaroudene-slides">Ibaroudene, Djaffer. "Parallel Processing, EG6370G: Chapter 1, Motivation and History." Lecture Slides. [[St. Mary's University, Texas|St Mary's University]], [[San Antonio, Texas]]. Spring 2008.</ref>
 
From a programmer's point of view, this memory model is better understood than the distributed memory model. Another advantage is that memory coherence is managed by the operating system and not the written program. Two known disadvantages are: scalability beyond thirty-two processors is difficult, and the shared memory model is less flexible than the distributed memory model.<ref name="Ibaroudene-slides"/>
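
The following C sketch is a minimal illustration (not drawn from any particular machine) of shared-memory MIMD: two [[POSIX Threads|POSIX threads]] execute different instruction streams, asynchronously and independently, over different parts of a single globally visible array. The array contents and thread functions are hypothetical.

<syntaxhighlight lang="c">
/* Minimal sketch of shared-memory MIMD with POSIX threads:
   two threads run *different* instruction streams on different
   parts of the same shared array. Illustrative only. */
#include <pthread.h>
#include <stdio.h>

static double data[8] = {1, 2, 3, 4, 5, 6, 7, 8}; /* shared memory */
static double sum;

/* First instruction stream: double the first half of the array. */
static void *scale(void *arg) {
    (void)arg;
    for (int i = 0; i < 4; i++)
        data[i] *= 2.0;
    return NULL;
}

/* Second, different instruction stream: sum the second half. */
static void *accumulate(void *arg) {
    (void)arg;
    for (int i = 4; i < 8; i++)
        sum += data[i];
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, scale, NULL);      /* runs asynchronously  */
    pthread_create(&t2, NULL, accumulate, NULL); /* independent stream   */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum of second half = %g\n", sum);
    return 0;
}
</syntaxhighlight>

The two threads touch disjoint halves of the array, so no locking is needed here; in general, coordinated access to shared data requires synchronization primitives such as mutexes.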
 
Examples of shared-memory multiprocessors include UMA ([[uniform memory access]]) and COMA ([[Cache-only memory architecture|cache-only memory access]]).<ref name=tanenbaum>{{cite book|author=Andrew S. Tanenbaum|author-link=Andrew S. Tanenbaum|title=Structured Computer Organization|pages=559–585|publisher=Prentice-Hall|year=1997|url=http://cwx.prenhall.com/bookbind/pubbooks/tanenbaum2/chapter0/deluxe.html|edition=4|isbn=978-0130959904|access-date=2013-03-15|archive-url=https://web.archive.org/web/20131201035507/http://cwx.prenhall.com/bookbind/pubbooks/tanenbaum2/chapter0/deluxe.html|archive-date=2013-12-01|url-status=dead}}</ref>
 
===Bus-based===
MIMD machines with shared memory have processors which share a common, central memory. In the simplest form, all processors are attached to a bus which connects them to memory; every machine with shared memory thus shares a single central memory and a common bus system among all the clients.
 
For example, if we consider a bus with clients A, B, C connected on one side and P, Q, R connected on the opposite side, any one of the clients will communicate with another by means of the bus interface between them.
 
=== Hierarchical ===
MIMD machines with hierarchical shared memory use a hierarchy of buses (as, for example, in a "[[fat tree]]") to give processors access to each other's memory. Processors on different boards may communicate through inter-nodal buses that support communication between the boards. With this type of architecture, the machine may support over nine thousand processors.
 
== Distributed memory ==
In distributed memory MIMD (multiple instruction, multiple data) machines, each processor has its own individual memory. A processor has no direct knowledge of another processor's memory; for data to be shared, it must be passed from one processor to another as a message. Since there is no shared memory, contention is not as great a problem with these machines. It is not economically feasible to connect a large number of processors directly to each other, so a way to avoid this multitude of direct connections is to connect each processor to just a few others. This type of design can be inefficient because of the added time required to pass a message from one processor to another along the message path, and the amount of time required for processors to perform simple message routing can be substantial. Systems were designed to reduce this time loss; the [[Connection Machine|hypercube]] and [[Mesh networking|mesh]] are two of the popular interconnection schemes.
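
As a minimal illustration (not tied to any particular machine), the following C sketch uses the [[Message Passing Interface|MPI]] message-passing interface: each process owns its private memory, and data is shared only by explicit send and receive operations. The value sent and the ranks used are arbitrary assumptions.

<syntaxhighlight lang="c">
/* Minimal sketch of distributed-memory MIMD with MPI: processes
   cannot read each other's memory, so data moves as messages.
   Illustrative only; run with e.g. "mpirun -np 2 ./a.out". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Process 0 computes a value in its own memory and sends it. */
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Process 1 cannot read process 0's memory; it must receive
           the value as a message over the interconnect. */
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>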
 
Examples of distributed memory (multiple computers) include [[Massively parallel (computing)|MPP (massively parallel processors)]], [[Computer cluster|COW (clusters of workstations)]] and NUMA ([[non-uniform memory access]]). The former is complex and expensive: many supercomputers coupled by broad-band networks. Examples include hypercube and mesh interconnections. COW is the "home-made" version for a fraction of the price.<ref name=tanenbaum/>
 
===Hypercube interconnection network===
In an MIMD distributed-memory machine with a hypercube interconnection network containing 2<sup>N</sup> processors, each processor is directly connected to N other processors, and the diameter of the system (the minimum number of steps needed for one processor to send a message to the processor farthest away) is N. One disadvantage of a hypercube system is that it must be configured in powers of two, so a machine may have to be built with more processors than the application really needs.
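
As a small illustration of this arithmetic (assuming the usual binary labelling of hypercube nodes), the following C sketch computes the hop count between two nodes as the number of bits in which their labels differ:

<syntaxhighlight lang="c">
/* Hypercube routing arithmetic: node labels are N-bit numbers, and
   the hop count between two nodes is the population count of the
   XOR of their labels. Illustrative sketch only. */
#include <stdio.h>

static int hypercube_hops(unsigned a, unsigned b) {
    unsigned diff = a ^ b; /* bits where the labels disagree */
    int hops = 0;
    while (diff) {
        hops += diff & 1u;
        diff >>= 1;
    }
    return hops;
}

int main(void) {
    /* In a 3-cube (8 processors), nodes 0 (000) and 7 (111) are at
       the maximum distance: 3 hops, the network diameter. */
    printf("hops(0,7) = %d\n", hypercube_hops(0u, 7u));
    return 0;
}
</syntaxhighlight>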
 
==See also==
* [[Flynn's taxonomy]]
* [[MapReduce]]
* [[Non-Uniform Memory Access|NUMA]]
* [[Symmetric multiprocessing|SMP]]
* [[SPMD]]
* [[Superscalar]]
* [[Torus interconnect]]
* [[Very long instruction word]]
 
==References==
{{reflist}}

{{CPU technologies}}
{{Parallel computing}}
{{Authority control}}
 
[[Category:Flynn's taxonomy]]