Multiple instruction, multiple data
In distributed memory MIMD machines, each processor has its own private memory. A processor has no direct knowledge of any other processor's memory; for data to be shared, it must be passed from one processor to another as a message. Since there is no shared memory, contention is not as great a problem with these machines. It is not economically feasible to connect a large number of processors directly to each other, so one way to avoid this multitude of direct connections is to connect each processor to just a few others. This type of design can be inefficient because of the added time required to pass a message from one processor to another along the message path, and the time processors spend simply routing messages can be substantial. Interconnection networks were designed to reduce this time loss, and the [[Connection Machine|hypercube]] and [[Mesh networking|mesh]] are two of the most popular interconnection schemes.
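
On such a machine, a data transfer between two processors is written explicitly as a send and a matching receive. The following minimal sketch uses the [[Message Passing Interface|MPI]] library in C; the rank numbers and the value being sent are illustrative only, and the message may traverse intermediate nodes of the interconnection network before delivery.

<syntaxhighlight lang="c">
/* Minimal illustrative sketch: sharing data by message passing on a
   distributed-memory MIMD machine. Process 0 sends an integer to
   process 1, which cannot read process 0's memory directly. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's identity */

    if (rank == 0) {
        value = 42;  /* data exists only in process 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the data arrives as a message, not through shared memory */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Such a program is typically launched with one process per processor, for example with <code>mpirun -np 2 ./a.out</code>; every process runs the same executable but operates only on its own memory.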
 
Examples of distributed memory (multiple computers) include [[Massively parallel (computing)|MPP (massively parallel processors)]], [[Computer cluster|COW (clusters of workstations)]] and NUMA ([[non-uniform memory access]]). The first is complex and expensive: many supercomputers coupled by broadband networks. Examples include hypercube and mesh interconnections. COW is the "home-made" version for a fraction of the price.<ref name=tanenbaum/>
 
===Hypercube interconnection network===