Symmetric multiprocessing

[[File:SMP - Symmetric Multiprocessor System.svg|thumb|upright=2|Diagram of a symmetric multiprocessing system]]
'''Symmetric multiprocessing''' or '''shared-memory multiprocessing'''<ref>{{cite book |last1=Patterson |first1=David |last2=Hennessy |first2=John |author-link1=David Patterson (computer scientist) |author-link2=John L. Hennessy |date=2018 |title=Computer Organization and Design: The Hardware/Software Interface |___location=Cambridge, United States |publisher=Morgan Kaufmann |page=509 |isbn=978-0-12-812275-4|edition=RISC-V }}</ref> ('''SMP''') involves a [[multiprocessor]] computer hardware and software architecture where two or more identical processors are connected to a single, shared [[main memory]], have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of [[multi-core processor]]s, the SMP architecture applies to the cores, treating them as separate processors.
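
As a minimal illustration (a sketch not drawn from the cited sources, assuming a Linux system with glibc, where <code>sched_getcpu()</code> is available), the following C program asks the single operating system image how many processors are online and starts one thread per processor; because the OS treats all processors equally, any thread may be scheduled on any of them:
<syntaxhighlight lang="c">
/* Sketch: one OS instance exposes all identical processors to every process.
   Build with: cc -pthread smp.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>

static void *report(void *arg) {
    (void)arg;
    /* sched_getcpu() is a glibc/Linux extension; the thread may land on any core. */
    printf("thread running on CPU %d of %ld\n",
           sched_getcpu(), sysconf(_SC_NPROCESSORS_ONLN));
    return NULL;
}

int main(void) {
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* processors visible to the single OS image */
    pthread_t t[64];
    if (n > 64) n = 64;
    for (long i = 0; i < n; i++)
        pthread_create(&t[i], NULL, report, NULL);
    for (long i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    return 0;
}
</syntaxhighlight>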
 
Professor John D. Kubiatowicz considers that, traditionally, SMP systems contain processors without caches.<ref>{{cite conference|url=https://parlab.eecs.berkeley.edu/2013bootcampagenda|conference=2013 Short Course on Parallel Programming|author=John Kubiatowicz|title=Introduction to Parallel Architectures and Pthreads}}</ref> In their 1998 book ''Parallel Computer Architecture: A Hardware/Software Approach'', Culler and Pal Singh write: "The term SMP is widely used but causes a bit of confusion. [...] The more precise description of what is intended by SMP is a shared memory multiprocessor where the cost of accessing a memory ___location is the same for all processors; that is, it has uniform access costs when the access actually is to memory. If the ___location is cached, the access will be faster, but cache access times and memory access times are the same on all processors."<ref>{{cite book|isbn=978-1-55860-343-1|author1=David Culler|author-link1=David Culler|author2=Jaswinder Pal Singh|author3=Anoop Gupta|title=Parallel Computer Architecture: A Hardware/Software Approach|url=https://books.google.com/books?id=MHfHC4Wf3K0C&pg=PA32|page=47|year=1999|publisher=[[Morgan Kaufmann]]}}</ref>
 
SMP systems are ''[[multiprocessing#Processor coupling|tightly coupled multiprocessor]] systems'' with a pool of homogeneous processors running independently of each other. Each processor, executing different programs and working on different sets of data, can share common resources (memory, I/O devices, the interrupt system, and so on) that are connected using a [[system bus]] or a [[crossbar switch|crossbar]].
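
A short sketch of what sharing a single main memory means in practice (assuming POSIX threads; not taken from the cited sources): the two threads below may execute on different processors, yet both read and write the same memory ___location, coordinating through a mutex:
<syntaxhighlight lang="c">
/* Build with: cc -pthread shared.c */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                     /* lives in the single shared main memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* serialize access to the shared ___location */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, work, NULL);
    pthread_create(&b, NULL, work, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);      /* 200000: both processors saw one memory */
    return 0;
}
</syntaxhighlight>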
 
== Design ==
SMP systems have centralized [[Shared memory architecture|shared memory]] called ''main memory'' (MM) operating under a single [[operating system]] with two or more homogeneous processors. Usually, each processor has an associated private high-speed memory known as [[cache memory]] (or cache) to speed up main memory data access and to reduce system bus traffic.
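
The interaction between private caches and the shared bus can be made concrete with a sketch (not from the cited sources; it assumes 64-byte cache lines, a common but not universal size). Because each processor's cache holds whole lines, two threads that update adjacent variables force the line to bounce between caches ("false sharing"), generating exactly the bus traffic caches are meant to avoid; padding each counter onto its own line keeps the data resident in each processor's cache:
<syntaxhighlight lang="c">
/* Build with: cc -O2 -pthread padding.c */
#include <pthread.h>
#include <stdio.h>

/* Assumed 64-byte line: the padding guarantees the two counters
   can never occupy the same cache line. */
struct padded { long value; char pad[64 - sizeof(long)]; };

static struct padded counters[2];

static void *bump(void *arg) {
    struct padded *c = arg;
    for (long i = 0; i < 50000000; i++)
        c->value++;                 /* stays in this processor's cache */
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, bump, &counters[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("%ld %ld\n", counters[0].value, counters[1].value);
    return 0;
}
</syntaxhighlight>
Removing the padding (so both counters share one line) typically slows the program down noticeably on a multiprocessor, which illustrates why cache behavior matters for system bus traffic.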
 
Processors may be interconnected using buses, [[crossbar switch]]es or on-chip mesh networks. The bottleneck in the scalability of SMP using buses or crossbar switches is the bandwidth and power consumption of the interconnect among the various processors, the memory, and the disk arrays. Mesh architectures avoid these bottlenecks, and provide nearly linear scalability to much higher processor counts at the sacrifice of programmability:
 
<blockquote>Serious programming challenges remain with this kind of architecture because it requires two distinct modes of programming; one for the CPUs themselves and one for the interconnect between the CPUs. A single programming language would have to be able to not only partition the workload, but also comprehend the memory locality, which is severe in a mesh-based architecture.<ref name="AutoMQ-1"/></blockquote>