{{Short description|Parallel computing memory architecture}}
'''Uniform memory access''' ('''UMA''') is a [[shared-memory architecture]] used in [[parallel computer]]s. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, the access time to a memory ___location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with [[non-uniform memory access]] (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and [[time sharing|time-sharing]] applications by multiple users. It can be used to speed up the execution of a single large program in [[real-time computing|time-critical]] applications.<ref>{{cite book |title=Advanced Computer Architecture |author=Kai Hwang |isbn=0-07-113342-9}}</ref>
 
==Types of UMA architectures==
There are three types of UMA architectures:
# UMA using bus-based [[symmetric multiprocessing]] (SMP) architectures;
# UMA using [[crossbar switch]]es;
# UMA using [[multistage interconnection networks]].
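As an illustration (not drawn from the cited sources), the shared-memory model underlying all three UMA variants can be sketched in Python: every thread operates on the same array in a single address space, and any thread may read or write any ___location, rather than working on a private copy.

```python
# Sketch of the shared-memory model: one array visible to all threads,
# with no per-thread copy; any thread can write to any ___location.
import threading

shared = [0] * 8  # single shared array in one address space

def worker(i):
    shared[i] = i * i  # each thread writes directly into shared memory

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In a UMA machine, each such access would take the same time regardless of which processor issues it; under NUMA, the latency would instead depend on which node holds the ___location.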
 
==hUMA==
In April 2013, the term hUMA (''heterogeneous uniform memory access'') began to appear in [[AMD]] promotional material to refer to [[CPU]] and [[GPU]] sharing the same system memory via [[Cache coherence|cache coherent]] views. Advantages include an easier programming model and less copying of data between separate memory pools.<ref>{{cite web |author=Peter Bright |url=https://arstechnica.com/information-technology/2013/04/amds-heterogeneous-uniform-memory-access-coming-this-year-in-kaveri/ |title=AMD's "heterogeneous Uniform Memory Access" coming this year in Kaveri |website=[[Ars Technica]] |date=April 30, 2013}}</ref>
 
==See also==
* [[Non-uniform memory access]]
* [[Cache-only memory architecture]]
* [[Heterogeneous System Architecture]]
 
==References==
{{reflist}}
 
{{Parallel computing}}
[[Category:Computer memory]]
[[Category:Parallel computing]]
 
{{compu-hardware-stub}}
 