{{Short description|Parallel computing memory architecture}}
'''Uniform memory access''' ('''UMA''') is a [[shared-memory architecture]] used in [[parallel computer]]s. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory ___location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with [[non-uniform memory access]] (NUMA) architectures, in which each processor may use a private cache and peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and [[time sharing|time-sharing]] applications by multiple users. It can also be used to speed up the execution of a single large program in [[real-time computing|time-critical]] applications.<ref>{{cite book |title=Advanced Computer Architecture |author=Kai Hwang |isbn=0-07-113342-9}}</ref>
 
==Types of architectures==
There are three types of UMA architectures:
* UMA using bus-based [[symmetric multiprocessing]] (SMP) architectures;
* UMA using [[crossbar switch]]es;
* UMA using [[multistage interconnection networks]].
 
==hUMA==
In April 2013, the term hUMA (''heterogeneous uniform memory access'') began to appear in [[AMD]] promotional material to refer to [[CPU]] and [[GPU]] sharing the same system memory via [[Cache coherence|cache coherent]] views. Advantages include an easier programming model and less copying of data between separate memory pools.<ref>{{cite web |author=Peter Bright |url=https://arstechnica.com/information-technology/2013/04/amds-heterogeneous-uniform-memory-access-coming-this-year-in-kaveri/ |title=AMD's "heterogeneous Uniform Memory Access" coming this year in Kaveri |website=[[Ars Technica]] |date=April 30, 2013}}</ref>
 
==See also==
* [[Non-uniform memory access]]
* [[Cache-only memory architecture]]
* [[Heterogeneous System Architecture]]
 
==References==
{{reflist}}

{{Parallel computing}}

[[Category:Computer memory]]
[[Category:Parallel computing]]

{{computer-stub}}