{{Short description|Parallel computing memory architecture}}
'''Uniform memory access''' ('''UMA''') is a [[shared-memory architecture]] used in [[parallel computer]]s. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory ___location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with [[non-uniform memory access]] (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and [[time sharing|time-sharing]] applications by multiple users. It can be used to speed up the execution of a single large program in [[real-time computing|time-critical]] applications.<ref>{{cite book |title=Advanced Computer Architecture |author=Kai Hwang |isbn=0-07-113342-9}}</ref>
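The uniformity of access time can be illustrated with a rough, Linux-specific sketch (it assumes the GNU <code>pthread_setaffinity_np</code> extension and at least four CPUs; the array size, CPU count and timing method are illustrative choices, not part of any standard). A thread is pinned to successive CPUs and times a walk over one shared array; on a UMA machine the reported times should be roughly equal no matter which CPU runs the thread, whereas on a NUMA machine they would vary with the distance between the CPU and the memory holding the array.

<syntaxhighlight lang="c">
#define _GNU_SOURCE           /* for pthread_setaffinity_np and CPU_SET */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define N (1 << 22)           /* illustrative size: 32 MB of longs      */
static long shared[N];        /* one memory region shared by all CPUs   */

/* Pin the calling thread to the CPU given as the argument, walk the
   shared array once and report the elapsed time.  The numbers are only
   a coarse indication: prefetching and caching dominate, so treat this
   as an illustration of the idea, not a benchmark.                      */
static void *probe(void *arg) {
    int cpu = (int)(long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long sum = 0;
    for (long i = 0; i < N; ++i) sum += shared[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("CPU %d: %.2f ms (sum=%ld)\n", cpu, ms, sum);
    return NULL;
}

int main(void) {
    for (int cpu = 0; cpu < 4; ++cpu) {     /* assumes at least 4 CPUs */
        pthread_t t;
        pthread_create(&t, NULL, probe, (void *)(long)cpu);
        pthread_join(&t, NULL);             /* measure one CPU at a time */
    }
    return 0;
}
</syntaxhighlight>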
 
==Types of architectures==
There are three types of UMA architectures (the bus-based and crossbar variants are contrasted in the sketch after the list):
* UMA using bus-based [[symmetric multiprocessing]] (SMP) architectures;
* UMA using [[crossbar switch]]es;
* UMA using [[multistage interconnection networks]].
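
The difference between the first two interconnects can be shown with a toy model; the processor and module counts and the request patterns below are illustrative and do not describe any real machine. With a single shared bus every memory request serializes, while a crossbar serves requests to distinct memory modules in parallel, and only requests that conflict on the same module queue up.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Toy model: each of NPROC processors issues one request per cycle,
   each aimed at one of NMOD memory modules.                          */
#define NPROC 4
#define NMOD  4

/* A single shared bus carries one transaction at a time, so every
   request costs one bus cycle regardless of its target.              */
static int bus_cycles(const int target[NPROC]) {
    (void)target;
    return NPROC;
}

/* A crossbar routes each processor directly to its module; only
   requests that collide on the same module are served one after
   another, so the cost is the heaviest per-module load.              */
static int crossbar_cycles(const int target[NPROC]) {
    int load[NMOD] = {0};
    int worst = 0;
    for (int p = 0; p < NPROC; ++p)
        if (++load[target[p]] > worst)
            worst = load[target[p]];
    return worst;
}

int main(void) {
    int conflictfree[NPROC] = {0, 1, 2, 3};  /* all distinct modules  */
    int hotspot[NPROC]      = {0, 0, 0, 1};  /* three hit module 0    */

    printf("conflict-free: bus %d cycles, crossbar %d cycles\n",
           bus_cycles(conflictfree), crossbar_cycles(conflictfree));
    printf("hot spot:      bus %d cycles, crossbar %d cycles\n",
           bus_cycles(hotspot), crossbar_cycles(hotspot));
    return 0;
}
</syntaxhighlight>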
 
==hUMA==
In April 2013, the term hUMA (''heterogeneous uniform memory access'') began to appear in [[AMD]] promotional material to refer to [[CPU]] and [[GPU]] sharing the same system memory via [[Cache coherence|cache coherent]] views. Advantages include an easier programming model and less copying of data between separate memory pools.<ref>{{cite web |author=Peter Bright |url=https://arstechnica.com/information-technology/2013/04/amds-heterogeneous-uniform-memory-access-coming-this-year-in-kaveri/ |title=AMD's "heterogeneous Uniform Memory Access" coming this year in Kaveri |website=[[Ars Technica]] |date=April 30, 2013}}</ref>
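
The programming-model advantage can be sketched with an analogous, non-AMD mechanism, CUDA managed memory: one allocation is visible to both the CPU and the GPU, so the program contains no explicit copies between separate memory pools. The kernel name and sizes below are illustrative, and on discrete GPUs the runtime migrates pages behind the scenes rather than sharing one physical memory, so this only illustrates the single-pointer programming model, not AMD's hUMA hardware.

<syntaxhighlight lang="cuda">
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: doubles every element of the shared buffer.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;       // illustrative size
    float *data = nullptr;

    // One allocation reachable through the same pointer from CPU and
    // GPU code; the programmer writes no host<->device copies.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, n);     // GPU reads and writes
    cudaDeviceSynchronize();                      // wait for the GPU

    printf("data[0] = %f\n", data[0]);            // CPU reads the result
    cudaFree(data);
    return 0;
}
</syntaxhighlight>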
 
==See also==
* [[Non-uniform memory access]]
* [[Cache-only memory architecture]]
* [[Heterogeneous System Architecture]]
 
==References==
{{reflist}}
 
{{Parallel computing}}
[[Category:Computer memory]]
[[Category:Parallel computing]]
 
{{compu-hardware-stub}}
 