{{Short description|Parallel computing memory architecture}}
'''Uniform memory access''' ('''UMA''') is a [[shared-memory architecture]] used in [[parallel computer]]s. All the processors in the UMA model share the physical memory uniformly. In a UMA architecture, access time to a memory ___location is independent of which processor makes the request or which memory chip contains the transferred data. Uniform memory access computer architectures are often contrasted with [[non-uniform memory access]] (NUMA) architectures. In the NUMA architecture, each processor may use a private cache. Peripherals are also shared in some fashion. The UMA model is suitable for general-purpose and [[time sharing]] applications by multiple users. It can be used to speed up the execution of a single large program in [[real-time computing|time-critical]] applications.<ref>{{cite book |title=Advanced Computer Architecture |author=Kai Hwang |isbn=0-07-113342-9}}</ref>
 
==Types of UMA architectures==
There are three types of UMA architectures:
* UMA using bus-based [[symmetric multiprocessing]] (SMP) architectures;
* UMA using [[crossbar switch]]es;
* UMA using [[multistage interconnection networks]].
 
==hUMA==
In April 2013, the term "hUMA" (for ''heterogeneous uniform memory access'') began to appear in [[AMD]] promotional material to refer to [[CPU]] and [[GPU]] sharing the same system memory via [[Cache coherence|cache coherent]] views. Advantages include an easier programming model and less copying of data between separate memory pools.<ref>{{cite web |author=Peter Bright |url=https://arstechnica.com/information-technology/2013/04/amds-heterogeneous-uniform-memory-access-coming-this-year-in-kaveri/ |title=AMD's "heterogeneous Uniform Memory Access" coming this year in Kaveri |website=[[Ars Technica]] |date=April 30, 2013}}</ref>
 
==See also==
* [[Non-uniform memory access]]
* [[Cache-only memory architecture]]
* [[Heterogeneous System Architecture]]
 
==References==
{{reflist}}
 
{{Parallel computing}}
[[Category:Computer memory]]
[[Category:Parallel computing]]
 
{{compu-hardware-stub}}