{{Short description|Computing system}}
'''Heterogeneous System Architecture''' ('''HSA''') is a cross-vendor set of specifications that allow for the integration of [[central processing unit]]s and [[GPU|graphics processors]] on the same bus, with shared [[Main memory|memory]] and [[Task (computing)|tasks]].<ref>{{cite web |url=http://www.tomshardware.com/news/AMD-HSA-hUMA-APU,22324.html |title=AMD Unveils its Heterogeneous Uniform Memory Access (hUMA) Technology |website=Tom's Hardware |author=Tarun Iyer |date=30 April 2013}}</ref> HSA is being developed by the [[HSA Foundation]], which includes (among many others) [[Advanced Micro Devices|AMD]] and [[ARM Holdings|ARM]]. The platform's stated aim is to reduce [[communication latency]] between CPUs, GPUs and other [[compute device]]s, and make these various devices more compatible from a programmer's perspective,<ref name="whitepaper">{{Cite report |author=George Kyriazis |date=30 August 2012 |title=Heterogeneous System Architecture: A Technical Review |url=http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/hsa10.pdf |publisher=AMD |access-date=26 May 2014 |archive-url=https://web.archive.org/web/20140328140823/http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/hsa10.pdf |archive-date=28 March 2014 |url-status=dead }}</ref>{{rp|3}}<ref name="whatis">{{cite web |title=What is Heterogeneous System Architecture (HSA)? |url=http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/ |publisher=AMD}}</ref> relieving the programmer of the task of planning the moving of data between the devices' disjoint memories (as must currently be done with [[OpenCL]] or [[CUDA]]).
CUDA and OpenCL, as well as most other sufficiently advanced programming frameworks, can use HSA to increase their execution performance.<ref>{{cite web |url=http://www.slideshare.net/mobile/linaroorg/hsa-linaro-updatejuly102013 |title=LCE13: Heterogeneous System Architecture (HSA) on ARM |author=Linaro |website=slideshare.net |date=21 March 2014}}</ref> [[Heterogeneous computing]] is widely used in [[MPSoC|system-on-chip]] devices such as [[Tablet computer|tablets]], [[smartphone]]s, other mobile devices, and [[video game console]]s.<ref name="gpuscience">{{cite web |url=http://gpuscience.com/cs/heterogeneous-system-architecture-purpose-and-outlook/ |title=Heterogeneous System Architecture: Purpose and Outlook |date=2012-11-09 |website=gpuscience.com}}</ref> HSA allows programs to use the graphics processor for [[floating point]] calculations without separate memory or scheduling.<ref>{{cite web |title=Heterogeneous system architecture: Multicore image processing using a mix of CPU and GPU elements |website=Embedded Computing Design |url=http://embedded-computing.com/articles/heterogeneous-processing-using-mix-cpu-gpu-elements/}}</ref>
== Rationale ==
The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well.
{{Gallery
| height = 190
| align = center
| File:HSA – using the GPU without HSA.svg
| Steps performed when offloading calculations to the [[Graphics processing unit|GPU]] on a non-HSA system
}}
Modern GPUs are very well suited to perform [[Single instruction, multiple data|single instruction, multiple data]] (SIMD) and [[Single instruction, multiple threads|single instruction, multiple threads]] (SIMT) operations, while modern CPUs remain better suited for branching-heavy, serial code.
== Overview ==
{{Refimprove section|date=May 2014}}
Sharing system memory directly between multiple system actors, an approach originally introduced in [[embedded system]]s such as the [[Cell Broadband Engine]], makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units{{snd}} [[central processing unit]]s (CPUs), [[graphics processing unit]]s (GPUs), [[digital signal processor]]s (DSPs), or any type of [[application-specific integrated circuit]]s (ASICs). The system architecture allows any accelerator, for instance a [[GPU|graphics processor]], to operate at the same processing level as the system's CPU.
Among its main features, HSA defines a unified [[virtual address space]] for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share [[page table]]s so that devices can exchange data by sharing [[Pointer (computer programming)|pointers]]. This is to be supported by custom [[memory management unit]]s.
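The practical effect of such pointer sharing can be sketched with OpenCL 2.0's fine-grained shared virtual memory, one of the programming interfaces that HSA-capable hardware is intended to back. The fragment below is only an illustration and is not taken from the HSA specifications; it assumes an OpenCL 2.0 context {{Mono|ctx}}, command queue {{Mono|queue}} and kernel {{Mono|kernel}} have already been created on a device with fine-grained SVM support, and it omits error handling:

<syntaxhighlight lang="c">
/* Illustrative sketch: passing data to a GPU kernel by pointer rather than
   by copy, using OpenCL 2.0 fine-grained shared virtual memory. */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stddef.h>

void run_on_gpu(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                size_t n)
{
    /* One allocation, visible at the same virtual addresses to CPU and GPU. */
    float *data = (float *)clSVMAlloc(ctx,
                      CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                      n * sizeof(float), 0);

    for (size_t i = 0; i < n; ++i)   /* the CPU writes the buffer in place... */
        data[i] = (float)i;

    /* ...and the GPU kernel receives the very same pointer: no staging copy
       into a separate device buffer is needed. */
    clSetKernelArgSVMPointer(kernel, 0, data);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(queue);

    clSVMFree(ctx, data);
}
</syntaxhighlight>

Without a shared virtual address space, the host data would instead have to be staged into a separate device allocation (for example with {{Mono|clEnqueueWriteBuffer}}) and the results copied back afterwards.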
So far, the HSA specifications cover:

===HSA Intermediate Layer===<!--incoming redirect-->
HSA Intermediate Layer (HSAIL), a [[p-code machine|virtual instruction set]] for parallel programs:
* similar{{according to whom|date=May 2015}} to [[LLVM Intermediate Representation]] and [[Standard Portable Intermediate Representation|SPIR]] (used by [[OpenCL]] and [[Vulkan (API)|Vulkan]])
* finalized to a specific instruction set by a [[Just-in-time compilation|JIT compiler]]
* makes late decisions on which core(s) should run a task
* explicitly parallel
* supports exceptions, virtual functions and other high-level features

===HSA memory model===
* compatible with [[C++11]], OpenCL, [[Java (programming language)|Java]] and [[.NET Framework|.NET]] memory models
* relaxed consistency
* designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. [[C (programming language)|C]])
* will make it much easier to develop third-party compilers for a wide range of heterogeneous products programmed in [[Fortran]], C++, [[C++ AMP]], Java, et al.

===HSA dispatcher and run-time===
* designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
* any core can schedule work for any other, including itself
* significant reduction of overhead of scheduling work for a core
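As a concrete illustration of the dispatcher and run-time component, the sketch below uses the HSA runtime's C API to discover a GPU agent and create a user-mode work queue. It is a minimal example rather than part of the specification; it assumes an installed HSA runtime (such as AMD's ROCR implementation) and its {{Mono|hsa.h}} header, and omits error checking:

<syntaxhighlight lang="c">
/* Minimal sketch: open the HSA runtime, find a GPU agent and create a
   user-mode queue into which AQL packets can later be written. */
#include <hsa/hsa.h>   /* header path assumed; it may differ per runtime */
#include <stdint.h>
#include <stdio.h>

/* Callback for hsa_iterate_agents(): remember the first GPU agent found. */
static hsa_status_t find_gpu(hsa_agent_t agent, void *data)
{
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t *)data = agent;
        return HSA_STATUS_INFO_BREAK;       /* stop the iteration */
    }
    return HSA_STATUS_SUCCESS;              /* keep looking */
}

int main(void)
{
    hsa_agent_t gpu;
    hsa_queue_t *queue;

    hsa_init();                             /* bring up the runtime */
    hsa_iterate_agents(find_gpu, &gpu);     /* enumerate CPU/GPU/DSP agents */

    /* The queue lives in user space: work is submitted by writing packets
       into it, not by issuing a system call per dispatch. */
    hsa_queue_create(gpu, 4096, HSA_QUEUE_TYPE_SINGLE,
                     NULL, NULL, UINT32_MAX, UINT32_MAX, &queue);
    printf("created a queue holding %u AQL packets\n", queue->size);

    hsa_queue_destroy(queue);
    hsa_shut_down();
    return 0;
}
</syntaxhighlight>

Because every agent can own such queues and enqueue packets into the queues of other agents, any core can schedule work for any other core, including itself, as listed above.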
Mobile devices are one of HSA's application areas, in which it yields improved power efficiency.<ref name="gpuscience" />
=== Block diagrams ===
The block diagrams below provide a high-level illustration of how HSA operates and how it compares to traditional architectures.
{{Gallery
| height = 190
| align = center
| File:Desktop computer bus bandwidths.svg
| Standard architecture with a discrete [[graphics card|GPU]] attached to the [[PCI Express]] bus. [[Zero-copy]] between the GPU and CPU is not possible due to distinct physical memories.
|File:HSA-enabled virtual memory with distinct graphics card.svg
| HSA brings unified virtual memory and facilitates passing pointers over [[PCI Express]] instead of copying the entire data.
| File:Integrated graphics with distinct memory allocation.svg
| In partitioned main memory, one part of the system memory is exclusively allocated to the GPU. As a result, zero-copy operation is not possible.
| File:HSA-enabled integrated graphics.svg
| Unified main memory, made possible by HSA-enabled integrated graphics: the CPU and the GPU share the same physical memory without a fixed partition.
| File:MMU and IOMMU.svg
| Both the CPU's [[memory management unit]] (MMU) and the GPU's [[input–output memory management unit]] (IOMMU) have to comply with the HSA hardware specifications.
}}
{{Clear}}
==Software support{{Anchor|AMDKFD|HQ|HMM}}==
[[File:Linux AMD graphics stack.svg|thumb|AMD's Linux graphics stack; {{Mono|amdkfd}} is the HSA kernel driver that works alongside the graphics driver<ref>{{cite web
| url =
| title = AMDKFD Driver Still Evolving For Open-Source HSA On Linux
| date =
| author = Michael Larabel | publisher = [[Phoronix]]
}}</ref><ref name="kernelnewbies-3.19" />]]
Some of the HSA-specific features implemented in the hardware need to be supported by the [[operating system kernel]] and specific device drivers. For example, support for AMD [[Radeon]] and [[AMD FirePro]] graphics cards, and [[AMD Accelerated Processing Unit|APUs]] based on [[Graphics Core Next]] (GCN), was merged into version 3.19 of the [[Linux kernel mainline]], released on 8 February 2015.<ref name="kernelnewbies-3.19">{{cite web
| url = http://kernelnewbies.org/Linux_3.19#head-ae54e026ef7588f4431f7e94178d27d5cd830bbf
| title = Linux kernel 3.19, Section 1.3. HSA driver for AMD GPU devices
| date =
| website = kernelnewbies.org
}}</ref> Programs do not interact directly with {{Mono|amdkfd}}{{Explain|date=December 2023}}, but queue their jobs utilizing the HSA runtime.<ref>{{cite web
| url = https://github.com/HSAFoundation/HSA-Runtime-Reference-Source/blob/master/README.md
| title = HSA-Runtime-Reference-Source/README.md at master
| date =
| website = github.com
}}</ref> This first implementation, known as {{Mono|amdkfd}}, focuses on [[AMD Accelerated Processing Unit#Steamroller architecture .282014.29: Kaveri|"Kaveri"]] or "Berlin" APUs and works alongside the existing Radeon kernel graphics driver.
Additionally, {{Mono|amdkfd}} supports ''heterogeneous queuing'' (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective.
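In outline, submitting work through the HSA runtime looks as follows: the application reserves a slot in a user-mode queue, fills in an Architected Queuing Language (AQL) kernel-dispatch packet and rings the queue's doorbell signal; the kernel driver is only involved in setting the queue up beforehand. The fragment below follows the HSA runtime C API but is illustrative only: {{Mono|queue}}, {{Mono|kernel_object}}, {{Mono|kernarg_address}} and {{Mono|completion_signal}} are assumed to have been prepared already (queue creation and code-object finalization are not shown), and error handling is omitted.

<syntaxhighlight lang="c">
/* Illustrative sketch of dispatching work through the HSA runtime's
   user-mode queue rather than by calling the kernel driver directly. */
#include <hsa/hsa.h>   /* header path assumed; it may differ per runtime */
#include <stdint.h>

void dispatch(hsa_queue_t *queue, uint64_t kernel_object,
              void *kernarg_address, hsa_signal_t completion_signal,
              uint32_t grid_size)
{
    /* Reserve a packet slot by atomically advancing the write index. */
    uint64_t index = hsa_queue_add_write_index_relaxed(queue, 1);

    hsa_kernel_dispatch_packet_t *pkt =
        (hsa_kernel_dispatch_packet_t *)queue->base_address
        + (index % queue->size);

    /* Describe a one-dimensional grid of work-items. */
    pkt->setup            = 1 << HSA_KERNEL_DISPATCH_PACKET_SETUP_DIMENSIONS;
    pkt->workgroup_size_x = 256;
    pkt->workgroup_size_y = 1;
    pkt->workgroup_size_z = 1;
    pkt->grid_size_x      = grid_size;
    pkt->grid_size_y      = 1;
    pkt->grid_size_z      = 1;
    pkt->private_segment_size = 0;
    pkt->group_segment_size   = 0;
    pkt->kernel_object    = kernel_object;    /* finalized kernel code */
    pkt->kernarg_address  = kernarg_address;  /* arguments; may hold plain host pointers */
    pkt->completion_signal = completion_signal;

    /* Publish the packet (a production implementation stores the header with
       an atomic release operation), then ring the doorbell. */
    pkt->header = (HSA_PACKET_TYPE_KERNEL_DISPATCH << HSA_PACKET_HEADER_TYPE)
                | (HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_ACQUIRE_FENCE_SCOPE)
                | (HSA_FENCE_SCOPE_SYSTEM << HSA_PACKET_HEADER_RELEASE_FENCE_SCOPE);
    hsa_signal_store_relaxed(queue->doorbell_signal, index);

    /* The completion signal starts at 1 and is decremented by the packet
       processor when the kernel finishes, so wait for it to drop below 1. */
    hsa_signal_wait_acquire(completion_signal, HSA_SIGNAL_CONDITION_LT, 1,
                            UINT64_MAX, HSA_WAIT_STATE_BLOCKED);
}
</syntaxhighlight>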
Integrated support for HSA platforms has been announced for the "Sumatra" release of [[OpenJDK]], due in 2015.<ref>{{cite web |url=http://www.hpcwire.com/2013/08/26/hsa_foundation_aims_to_boost_javas_gpu_prowess/ |title=HSA Foundation Aims to Boost Java's GPU Prowess |website=HPCwire |date=26 August 2013}}</ref>
[[AMD APP SDK]] is AMD's proprietary software development kit targeting [[parallel computing]], available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.<ref>{{cite web |url=https://github.com/HSA-Libraries/Bolt |title=Bolt on github|website=[[GitHub]]|date=11 January 2022}}</ref>
[[GPUOpen]] comprises a couple of other software tools related to HSA. [[CodeXL]] version 2.0 includes an HSA profiler.
{{Clear}}
== Hardware support ==

=== AMD ===
{{As of|2015|2}}, only AMD's "Kaveri" A-series APUs (cf. [[List of AMD Accelerated Processing Unit microprocessors#"Kaveri" (2014, 28 nm)|"Kaveri" desktop processors]] and [[List of AMD Accelerated Processing Unit microprocessors#"Kaveri" 2014, 28 nm|"Kaveri" mobile processors]]) and Sony's [[PlayStation 4]] allowed the [[Graphics processing unit#Integrated graphics|integrated GPU]] to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.{{Citation needed|date=June 2016}}
Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.{{Citation needed|date=June 2016}}
{{AMD APU features}}
=== ARM ===
ARM's [[Bifrost (microarchitecture)|Bifrost]] microarchitecture, as implemented in the Mali-G71,<ref>{{cite web |url=http://www.anandtech.com/show/10375/arm-unveils-bifrost-and-mali-g71/5 |archive-url=https://archive.today/20160910101608/http://www.anandtech.com/show/10375/arm-unveils-bifrost-and-mali-g71/5 |url-status=dead |archive-date=10 September 2016 |title=ARM Bifrost GPU Architecture |date=2016-05-30}}</ref> is fully compliant with the HSA 1.1 hardware specifications. {{As of|2016|6}}, ARM has not announced software support that would use this hardware feature.
== See also ==
* [[General-purpose computing on graphics processing units]] (GPGPU)
* [[Non-Uniform Memory Access]] (NUMA)
* [[Shared memory]]
* [[Zero-copy]]
* A technique enabling zero-copy operation for a CPU and a parallel accelerator<ref>Computer memory architecture for hybrid serial and parallel computing systems, US patents 7,707,388 (2010) and 8,145,879 (2012). Inventor: [[Uzi Vishkin]]</ref>
== References ==
{{Reflist|30em}}
== External links ==
{{Commons category}}
* {{YouTube|id=ln8JpfaLvbM|title=HSA Heterogeneous System Architecture Overview}} by Vinod Tipparaju at [[ACM/IEEE Supercomputing Conference|SC13]] in November 2013
* [https://web.archive.org/web/20160514070602/http://www.mpsoc-forum.org/previous/2013/slides/8-Hegde.pdf HSA and the software ecosystem]
* [http://www-conf.slac.stanford.edu/xldb2012/talks/xldb2012_wed_1400_MichaelHouston.pdf 2012 – HSA by Michael Houston] {{Webarchive|url=https://web.archive.org/web/20160305141652/http://www-conf.slac.stanford.edu/xldb2012/talks/xldb2012_wed_1400_MichaelHouston.pdf |date=5 March 2016 }}
{{Use dmy dates|date=July 2019}}
[[Category:Heterogeneous System Architecture| ]]