Heterogeneous System Architecture

Originally introduced by [[embedded system]]s such as the [[Cell Broadband Engine]], sharing system memory directly between multiple system actors has made heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units{{snd}} [[central processing unit]]s (CPUs), [[graphics processing unit]]s (GPUs), [[digital signal processor]]s (DSPs), or any type of [[application-specific integrated circuit]] (ASIC). The system architecture allows any accelerator, for instance a [[GPU|graphics processor]], to operate at the same processing level as the system's CPU.
 
Among its main features, HSA defines a unified [[virtual address space]] for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share [[Page (computer memory)|page tables]] so that devices can exchange data by sharing [[Pointer (computer programming)|pointers]]. This is to be supported by custom [[memory management unit]]s.<ref name="whitepaper"/>{{rp|6–7}} To make interoperability possible, and to ease various aspects of programming, HSA is intended to be [[Instruction set|ISA]]-agnostic for both CPUs and accelerators, and to support high-level programming languages.
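
The shared-pointer model can be illustrated with a minimal sketch using [[OpenCL]] 2.0 shared virtual memory (SVM), one programming interface through which HSA-style unified addressing is exposed on supporting hardware. The example assumes an OpenCL 2.0 platform with a GPU device is installed and omits error checking for brevity: the host allocates a buffer once, writes to it through an ordinary pointer, and hands that same pointer to a GPU kernel without any explicit copy.

<syntaxhighlight lang="c">
#define CL_TARGET_OPENCL_VERSION 200
#include <stdio.h>
#include <CL/cl.h>

/* Kernel: the device dereferences the same pointer the host allocated. */
static const char *src =
    "__kernel void scale(__global float *data) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= 2.0f;\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* One coarse-grained SVM allocation, reachable from both CPU and GPU
       through the same virtual address. */
    const size_t n = 1024;
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    /* The CPU writes through the pointer (map marks a synchronization point). */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
    for (size_t i = 0; i < n; ++i)
        data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    /* The GPU kernel receives the very same pointer; no buffer copy is issued. */
    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* The CPU reads the result back through the same pointer. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, NULL, NULL);
    printf("data[42] = %f\n", data[42]);   /* expected: 84.0 */
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
    clFinish(q);

    clSVMFree(ctx, data);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
</syntaxhighlight>

On HSA hardware, the memory management units of the CPU and the accelerator resolve such a pointer through the shared page tables described above, so the map and unmap calls serve as synchronization points rather than data transfers; on platforms exposing fine-grained SVM they are not required at all.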
 
So far, the HSA specifications cover: