OneAPI (compute acceleration)

{{short description|Open standard for parallel computing}}
{{lowercase title}}
{{otheruses|OneAPI (disambiguation)}}
{{Infobox software
| name = oneAPI
| website = {{official URL}}
}}
 
'''oneAPI''' is an [[open standard]], adopted by Intel,{{sfn|Fortenberry|Tomov|2022|p=22}} for a unified [[application programming interface]] (API) intended to be used across different computing [[Hardware acceleration|accelerator]] ([[coprocessor]]) architectures, including [[GPU]]s, [[AI accelerator]]s and [[field-programmable gate array]]s. It is intended to eliminate the need for developers to maintain separate code bases, multiple programming languages, tools, and workflows for each architecture.<ref>{{Cite web|url=https://www.hpcwire.com/2019/12/09/intel-expands-its-silicon-portfolio-and-oneapi-software-initiative-for-next-generation-hpc/|title=Intel Expands its Silicon Portfolio, and oneAPI Software Initiative for Next-Generation HPC|date=2019-12-09|website=HPCwire|language=en-US|access-date=2020-02-11}}</ref><ref>{{Cite web|url=https://www.hpcwire.com/2019/11/17/intel-debuts-new-gpu-ponte-vecchio-and-outlines-aspirations-for-oneapi/|title=Intel Debuts New GPU – Ponte Vecchio – and Outlines Aspirations for oneAPI|date=2019-11-18|website=HPCwire|language=en-US|access-date=2020-02-11}}</ref><ref>{{Cite web|url=https://www.extremetech.com/computing/302284-sc19-intel-unveils-new-gpu-stack-oneapi-development-effort|title=SC19: Intel Unveils New GPU Stack, oneAPI Development Effort - ExtremeTech|website=www.extremetech.com|access-date=2020-02-11}}</ref><ref>{{Cite web|url=https://www.servethehome.com/intel-one-api-to-rule-them-all-is-much-needed/|title=Intel One API to Rule Them All Is Much Needed to Expand TAM|last=Kennedy|first=Patrick|date=2018-12-24|website=ServeTheHome|language=en-US|access-date=2020-02-11}}</ref>
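
The following minimal example (an illustrative sketch, not code taken from the specification) shows the DPC++/SYCL single-source style used by oneAPI, in which the same C++ code selects a device, manages data movement, and launches a kernel that can target any supported accelerator backend:

<syntaxhighlight lang="cpp">
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // A queue targets whichever accelerator (GPU, FPGA, CPU) the SYCL
  // runtime exposes as the default device.
  sycl::queue q{sycl::default_selector_v};
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  {
    // Buffers manage data movement between host and device automatically.
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler &h) {
      sycl::accessor acc_a(buf_a, h, sycl::read_only);
      sycl::accessor acc_b(buf_b, h, sycl::read_only);
      sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);
      // The same kernel source can be compiled for different backends
      // (e.g. Level Zero, OpenCL, CUDA, HIP) by the toolchain.
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        acc_c[i] = acc_a[i] + acc_b[i];
      });
    });
  } // Destroying the buffers synchronizes and copies results back to the host.

  std::cout << "c[0] = " << c[0] << std::endl;  // expected value: 3
  return 0;
}
</syntaxhighlight>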
 
oneAPI competes with other GPU computing stacks: [[CUDA]] by [[Nvidia]] and [[ROCm]] by [[AMD]].
 
== Specification ==
 
== Hardware abstraction layer ==
oneAPI Level Zero,<ref>{{Cite web|url=https://www.tomshardware.com/news/intel-releases-bare-metal-oneapi-level-zero-specification|title=Intel Releases Bare-Metal oneAPI Level Zero Specification|last=Verheyde|first=Arne|website=Tom's Hardware|date=8 December 2019|language=en|access-date=2020-02-11}}</ref><ref>{{Cite web|url=https://www.phoronix.com/scan.php?page=news_item&px=Intel-oneAPI-Level-Zero|title=Intel's Compute Runtime Adds oneAPI Level Zero Support - Phoronix|website=www.phoronix.com|access-date=2020-03-10}}</ref><ref>{{Cite web|url=https://www.phoronix.com/scan.php?page=article&item=intel-level-zero&num=1|title=Initial Benchmarks With Intel oneAPI Level Zero Performance - Phoronix|website=www.phoronix.com|access-date=2020-04-13}}</ref> the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs in order to interface with compiler runtimes and other developer tools.
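
As a rough illustration (a minimal sketch against the public Level Zero C API, with error handling reduced to a macro), the following program initializes the loader and enumerates the accelerator devices exposed by each installed driver; a complete application would go on to create contexts, command queues, and modules through the same interface:

<syntaxhighlight lang="cpp">
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

// Abbreviated error handling: print the failing call and exit.
#define ZE_CHECK(call)                                               \
  do {                                                               \
    ze_result_t result_ = (call);                                    \
    if (result_ != ZE_RESULT_SUCCESS) {                              \
      std::fprintf(stderr, "%s failed (%d)\n", #call, (int)result_); \
      return 1;                                                      \
    }                                                                \
  } while (0)

int main() {
  // Initialize the Level Zero loader and driver stack.
  ZE_CHECK(zeInit(0));

  // Discover installed drivers (typically one per vendor implementation).
  uint32_t driverCount = 0;
  ZE_CHECK(zeDriverGet(&driverCount, nullptr));
  std::vector<ze_driver_handle_t> drivers(driverCount);
  ZE_CHECK(zeDriverGet(&driverCount, drivers.data()));

  for (ze_driver_handle_t driver : drivers) {
    // Each driver exposes the accelerator devices it manages.
    uint32_t deviceCount = 0;
    ZE_CHECK(zeDeviceGet(driver, &deviceCount, nullptr));
    std::vector<ze_device_handle_t> devices(deviceCount);
    ZE_CHECK(zeDeviceGet(driver, &deviceCount, devices.data()));

    for (ze_device_handle_t device : devices) {
      ze_device_properties_t props = {};
      props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
      ZE_CHECK(zeDeviceGetProperties(device, &props));
      std::printf("Found device: %s\n", props.name);
    }
  }
  return 0;
}
</syntaxhighlight>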
 
== Implementations ==
[[Heidelberg University|University of Heidelberg]] has developed a SYCL/DPC++ implementation for both AMD and Nvidia GPUs.<ref>{{Cite web|last=Salter|first=Jim|date=2020-09-30|title=Intel, Heidelberg University team up to bring Radeon GPU support to AI|url=https://arstechnica.com/gadgets/2020/09/intel-heidelberg-university-team-up-to-bring-radeon-gpu-support-to-ai/|access-date=2021-10-07|website=Ars Technica|language=en-us}}</ref>
 
[[Huawei]] released a DPC++ compiler for its Ascend AI chipset.<ref>{{Citation|title=Extending DPC++ with Support for Huawei Ascend AI Chipset|date=27 April 2021|url=https://www.youtube.com/watch?v=7foee4_QkbU|language=en|access-date=2021-10-07}}</ref>
 
[[Fujitsu]] has created an open-source [[ARM architecture|ARM]] version of the oneAPI Deep Neural Network Library (oneDNN)<ref>{{Cite web|last=fltech|date=19 November 2020|title=A Deep Dive into a Deep Learning Library for the A64FX Fugaku CPU - The Development Story in the Developer's Own Words|url=https://blog.fltech.dev/entry/2020/11/19/fugaku-onednn-deep-dive-en|access-date=2021-02-10|website=fltech - 富士通研究所の技術ブログ|language=ja}}</ref> for the [[Fujitsu A64FX|A64FX]] CPU used in the [[Fugaku (supercomputer)|Fugaku]] supercomputer.
 
== Comparison with competitors ==
oneAPI competes with other GPU computing stacks: [[CUDA]] by [[Nvidia]] and [[ROCm]] by [[AMD]].
 
Whereas Nvidia's CUDA is closed source, AMD's ROCm and Intel's oneAPI are open source.
 
=== Nvidia CUDA ===
{{Main|CUDA}}
 
'''CUDA (Compute Unified Device Architecture)''' is a closed-source [[parallel computing]] platform and [[application programming interface]] (API) that allows software to use certain types of [[graphics processing units]] (GPUs) from [[Nvidia]] for accelerated general-purpose processing, an approach called general-purpose computing on GPUs ([[GPGPU]]).
 
=== AMD ROCm ===
{{Main|ROCm}}
 
'''ROCm'''<ref>{{Cite web|url=https://github.com/RadeonOpenCompute/ROCm/issues/1628|title=Question: What does ROCm stand for? · Issue #1628 · RadeonOpenCompute/ROCm|website=Github.com|access-date=January 18, 2022}}</ref> is an open source software stack for [[graphics processing unit]] (GPU) programming from [[Advanced Micro Devices]] (AMD).
 
== Unified Acceleration Foundation (UXL) and the future for oneAPI {{anchor|UXL}} ==
 
The Unified Acceleration Foundation (UXL) is a technology consortium working on the continuation of the oneAPI initiative, with the goal of creating a new open-standard accelerator software ecosystem and related open standards and specification projects through Working Groups and Special Interest Groups (SIGs), in order to compete with Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.<ref>{{Cite web |title=Exclusive: Behind the plot to break Nvidia's grip on AI by targeting software |website=[[Reuters]] |url=https://www.reuters.com/technology/behind-plot-break-nvidias-grip-ai-by-targeting-software-2024-03-25/ |access-date=2024-04-05}}</ref>
 
==References==
 
== External links ==
* {{official website |name=oneAPI Industry Specification}}
* [https://software.intel.com/en-us/oneapi Intel oneAPI Product]
* [https://www.codeplay.com/portal/12-16-19-bringing-nvidia-gpu-support-to-sycl-developers Bringing Nvidia GPU support to SYCL developers]
* {{cite book |display-authors= 1 |first1= James |last1= Reinders |first2= Ben |last2= Ashbaugh |first3= James |last3= Brodman |first4= Michael |last4= Kinsner |first5= John |last5= Pennycook |first6= Xinmin |last6= Tian |url= https://link.springer.com/book/10.1007/978-1-4842-5574-2 |title= Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems using C++ and SYCL |publisher= Springer |isbn= 978-1-4842-5574-2 |doi= 10.1007/978-1-4842-5574-2 |series= Open Access Book |year= 2021 |s2cid= 226231933 }}
* [https://developer.codeplay.com/products/oneapi/nvidia/2025.1.0/guides/index oneAPI for NVIDIA GPUs 2025.1.0]
* {{GitHub|oneapi-src|oneapi-src}}
* [https://developer.codeplay.com/products/oneapi/amd/2025.1.0/guides/index oneAPI for AMD GPUs 2025.1.0]
 
[[Category:Application programming interfaces]]
[[Category:Cross-platform software]]
[[Category:Intel software]]