== Hardware abstraction layer ==
oneAPI Level Zero,<ref>{{Cite web|url=https://www.tomshardware.com/news/intel-releases-bare-metal-oneapi-level-zero-specification|title=Intel Releases Bare-Metal oneAPI Level Zero Specification|last=Verheyde|first=Arne|website=Tom's Hardware|date=8 December 2019|language=en|access-date=2020-02-11}}</ref><ref>{{Cite web|url=https://www.phoronix.com/scan.php?page=news_item&px=Intel-oneAPI-Level-Zero|title=Intel's Compute Runtime Adds oneAPI Level Zero Support - Phoronix|website=www.phoronix.com|access-date=2020-03-10}}</ref><ref>{{Cite web|url=https://www.phoronix.com/scan.php?page=article&item=intel-level-zero&num=1|title=Initial Benchmarks With Intel oneAPI Level Zero Performance - Phoronix|website=www.phoronix.com|access-date=2020-04-13}}</ref> the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator must expose to interface with compiler runtimes and other developer tools.
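
As an illustration only (not code taken from the specification), the following minimal sketch shows how an application might use the Level Zero C API to initialize the driver layer and enumerate the available accelerator devices; error handling is largely omitted.

<syntaxhighlight lang="cpp">
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

int main() {
    // Initialize Level Zero (GPU drivers only in this sketch).
    if (zeInit(ZE_INIT_FLAG_GPU_ONLY) != ZE_RESULT_SUCCESS)
        return 1;

    // Enumerate the installed drivers.
    uint32_t driverCount = 0;
    zeDriverGet(&driverCount, nullptr);
    std::vector<ze_driver_handle_t> drivers(driverCount);
    zeDriverGet(&driverCount, drivers.data());

    for (auto driver : drivers) {
        // Enumerate the devices exposed by each driver.
        uint32_t deviceCount = 0;
        zeDeviceGet(driver, &deviceCount, nullptr);
        std::vector<ze_device_handle_t> devices(deviceCount);
        zeDeviceGet(driver, &deviceCount, devices.data());

        for (auto device : devices) {
            ze_device_properties_t props{};
            props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
            zeDeviceGetProperties(device, &props);
            std::printf("Found device: %s\n", props.name);
        }
    }
    return 0;
}
</syntaxhighlight>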
 
== Implementations ==
[[Heidelberg University|University of Heidelberg]] has developed a SYCL/DPC++ implementation for both AMD and Nvidia GPUs.<ref>{{Cite web|last=Salter|first=Jim|date=2020-09-30|title=Intel, Heidelberg University team up to bring Radeon GPU support to AI|url=https://arstechnica.com/gadgets/2020/09/intel-heidelberg-university-team-up-to-bring-radeon-gpu-support-to-ai/|access-date=2021-10-07|website=Ars Technica|language=en-us}}</ref>
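
Regardless of which backend such an implementation targets, the DPC++/SYCL source code itself is portable. The sketch below is a generic SYCL 2020 vector-addition example (an illustration, not code from the cited project) that a conforming compiler could build for Intel, AMD or Nvidia GPUs.

<syntaxhighlight lang="cpp">
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Let the SYCL runtime pick whatever accelerator it finds (GPU, CPU, ...).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    {
        // Buffers wrap the host vectors for the duration of this scope.
        sycl::buffer<float> bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // One work-item per element: C = A + B.
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    } // Buffer destruction copies the results back to the host vector c.
    std::cout << "c[0] = " << c[0] << "\n";
    return 0;
}
</syntaxhighlight>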
 
[[Huawei]] released a DPC++ compiler for its Ascend AI chipset.<ref>{{Citation|title=Extending DPC++ with Support for Huawei Ascend AI Chipset|date=27 April 2021|url=https://www.youtube.com/watch?v=7foee4_QkbU|language=en|access-date=2021-10-07}}</ref>
 
[[Fujitsu]] has created an open-source [[ARM architecture|ARM]] version of the oneAPI Deep Neural Network Library (oneDNN)<ref>{{Cite web|last=fltech|date=19 November 2020|title=A Deep Dive into a Deep Learning Library for the A64FX Fugaku CPU - The Development Story in the Developer's Own Words|url=https://blog.fltech.dev/entry/2020/11/19/fugaku-onednn-deep-dive-en|access-date=2021-02-10|website=fltech - 富士通研究所の技術ブログ|language=ja}}</ref> for the A64FX CPU used in its [[Fugaku (supercomputer)|Fugaku]] supercomputer.
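
For a rough sense of the library being ported, the sketch below runs a single ReLU primitive through the oneDNN 3.x C++ API on the default CPU engine; it is a minimal illustrative example, not Fujitsu's ARM-specific code, and assumes a oneDNN 3.x installation.

<syntaxhighlight lang="cpp">
#include <dnnl.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Engine on the default CPU and a stream to execute primitives on it.
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream strm(eng);

    // A tiny 1x8 fp32 tensor in "nc" layout.
    std::vector<float> src = {-3, -2, -1, 0, 1, 2, 3, 4};
    std::vector<float> dst(src.size());
    dnnl::memory::desc md({1, 8}, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::nc);
    dnnl::memory src_m(md, eng, src.data());
    dnnl::memory dst_m(md, eng, dst.data());

    // Create a forward-inference ReLU primitive (oneDNN 3.x primitive_desc API).
    dnnl::eltwise_forward::primitive_desc relu_pd(
        eng, dnnl::prop_kind::forward_inference,
        dnnl::algorithm::eltwise_relu, md, md, 0.f);
    dnnl::eltwise_forward relu(relu_pd);

    // Execute the primitive and wait for completion.
    relu.execute(strm, {{DNNL_ARG_SRC, src_m}, {DNNL_ARG_DST, dst_m}});
    strm.wait();

    for (float v : dst) std::printf("%g ", v);
    std::printf("\n");
    return 0;
}
</syntaxhighlight>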
 
== Unified Acceleration Foundation (UXL) and the future for oneAPI{{anchor|UXL}} ==
 
The Unified Acceleration Foundation (UXL) is a technology consortium working on the continuation of the oneAPI initiative. Its goal is to create an open-standard accelerator software ecosystem, together with related open standards and specification projects, through working groups and special interest groups (SIGs), and thereby to compete with Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.<ref>{{Cite web |title=Exclusive: Behind the plot to break Nvidia's grip on AI by targeting software |website=[[Reuters]] |url=https://www.reuters.com/technology/behind-plot-break-nvidias-grip-ai-by-targeting-software-2024-03-25/ |access-date=2024-04-05}}</ref>
 
== References ==