{{Short description|Hardware specially designed and optimized for artificial intelligence}}
{{Multiple issues|
{{Expert needed|artificial intelligence|reason=Needs attention from a current expert to incorporate modern developments in this area from the last few decades, including TPUs and better coverage of GPUs, and to clean up the other material and clarify how it relates to the subject|date=November 2021}}
{{Missing information|its scope: What is AI hardware for the purposes of this article? Event cameras are an application of neuromorphic design, but LISP machines are not an end use application. It previously mentioned [[memristor]]s, which are not specialized hardware for AI, but rather a basic electronic component, like resister, capacitor, or inductor|date=November 2021}}
{{Update|date=November 2021}}
}}
 
Specialized [[computer hardware]] is often used to execute [[artificial intelligence]] (AI) programs faster, and with less energy, such as [[Lisp machine]]s, [[neuromorphic engineering]], [[event camera]]s, and [[physical neural network]]s. Since 2017, several consumer-grade [[Central processing unit|CPU]]s and [[system on a chip|SoC]]s have included on-die [[AI accelerator|NPU]]s. As of 2023, the market for AI hardware is dominated by [[GPU]]s.<ref>{{cite news |title=Nvidia: The chip maker that became an AI superpower |url=https://www.bbc.com/news/business-65675027 |access-date=18 June 2023 |work=BBC News |date=25 May 2023}}</ref>
 
== Lisp machines ==
{{Main|Lisp machine}}
{{summarize|section|brevity=y|date=October 2021}}
[[Lisp machine]]s were developed in the late 1970s and early 1980s to make artificial intelligence programs written in the programming language [[Lisp (programming language)|Lisp]] run faster.
 
==Dataflow architecture==
{{Main|Dataflow architecture}}
[[Dataflow architecture]] processors used for AI come in varied implementations, such as the polymorphic dataflow<ref>{{Cite news |last=Maxfield |first=Max |date=24 December 2020 |title=Say Hello to Deep Vision's Polymorphic Dataflow Architecture |work=Electronic Engineering Journal |publisher=Techfocus media}}</ref> Convolution Engine<ref>{{cite web |url=https://kinara.ai/<!-- prior: https://deepvision.io/ --> |title=Kinara (formerly Deep Vision) |author=<!-- Unstated --> |date=2022 |website=Kinara |access-date=2022-12-11}}</ref> by Kinara (formerly Deep Vision), structure-driven dataflow by [[Hailo Technologies|Hailo]],<ref>{{cite web |url=https://hailo.ai/ |title=Hailo |author=<!-- Unstated --> |date=<!-- Undated --> |website=Hailo |access-date=2022-12-11}}</ref> and dataflow [[Scheduling (computing)|scheduling]] by [[Cerebras]].<ref>{{Cite report |last=Lie |first=Sean |date=29 August 2022 |url=https://www.cerebras.net/blog/cerebras-architecture-deep-dive-first-look-inside-the-hw/sw-co-design-for-deep-learning |title=Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning |website=Cerebras |archive-date=15 March 2024 |access-date=13 December 2022 |archive-url=https://web.archive.org/web/20240315033825/https://www.cerebras.net/blog/cerebras-architecture-deep-dive-first-look-inside-the-hw/sw-co-design-for-deep-learning |url-status=dead }}</ref>
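The common idea behind these designs can be illustrated with a minimal sketch of the dataflow execution model: each operation fires as soon as all of its input operands are available, rather than in program order. The graph and function names below are hypothetical illustrations, not any vendor's architecture.

```python
from collections import deque

def run_dataflow(graph, inputs):
    """Fire each node once its operands are ready, regardless of listing order.

    graph: {node_name: (fn, [input operand names], output operand name)}
    inputs: initially available operand values
    """
    values = dict(inputs)
    pending = deque(graph.items())
    while pending:
        node, (fn, args, out) = pending.popleft()
        if all(a in values for a in args):
            # All operands available: the node fires.
            values[out] = fn(*(values[a] for a in args))
        else:
            # Operands not yet produced: requeue until they arrive.
            pending.append((node, (fn, args, out)))
    return values

# A tiny multiply-accumulate, (a*b) + (c*d): the two multiplies can fire
# in any order (or in parallel on real hardware); the add fires last.
g = {
    "mul1": (lambda x, y: x * y, ["a", "b"], "p1"),
    "mul2": (lambda x, y: x * y, ["c", "d"], "p2"),
    "add":  (lambda x, y: x + y, ["p1", "p2"], "s"),
}
print(run_dataflow(g, {"a": 1, "b": 2, "c": 3, "d": 4})["s"])  # 1*2 + 3*4 = 14
```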
 
==Component hardware==
 
===AI accelerators===
{{Main|AI accelerator}}
 
Since the 2010s, advances in computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.<ref>{{cite web |last1=Research |first1=AI |date=23 October 2015 |title=Deep Neural Networks for Acoustic Modeling in Speech Recognition |url=http://airesearch.com/ai-research-papers/deep-neural-networks-for-acoustic-modeling-in-speech-recognition/ |website=AIresearch.com |access-date=23 October 2015}}</ref> By 2019, [[graphics processing unit]]s (GPUs), often with AI-specific enhancements, had displaced [[central processing unit]]s (CPUs) as the dominant means to train large-scale commercial cloud AI.<ref>{{cite news |last=Kobielus |first=James |date=27 November 2019 |url=https://www.informationweek.com/big-data/ai-machine-learning/gpus-continue-to-dominate-the-ai-accelerator-market-for-now/a/d-id/1336475 |title=GPUs Continue to Dominate the AI Accelerator Market for Now |work=InformationWeek |language=en |access-date=11 June 2020}}</ref> [[OpenAI]] estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute needed, with a doubling-time trend of 3.4 months.<ref>{{cite news |last=Ray |first=Tiernan |date=2019 |title=AI is changing the entire nature of compute |language=en |work=ZDNet |url=https://www.zdnet.com/article/ai-is-changing-the-entire-nature-of-compute/ |access-date=11 June 2020}}</ref><ref>{{cite web |date=16 May 2018 |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |access-date=11 June 2020 |website=OpenAI |language=en}}</ref>
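The two figures in OpenAI's estimate are mutually consistent, as a quick back-of-the-envelope check (illustrative arithmetic, not from the cited sources) shows: a 300,000-fold increase is about 18 doublings, and at one doubling every 3.4 months that spans roughly five years, matching the AlexNet (2012) to AlphaZero (2017) interval.

```python
import math

# Number of doublings implied by a 300,000-fold growth in compute.
doublings = math.log2(300_000)   # ~18.2 doublings

# Time that growth takes at one doubling every 3.4 months.
months = doublings * 3.4         # ~62 months
years = months / 12              # ~5.2 years, consistent with 2012-2017

print(f"{doublings:.1f} doublings over {years:.1f} years")
```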
 
== Sources ==
{{Reflist}}

[[Category:Computer hardware]]

{{comp-hardware-stub}}