{{Short description|Hardware specially designed and optimized for artificial intelligence}}
{{Multiple issues|
{{Expert needed|date=November 2021}}
{{Missing information|its scope: What is AI hardware for the purposes of this article? Event cameras are an application of neuromorphic design, but LISP machines are not an end use application. It previously mentioned [[memristor]]s, which are not specialized hardware for AI, but rather basic electronic components, like resistors, capacitors, or inductors|date=November 2021}}
{{Update|date=November 2021}}
}}
Specialized [[computer hardware]] is often used to execute [[artificial intelligence]] (AI) programs faster and with less energy; examples include [[Lisp machine]]s, [[neuromorphic engineering]], [[event camera]]s, and [[physical neural network]]s. Since 2017, several consumer-grade [[Central processing unit|CPU]]s and [[system on a chip|SoC]]s have included on-die [[AI accelerator|NPU]]s. As of 2023, the market for AI hardware is dominated by [[GPU]]s.<ref>{{cite news |title=Nvidia: The chip maker that became an AI superpower |url=https://www.bbc.com/news/business-65675027 |access-date=18 June 2023 |work=BBC News |date=25 May 2023}}</ref>
== Lisp machines ==
==Dataflow architecture==
{{Main|Dataflow architecture}}
[[Dataflow architecture]] processors used for AI serve various purposes.
==Component hardware==
Since the 2010s, advances in computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.<ref>{{cite web |last1=Research |first1=AI |date=23 October 2015 |title=Deep Neural Networks for Acoustic Modeling in Speech Recognition |url=http://airesearch.com/ai-research-papers/deep-neural-networks-for-acoustic-modeling-in-speech-recognition/ |website=AIresearch.com |access-date=23 October 2015}}</ref> By 2019, [[graphics processing unit]]s (GPUs), often with AI-specific enhancements, had displaced [[central processing units]] (CPUs) as the dominant means to train large-scale commercial cloud AI.<ref>{{cite news |last=Kobielus |first=James |date=27 November 2019 |url=https://www.informationweek.com/ai-or-machine-learning/gpus-continue-to-dominate-the-ai-accelerator-market-for-now |title=GPUs Continue to Dominate the AI Accelerator Market for Now |work=InformationWeek |language=en |access-date=11 June 2020}}</ref> [[OpenAI]] estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute needed, with a doubling-time trend of 3.4 months.<ref>{{cite news |last=Tiernan |first=Ray |date=2019 |title=AI is changing the entire nature of compute |language=en |work=ZDNet |url=https://www.zdnet.com/article/ai-is-changing-the-entire-nature-of-compute/ |access-date=11 June 2020}}</ref><ref>{{cite web |date=16 May 2018 |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |access-date=11 June 2020 |website=OpenAI |language=en}}</ref>
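
As an illustrative consistency check (the intermediate numbers below are derived here, not quoted from the cited sources), the reported doubling time and overall growth factor relate roughly as

<math display=block>\log_2(300{,}000) \approx 18.2 \text{ doublings}, \qquad 18.2 \times 3.4\ \text{months} \approx 62\ \text{months} \approx 5.2\ \text{years},</math>

which is broadly consistent with the roughly five-year span between AlexNet (2012) and AlphaZero (2017).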
== Sources ==