Neural processing unit

Vendors of graphics processing units saw the opportunity and generalised their pipelines with specific support for [[GPGPU]]
<ref>{{cite web|title=nvidia tesla microarchitecture|url=http://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/Arch/gpu.pdf}}</ref>
(which killed off the market for a dedicated physics accelerator, and superseded Cell in [[video game console]]s
<ref>{{cite web|title=End of the line for IBM’s Cell|url=http://arstechnica.com/gadgets/2009/11/end-of-the-line-for-ibms-cell/}}</ref>
), and led to their use in running [[convolutional neural network]]s such as [[AlexNet]]. As a result, as of 2016 most AI work is done on GPUs. However, an efficiency gain of at least a factor of 10<ref>{{cite web|title=google boosts machine learning with TPU|url=http://techreport.com/news/30155/google-boosts-machine-learning-with-its-tensor-processing-unit}} mentions 10x efficiency</ref>
can still be achieved with a more specialised design. The [[memory access pattern]] of AI calculations differs from that of graphics workloads, with a more predictable but deeper [[dataflow]], rather than 'gather' from texture maps and 'scatter' to frame buffers.
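The contrast can be sketched in a few lines of Python with NumPy. This is a minimal illustration under assumed array names and sizes, not a description of any particular accelerator: a texture 'gather' reads at coordinates that are only known at run time, whereas a convolution visits its input in a fixed sliding-window order whose addresses are known in advance.

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of the two access patterns (array names and sizes are
# illustrative assumptions, not drawn from any particular hardware).

rng = np.random.default_rng(0)

# Graphics-style 'gather': texels are read at data-dependent coordinates,
# so the addresses are irregular and hard to prefetch.
texture = rng.random((256, 256), dtype=np.float32)
u = rng.integers(0, 256, size=1024)          # lookup coordinates computed at run time
v = rng.integers(0, 256, size=1024)
gathered = texture[u, v]                     # scattered reads across the texture

# AI-style dataflow: a convolution sweeps a fixed window over the input,
# so every address is known in advance and the data can be streamed.
image = rng.random((256, 256), dtype=np.float32)
kernel = rng.random((3, 3), dtype=np.float32)
out = np.zeros((254, 254), dtype=np.float32)
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)  # predictable 3x3 window
</syntaxhighlight>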