Neural processing unit: Difference between revisions

Microsoft has used [[FPGA]] chips to accelerate inference.
<ref>{{cite web|title=Microsoft Extends FPGA Reach From Bing To Deep Learning|url=http://www.nextplatform.com/2015/08/27/microsoft-extends-fpga-reach-from-bing-to-deep-learning/}}</ref>
<ref>{{cite web|title=Accelerating Deep Convolutional Neural Networks Using Specialized Hardware|url=http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.pdf}}</ref>
This motivated [[Intel]] to purchase [[Altera]], aiming to integrate FPGAs into server CPUs capable of accelerating AI as well as other tasks.