Neural processing unit: Difference between revisions

=== History ===
 
Other past architectures, such as the [[Cell microprocessor]], have exhibited features that significantly overlap with AI accelerators: support for packed low-precision arithmetic, a [[dataflow architecture]], and prioritising throughput over latency. One or more [[DSP]]s have also been used as high-volume neural network accelerators. The [[Physics processing unit]] was another attempt to fill the gap between [[CPU]] and GPU; however, physics simulation tends to require 32-bit precision and up, whereas much lower precision is optimal for AI.
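The precision contrast above can be made concrete with quantized arithmetic. The sketch below (illustrative only; the helper names are hypothetical, not any accelerator's API) quantizes weights and activations to signed 8-bit integers, performs an integer multiply-accumulate, and rescales, showing that the low-precision result closely tracks the full-precision dot product that underlies a neural network layer:

```python
import random

# Illustrative sketch: neural-network inference tolerates low precision,
# so values can be quantized to 8-bit integers and accumulated in wider
# integers -- the packed low-precision arithmetic AI accelerators favour,
# in contrast to the 32-bit-and-up floating point physics tends to need.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(256)]  # full-precision weights
x = [random.uniform(-1, 1) for _ in range(256)]  # full-precision activations

def quantize(vals, bits=8):
    """Map floats onto signed integers of the given width, plus a scale."""
    qmax = 2 ** (bits - 1) - 1                   # 127 for 8-bit
    scale = max(abs(v) for v in vals) / qmax
    return [round(v / scale) for v in vals], scale

wq, sw = quantize(w)
xq, sx = quantize(x)

# Integer multiply-accumulate, then one rescale back to a real number.
acc = sum(a * b for a, b in zip(wq, xq))
approx = acc * sw * sx

exact = sum(a * b for a, b in zip(w, x))
print(round(exact, 3), round(approx, 3))
```

The integer path needs only 8-bit multipliers and a wide accumulator in hardware, which is why accelerators trade float width for many more parallel multiply-accumulate units.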
 
As of 2016, vendors are pushing their own terms in the hope that their designs and [[API]]s will dominate. In the past, after [[graphics accelerator]]s emerged, the industry eventually adopted [[Nvidia]]'s self-assigned term "[[GPU]]" as the collective noun for graphics accelerators, which had settled on an overall pipeline patterned around [[Direct3D]]. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space.