They are distinct from [[GPU]]s, which are commonly used for the same role, in that they lack [[fixed function unit]]s for graphics and generally focus on lower-precision arithmetic.
=== History ===
Other past architectures, such as the [[Cell microprocessor]], have exhibited attributes that overlap significantly with AI accelerators: support for packed low-precision arithmetic, a dataflow architecture, and prioritising throughput over latency. One or more [[DSP]]s have also been used as high-volume neural network accelerators. The [[Physics processing unit]] was another attempt to fill the gap between [[CPU]] and GPU; however, physics simulation tends to require 32-bit precision and up, whereas much lower precision is optimal for AI.
As of 2016, vendors are pushing their own terms in the hope that their designs will dominate. In the past, after [[graphics accelerator]]s emerged, the industry eventually adopted [[Nvidia]]'s self-assigned term "[[GPU]]" as the collective noun for graphics accelerators. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space.