Other past example architectures, such as the [[Cell microprocessor]], have exhibited features that significantly overlap with AI accelerators: support for packed low-precision arithmetic, dataflow architecture, and prioritising throughput over latency. One or more [[DSP]]s have also been used as neural network accelerators. The [[Physics processing unit]] was yet another attempt to fill the gap between [[CPU]] and GPU in PC hardware; however, physics simulation tends to require 32-bit precision and up, whilst much lower precision is optimal for AI.
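The precision trade-off above can be sketched in a few lines of NumPy (an illustrative example, not from any particular accelerator's API): weights and activations are quantized from 32-bit floats to 8-bit integers, multiplied and accumulated in a wider integer type, then rescaled. Neural networks typically tolerate the small quantization error; physics simulation generally does not.

```python
import numpy as np

# Hypothetical data standing in for one neuron's weights and inputs.
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# Symmetric linear quantization of each vector to int8.
scale_w = np.abs(w).max() / 127.0
scale_x = np.abs(x).max() / 127.0
wq = np.round(w / scale_w).astype(np.int8)
xq = np.round(x / scale_x).astype(np.int8)

# Integer multiply-accumulate in int32 (as low-precision accelerator
# hardware does), then dequantize the accumulated result.
acc = np.dot(wq.astype(np.int32), xq.astype(np.int32))
approx = float(acc) * scale_w * scale_x

exact = float(np.dot(w, x))
print(abs(approx - exact))  # small absolute error, acceptable for inference
```

The int8 operands need a quarter of the memory bandwidth of float32, which is one reason accelerators pack several such values into each register lane.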
As of 2016, vendors are pushing their own terms in the hope that their designs and [[API]]s will dominate. In the past, after [[graphics accelerator]]s emerged, the industry eventually adopted [[Nvidia]]'s self-assigned term "[[GPU]]" as the collective noun for "graphics accelerators", which had settled on an overall pipeline patterned around [[Direct3D]]. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space.