Neural processing unit: Difference between revisions

As of 2016, AI accelerators are an emerging class of [[microprocessor]] designed to accelerate [[artificial neural networks]], [[machine vision]] and other [[machine learning]] algorithms for robotics, the [[internet of things]] and other data-intensive or sensor-driven tasks. They are frequently [[manycore]] designs, mirroring the massively parallel nature of biological neural networks.
 
They are distinct from [[GPU]]s, which are commonly used for the same role, in that they lack [[fixed function unit]]s for graphics and generally focus on low-precision arithmetic.
Past architectures such as the [[Cell microprocessor]] have exhibited attributes that overlap significantly with AI accelerators (low-precision arithmetic, [[dataflow architecture]], prioritising throughput over latency). The [[PPU]] was another attempt to fill the gap between the [[CPU]] and GPU; however, physics processing tends to require 32-bit precision and up, whilst much lower precision is optimal for AI.
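The trade-off described above can be illustrated with a small sketch. The snippet below (an illustrative example, not code from any particular accelerator) approximates a 32-bit floating-point dot product using 8-bit integers with per-tensor scale factors, a simple quantization scheme of the kind low-precision AI hardware exploits; the scale factors and helper function are assumptions for the demonstration.

```python
# Sketch: why low-precision arithmetic suffices for AI workloads.
# A float32 dot product is approximated with int8 operands, accumulating
# in int32 and rescaling at the end, as int8 hardware typically does.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 under a given per-tensor scale factor."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
a = rng.standard_normal(256).astype(np.float32)
b = rng.standard_normal(256).astype(np.float32)

# Per-tensor scales chosen so the largest magnitude maps to 127.
sa = np.abs(a).max() / 127
sb = np.abs(b).max() / 127

qa, qb = quantize(a, sa), quantize(b, sb)

# int32 accumulation avoids overflow; one float rescale recovers the result.
dot_int8 = int(qa.astype(np.int32) @ qb.astype(np.int32)) * sa * sb
dot_fp32 = float(a @ b)

print(f"fp32 dot product  : {dot_fp32:.4f}")
print(f"int8 approximation: {dot_int8:.4f}")
```

The approximation stays close to the full-precision result while each multiply-accumulate needs only 8-bit operands, which is why accelerators can pack many more such units into the same silicon area than 32-bit floating-point hardware allows.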
 
As of 2016, vendors are pushing their own terms (in the hope that their designs will dominate, as happened with the world's adoption of Nvidia's term [[GPU]] for "graphics accelerators"), and there is no consensus on the boundary between these devices; however, several examples clearly aim to fill this new space.