Neural processing unit

This is an old revision of this page, as edited by Fmadd (talk | contribs) at 03:46, 17 June 2016 (History). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

As of 2016, AI accelerators are an emerging class of microprocessor designed to accelerate artificial neural networks, machine vision and other machine learning algorithms for robotics, internet of things and other data-intensive or sensor-driven tasks. They are frequently manycore designs, mirroring the massively parallel nature of biological neural networks. They are targeted at practical narrow AI applications rather than AGI research.

They are distinct from GPUs, which are commonly used for the same role, in that they lack any fixed-function units for graphics and generally focus on lower-precision arithmetic.

History

One or more DSPs have been used as neural network accelerators.[1] Other architectures, such as the Cell microprocessor (itself inspired by the PS2's pair of vector units, one for graphics and the other tied more closely to the CPU for general-purpose work), have exhibited features that significantly overlap with AI accelerators: support for packed low-precision arithmetic, dataflow architecture, and favouring throughput over latency. The physics processing unit was yet another attempt to fill the gap between CPU and GPU in PC hardware; however, physics tends to require 32-bit precision and up, whereas much lower precision can be a better tradeoff for AI.[2]
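The precision tradeoff mentioned above can be illustrated with a minimal sketch (using NumPy; the vector sizes and symmetric 8-bit quantization scheme here are illustrative assumptions, not drawn from any particular accelerator):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # hypothetical layer weights
x = rng.standard_normal(256).astype(np.float32)  # hypothetical input

# Full-precision reference result (32-bit float).
ref = float(np.dot(w, x))

# Symmetric 8-bit quantization: map [-max|v|, +max|v|] onto [-127, 127].
def quantize(v):
    scale = np.max(np.abs(v)) / 127.0
    return np.round(v / scale).astype(np.int8), scale

wq, w_scale = quantize(w)
xq, x_scale = quantize(x)

# Low-precision dot product: int8 values multiplied and accumulated in
# int32, then rescaled to float -- the pattern packed-arithmetic units use.
approx = int(np.dot(wq.astype(np.int32), xq.astype(np.int32))) * w_scale * x_scale

# The quantization error is small relative to the magnitudes of w and x.
print(abs(approx - ref))
```

Packed 8-bit operands let an accelerator fit four times as many multiplies into the same datapath width as 32-bit floats, which is the kind of saving that makes low precision attractive for inference.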

After innovative software appeared that used vertex and pixel shaders for general-purpose computation through graphics APIs (storing non-image data in vertex and image arrays),[3] vendors of graphics processing units saw the opportunity and generalised their shader pipelines with specific support for GPGPU.[4] This killed off the market for dedicated physics accelerators, superseded Cell in video game consoles,[5] and led to GPUs being used to run convolutional neural networks such as AlexNet.[6] As such, as of 2016 most AI work is done on GPUs. However, at least a factor of 10 in efficiency can still be gained with a more specific design.[7] The memory access pattern of AI calculations differs from that of graphics: a more predictable but deeper dataflow, rather than 'gather' from texture maps and 'scatter' to frame buffers.
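The predictable access pattern mentioned above can be sketched in a few lines of plain Python (purely illustrative; real accelerators implement this in fixed-function or systolic hardware):

```python
# A 1-D convolution reads its input sequentially through a sliding
# window: every address is known in advance, so hardware can stream
# and prefetch, unlike the data-dependent 'gather' of texture sampling.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # -> [-2, -2, -2]
```

Because every read in the loop is at a statically known offset, the dataflow can be scheduled entirely at design time, which is what lets specialised designs beat GPUs on efficiency for these workloads.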

As of 2016, vendors are pushing their own terms in the hope that their designs and APIs will dominate. In the past, after graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term "GPU" as the collective noun for graphics accelerators, which had settled on an overall pipeline patterned around Direct3D. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space.

Examples

  • SpiNNaker, a manycore design combining traditional ARM cores with an enhanced network fabric specialised for simulating large neural networks.
  • TrueNorth, the most unconventional example: a manycore design based on spiking neurons rather than traditional arithmetic, in which the frequency of pulses represents signal intensity. As of 2016 there is no consensus among AI researchers on whether this is the right approach,[8] but some results are promising, with large energy savings demonstrated for vision tasks.
  • Zeroth NPU, a design by Qualcomm aimed squarely at bringing speech and image recognition capabilities to mobile devices.

References

  1. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator".
  2. ^ ""Deep Learning with Limited Numerical Precision"" (PDF).
  3. ^ "how the gpu came to be used for general computation".
  4. ^ "nvidia tesla microarchitecture" (PDF).
  5. ^ "End of the line for IBM's Cell".
  6. ^ "imagenet classification with deep convolutional neural networks" (PDF).
  7. ^ "google boosts machine learning with TPU".mentions 10x efficiency
  8. ^ "yann lecun on IBM truenorth".