Neural processing unit

 
All models of Intel [[Meteor Lake]] processors include a built-in ''Versatile Processor Unit'' (''VPU'') for accelerating [[statistical inference|inference]] in computer vision and deep learning workloads.<ref>{{Cite web|url=https://www.pcmag.com/news/intel-to-bring-a-vpu-processor-unit-to-14th-gen-meteor-lake-chips|title=Intel to Bring a 'VPU' Processor Unit to 14th Gen Meteor Lake Chips|website=PCMAG|date=August 2022 }}</ref>
 
== Deep learning processors (DLPs) ==
 
Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016, three sessions (15% of the accepted papers) focused on architecture designs for deep learning. Such efforts include Eyeriss (MIT),<ref name=":5">{{Cite journal|last1=Chen|first1=Yu-Hsin|last2=Emer|first2=Joel|last3=Sze|first3=Vivienne|author3-link=Vivienne Sze|date=2017|title=Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks|journal=IEEE Micro|pages=1|doi=10.1109/mm.2017.265085944|issn=0272-1732|hdl=1721.1/102369|hdl-access=free}}</ref> EIE (Stanford),<ref name=":6">{{Cite book|last1=Han|first1=Song|title=EIE: Efficient Inference Engine on Compressed Deep Neural Network| last2=Liu|first2=Xingyu|last3=Mao|first3=Huizi|last4=Pu|first4=Jing|last5=Pedram|first5=Ardavan|last6=Horowitz|first6=Mark A.|last7=Dally|first7=William J.|date=2016-02-03|oclc=1106232247}}</ref> Minerva (Harvard),<ref>{{Cite book|last1=Reagen|first1=Brandon|last2=Whatmough|first2=Paul|last3=Adolf|first3=Robert|last4=Rama|first4=Saketh|last5=Lee|first5=Hyunkwang|last6=Lee|first6=Sae Kyu|last7=Hernandez-Lobato|first7=Jose Miguel|last8=Wei|first8=Gu-Yeon|last9=Brooks|first9=David|title=2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) |chapter=Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators |date=June 2016|___location=Seoul|publisher=IEEE|pages=267–278|doi=10.1109/ISCA.2016.32|isbn=978-1-4673-8947-1}}</ref> and Stripes (University of Toronto)<ref>{{Cite journal|last1=Judd|first1=Patrick|last2=Albericio|first2=Jorge|last3=Moshovos|first3=Andreas|date=2017-01-01|title=Stripes: Bit-Serial Deep Neural Network Computing|journal=IEEE Computer Architecture Letters|volume=16|issue=1|pages=80–83|doi=10.1109/lca.2016.2597140|s2cid=3784424|issn=1556-6056}}</ref> in academia, as well as the TPU (Google)<ref name=":0">{{cite book| title=In-Datacenter Performance Analysis of a Tensor Processing Unit| author1=Jouppi, N.| author2=Young, C.| author3=Patil, N.| author4=Patterson, D.| publisher=[[Association for Computing Machinery]]| pages=1–12| date=24 June 2017| doi=10.1145/3079856.3080246| s2cid=4202768| doi-access=free| isbn=9781450348928}}</ref> and the MLU ([[Cambricon]])<ref>{{Cite web|title=MLU 100 intelligence accelerator card|url=https://www.cambricon.com/index.php?m=content&c=index&a=lists&catid=21| publisher=Cambricon| language=Japanese| date=2024| access-date=8 January 2024}}</ref> in industry. Several representative works are listed in Table 1.
 
{| class="wikitable"
! colspan="8" |Table 1. Typical DLPs
|-
!Year
!DLPs
!Institution
!Type
!Computation
!Memory Hierarchy
!Control
!Peak Performance
|-
| rowspan="2" |2014
|DianNao<ref name=":1" />
|ICT, CAS
|digital
|vector [[Multiply–accumulate operation|MACs]]
|scratchpad
|[[very long instruction word|VLIW]]
|452 Gops (16-bit)
|-
|DaDianNao<ref name=":2" />
|ICT, CAS
|digital
|vector MACs
|scratchpad
|VLIW
|5.58 Tops (16-bit)
|-
| rowspan="2" |2015
|ShiDianNao<ref name=":3" />
|ICT, CAS
|digital
|scalar MACs
|scratchpad
|VLIW
|194 Gops (16-bit)
|-
|PuDianNao<ref name=":4" />
|ICT, CAS
|digital
|vector MACs
|scratchpad
|VLIW
|1,056 Gops (16-bit)
|-
| rowspan="5" |2016
|DnnWeaver
|Georgia Tech
|digital
|vector MACs
|scratchpad
| -
| -
|-
|EIE<ref name=":6" />
|Stanford
|digital
|scalar MACs
|scratchpad
| -
|102 Gops (16-bit)
|-
|Eyeriss<ref name=":5" />
|MIT
|digital
|scalar MACs
|scratchpad
| -
|67.2 Gops (16-bit)
|-
|Prime<ref name=":7">{{Cite book|last1=Chi|first1=Ping|last2=Li|first2=Shuangchen|last3=Xu|first3=Cong|last4=Zhang|first4=Tao|last5=Zhao|first5=Jishen|last6=Liu|first6=Yongpan|last7=Wang|first7=Yu|last8=Xie|first8=Yuan|title=2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) |chapter=PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory |date=June 2016|pages=27–39|publisher=IEEE|doi=10.1109/isca.2016.13|isbn=978-1-4673-8947-1}}</ref>
|UCSB
|hybrid
|[[In-memory processing|Process-in-Memory]]
|ReRAM
| -
| -
|-
|Orlando<ref>{{Cite book |last1=Desoli |first1=Giuseppe |last2=Chawla |first2=Nitin |last3=Boesch |first3=Thomas |last4=Singh |first4=Surinder-pal |last5=Guidetti |first5=Elio |last6=De Ambroggi |first6=Fabio |last7=Majo |first7=Tommaso |last8=Zambotti |first8=Paolo |last9=Ayodhyawasi |first9=Manuj |last10=Singh |first10=Harvinder |last11=Aggarwal |first11=Nalin |chapter=14.1 a 2.9TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems |date=2017-02-05 |title=2017 IEEE International Solid-State Circuits Conference (ISSCC) |chapter-url=https://ieeexplore.ieee.org/document/7870349 |publisher=IEEE |pages=238–239 |doi=10.1109/ISSCC.2017.7870349 |isbn=978-1-5090-3758-2 |via=IEEEXplore}}</ref>
|STMicroelectronics
|digital
|Convolution accelerator + DSP
|scratchpad
|RISC
| 676 Gops (16-bit)
|-
| rowspan="4" |2017
|TPU<ref name=":0" />
|Google
|digital
|scalar MACs
|scratchpad
|[[complex instruction set computer|CISC]]
|92 Tops (8-bit)
|-
|PipeLayer<ref name=":8" />
|U of Pittsburgh
|hybrid
|Process-in-Memory
|ReRAM
| -
| -
|-
|FlexFlow
|ICT, CAS
|digital
|scalar MACs
|scratchpad
| -
|420 Gops
|-
|DNPU<ref>{{Cite book |chapter-url=https://ieeexplore.ieee.org/document/7870350 |access-date=2023-08-24 |doi=10.1109/ISSCC.2017.7870350 |s2cid=206998709 |chapter=14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks |title=2017 IEEE International Solid-State Circuits Conference (ISSCC) |date=2017 |last1=Shin |first1=Dongjoo |last2=Lee |first2=Jinmook |last3=Lee |first3=Jinsu |last4=Yoo |first4=Hoi-Jun |pages=240–241 |isbn=978-1-5090-3758-2 }}</ref>
|KAIST
|digital
|scalar MACs
|scratchpad
| -
|300 Gops (16-bit)
1,200 Gops (4-bit)
|-
| rowspan="3" |2018
|MAERI
|Georgia Tech
|digital
|scalar MACs
|scratchpad
| -
| -
|-
|PermDNN
|City University of New York
|digital
|vector MACs
|scratchpad
| -
|614.4 Gops (16-bit)
|-
|UNPU<ref>{{Cite book |chapter-url=https://ieeexplore.ieee.org/document/8310262 |access-date=2023-11-30 |doi=10.1109/ISSCC.2018.8310262 |s2cid=3861747 |chapter=UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision |title=2018 IEEE International Solid - State Circuits Conference - (ISSCC) |date=2018 |last1=Lee |first1=Jinmook |last2=Kim |first2=Changhyeon |last3=Kang |first3=Sanghoon |last4=Shin |first4=Dongjoo |last5=Kim |first5=Sangyeob |last6=Yoo |first6=Hoi-Jun |pages=218–220 |isbn=978-1-5090-4940-0 }}</ref>
|KAIST
|digital
|scalar MACs
|scratchpad
| -
|345.6 Gops (16-bit)
691.2 Gops (8-bit)
1,382 Gops (4-bit)
7,372 Gops (1-bit)
|-
| rowspan="2" |2019
|FPSA
|Tsinghua
|hybrid
|Process-in-Memory
|ReRAM
| -
| -
|-
|Cambricon-F
|ICT, CAS
|digital
|vector MACs
|scratchpad
|FISA
|14.9 Tops (F1, 16-bit)
 
956 Tops (F100, 16-bit)
|}
 
=== Digital DLPs ===
 
The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computation flows.
 
Regarding the computation component, since most operations in deep learning can be aggregated into vector operations, the most common way to build the computation component in digital DLPs is a [[Multiply–accumulate operation|MAC]]-based (multiply–accumulate) organization, using either vector MACs<ref name=":1" /><ref name=":2" /><ref name=":4" /> or scalar MACs.<ref name=":0" /><ref name=":3" /><ref name=":5" /> Compared with [[Single instruction, multiple data|SIMD]] or [[Single instruction, multiple threads|SIMT]] in general-purpose processors, these MAC-based organizations exploit the ___domain-specific parallelism of deep learning more effectively. Regarding the memory hierarchy, as deep learning algorithms require high bandwidth to keep the computation component supplied with data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) combined with dedicated on-chip data-reuse and data-exchange strategies to alleviate the pressure on memory bandwidth. For example, DianNao's 16 16-input vector MAC units consume 16 × 16 × 2 = 512 16-bit operands per cycle, which corresponds to a bandwidth requirement of almost 1024&nbsp;GB/s between the computation component and the buffers; with on-chip reuse, this requirement is reduced drastically.<ref name=":1" /> Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, which offers greater data-reuse opportunities by exploiting the relatively regular data-access patterns of deep learning algorithms. Regarding the control logic, as deep learning algorithms continue to evolve rapidly, DLPs have begun to adopt dedicated ISAs (instruction set architectures) to support the deep learning ___domain flexibly. DianNao initially used a VLIW-style instruction set in which each instruction could complete a layer of a DNN. Cambricon<ref>{{Cite book|last1=Liu|first1=Shaoli|last2=Du|first2=Zidong|last3=Tao|first3=Jinhua|last4=Han|first4=Dong|last5=Luo|first5=Tao|last6=Xie|first6=Yuan|last7=Chen|first7=Yunji|last8=Chen|first8=Tianshi|title=2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) |chapter=Cambricon: An Instruction Set Architecture for Neural Networks |date=June 2016|pages=393–405|publisher=IEEE|doi=10.1109/isca.2016.42|isbn=978-1-4673-8947-1}}</ref> introduced the first deep learning ___domain-specific ISA, which can support more than ten different deep learning algorithms, and the TPU paper likewise discloses five key instructions of its CISC-style ISA.
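
The following minimal Python sketch reproduces the back-of-the-envelope bandwidth arithmetic above; the clock frequency is an assumption used only to convert per-cycle operand traffic into a GB/s figure, not a value taken from the DianNao paper.

<syntaxhighlight lang="python">
# Operand traffic implied by a DianNao-style datapath with 16 vector MAC
# units, each consuming a 16-element input vector and 16 weights per cycle.
NUM_MACS = 16              # vector MAC units
INPUTS_PER_MAC = 16        # elements per input vector
OPERANDS_PER_PRODUCT = 2   # one activation and one weight per multiplication
BYTES_PER_OPERAND = 2      # 16-bit fixed-point operands
CLOCK_HZ = 1e9             # assumed ~1 GHz clock, for illustration only

operands_per_cycle = NUM_MACS * INPUTS_PER_MAC * OPERANDS_PER_PRODUCT  # 512
bytes_per_cycle = operands_per_cycle * BYTES_PER_OPERAND               # 1024 bytes
bandwidth_gb_per_s = bytes_per_cycle * CLOCK_HZ / 1e9                  # ~1024 GB/s

print(f"{operands_per_cycle} operands/cycle, "
      f"{bandwidth_gb_per_s:.0f} GB/s without on-chip reuse")
</syntaxhighlight>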
 
=== Hybrid DLPs ===
 
Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) Moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue.<ref name=":8">{{Cite book|last1=Song|first1=Linghao|last2=Qian|first2=Xuehai|last3=Li|first3=Hai|author3-link=Hai Li|last4=Chen|first4=Yiran|title=2017 IEEE International Symposium on High Performance Computer Architecture (HPCA) |chapter=PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning |date=February 2017|pages=541–552|publisher=IEEE|doi=10.1109/hpca.2017.55|isbn=978-1-5090-4985-1|s2cid=15281419}}</ref><ref name=":9">{{Cite journal|last1=Ambrogio|first1=Stefano|last2=Narayanan|first2=Pritish|last3=Tsai|first3=Hsinyu|last4=Shelby|first4=Robert M.|last5=Boybat|first5=Irem|last6=di Nolfo|first6=Carmelo|last7=Sidler|first7=Severin|last8=Giordano|first8=Massimo|last9=Bodini|first9=Martina|last10=Farinha|first10=Nathan C. P.|last11=Killeen|first11=Benjamin|date=June 2018|title=Equivalent-accuracy accelerated neural-network training using analogue memory|journal=Nature|volume=558|issue=7708|pages=60–67|doi=10.1038/s41586-018-0180-5|pmid=29875487|bibcode=2018Natur.558...60A |s2cid=46956938|issn=0028-0836}}</ref><ref>{{Cite book|last1=Chen|first1=Wei-Hao|last2=Lin|first2=Wen-Jang|last3=Lai|first3=Li-Ya|last4=Li|first4=Shuangchen|last5=Hsu|first5=Chien-Hua|last6=Lin|first6=Huan-Ting|last7=Lee|first7=Heng-Yuan|last8=Su|first8=Jian-Wei|last9=Xie|first9=Yuan|last10=Sheu|first10=Shyh-Shyuan|last11=Chang|first11=Meng-Fan|title=2017 IEEE International Electron Devices Meeting (IEDM) |chapter=A 16Mb dual-mode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme |date=December 2017|pages=28.2.1–28.2.4|publisher=IEEE|doi=10.1109/iedm.2017.8268468|isbn=978-1-5386-3559-9|s2cid=19556846}}</ref> Such architectures significantly shorten data paths and exploit much higher internal bandwidth, resulting in attractive performance improvements. 2) Building highly efficient DNN engines by using memory devices themselves for computation. In 2013, HP Labs demonstrated the capability of ReRAM crossbar structures for computing.<ref>{{Cite journal|last1=Yang|first1=J. Joshua|last2=Strukov|first2=Dmitri B.|last3=Stewart|first3=Duncan R.|date=January 2013|title=Memristive devices for computing|url=https://www.nature.com/articles/nnano.2012.240|journal=Nature Nanotechnology|language=en|volume=8|issue=1|pages=13–24|doi=10.1038/nnano.2012.240|pmid=23269430|bibcode=2013NatNa...8...13Y |issn=1748-3395}}</ref> Inspired by this work, a large body of research has explored new architectures and system designs based on ReRAM,<ref name=":7" /><ref>{{Cite journal|last1=Shafiee|first1=Ali|last2=Nag|first2=Anirban|last3=Muralimanohar|first3=Naveen|last4=Balasubramonian|first4=Rajeev|last5=Strachan|first5=John Paul|last6=Hu|first6=Miao|last7=Williams|first7=R. Stanley|last8=Srikumar|first8=Vivek|date=2016-10-12|title=ISAAC|journal=ACM SIGARCH Computer Architecture News|volume=44|issue=3|pages=14–26|doi=10.1145/3007787.3001139|s2cid=6329628|issn=0163-5964}}</ref><ref>{{Cite book|last=Ji, Yu Zhang, Youyang Xie, Xinfeng Li, Shuangchen Wang, Peiqi Hu, Xing Zhang, Youhui Xie, Yuan|title=FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture|date=2019-01-27|oclc=1106329050}}</ref><ref name=":8" /> phase-change memory,<ref name=":9" /><ref>{{Cite book|last1=Nandakumar|first1=S. R.|last2=Boybat|first2=Irem|last3=Joshi|first3=Vinay|last4=Piveteau|first4=Christophe|last5=Le Gallo|first5=Manuel|last6=Rajendran|first6=Bipin|last7=Sebastian|first7=Abu|last8=Eleftheriou|first8=Evangelos|title=2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS) |chapter=Phase-Change Memory Models for Deep Learning Training and Inference |date=November 2019|pages=727–730|publisher=IEEE|doi=10.1109/icecs46596.2019.8964852|isbn=978-1-7281-0996-1|s2cid=210930121}}</ref><ref>{{Cite journal|last1=Joshi|first1=Vinay|last2=Le Gallo|first2=Manuel|last3=Haefeli|first3=Simon|last4=Boybat|first4=Irem|last5=Nandakumar|first5=S. R.|last6=Piveteau|first6=Christophe|last7=Dazzi|first7=Martino|last8=Rajendran|first8=Bipin|last9=Sebastian|first9=Abu|last10=Eleftheriou|first10=Evangelos|date=2020-05-18|title=Accurate deep neural network inference using computational phase-change memory|journal=Nature Communications|volume=11|issue=1|page=2473|doi=10.1038/s41467-020-16108-9|arxiv=1906.03138|pmid=32424184|pmc=7235046|bibcode=2020NatCo..11.2473J |issn=2041-1723|doi-access=free}}</ref> and other emerging memory technologies.
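
As a rough functional model only (not the implementation of any particular accelerator above), the following Python sketch illustrates the ReRAM-crossbar computing idea: weights are stored as cell conductances, inputs are applied as read voltages, and each bit line accumulates the resulting currents, so a full matrix–vector product is obtained in a single analog step. The array size and value ranges are arbitrary assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# The conductance matrix G plays the role of the weight matrix (values in
# siemens), and input activations are encoded as word-line read voltages V.
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # 128 word lines x 64 bit lines
V = rng.uniform(0.0, 0.2, size=128)          # read voltages on the word lines

# By Ohm's and Kirchhoff's laws, each bit line sums the currents of its cells:
# I = G^T @ V, so all 64 dot products are produced concurrently "in memory",
# whereas a digital DLP would issue 128 x 64 multiply-accumulate operations.
I = G.T @ V
print(I.shape)  # (64,)
</syntaxhighlight>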
 
== Benchmarks ==