{{Orphan|date=June 2020}}
 
In the context of [[artificial neural network]]s, '''pruning''' is the practice of removing [[Parameter|parameters]] (which may entail removing individual parameters, or parameters in groups such as by [[artificial neurons|neurons]]) from an existing network.<ref>{{cite arXiv|last1=Blalock|first1=Davis|last2=Ortiz|first2=Jose Javier Gonzalez|last3=Frankle|first3=Jonathan|last4=Guttag|first4=John|date=2020-03-06|title=What is the State of Neural Network Pruning?|class=cs.LG|eprint=2003.03033}}</ref> The goal of this process is to maintain the accuracy of the network while increasing its [[efficiency]], reducing the [[Computational resource|computational resources]] required to run the network. A similar pruning process takes place in the brains of mammals during development.<ref>{{Cite journal |last1=Chechik |first1=Gal |last2=Meilijson |first2=Isaac |last3=Ruppin |first3=Eytan |date=October 1998 |title=Synaptic Pruning in Development: A Computational Account |url=https://ieeexplore.ieee.org/abstract/document/6790725 |journal=Neural Computation |volume=10 |issue=7 |pages=1759–1777 |doi=10.1162/089976698300017124 |issn=0899-7667}}</ref>
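The group-wise case can be illustrated with a minimal sketch (not taken from the cited works) that removes whole neurons from a weight matrix. It assumes NumPy, a matrix `W` whose columns correspond to neurons, and a hypothetical rule of keeping the neurons with the largest L2 norm:

```python
import numpy as np

def prune_neurons(W, keep):
    """Remove whole neurons (columns of W) with the smallest L2 norm.

    Hypothetical sketch: rows of W are inputs, columns are neurons;
    `keep` is the number of neurons that survive. Returns a smaller matrix.
    """
    norms = np.linalg.norm(W, axis=0)          # one norm per neuron (column)
    kept = np.sort(np.argsort(norms)[-keep:])  # indices of the strongest neurons
    return W[:, kept]

W = np.array([[1.0, 0.01, 2.0],
              [1.0, 0.02, 2.0]])
W_small = prune_neurons(W, keep=2)  # drops the middle, near-zero neuron
```

Because the pruned neurons are removed entirely rather than merely zeroed, the resulting matrix is genuinely smaller, which is what makes structured (neuron-level) pruning attractive for reducing computation.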
 
== Node (neuron) pruning ==
== Edge (weight) pruning ==
Most work on neural network pruning focuses on removing weights, namely, setting their values to zero.
Early work also suggested changing the values of the remaining (non-pruned) weights.<ref>{{Cite journal |last1=Chechik |first1=Gal |last2=Meilijson |first2=Isaac |last3=Ruppin |first3=Eytan |date=April 2001 |title=Effective Neuronal Learning with Ineffective Hebbian Learning Rules |url=https://ieeexplore.ieee.org/abstract/document/6789989 |journal=Neural Computation |volume=13 |issue=4 |pages=817–840 |doi=10.1162/089976601300014367 |issn=0899-7667}}</ref>
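Setting weights to zero is commonly done by magnitude: the smallest-magnitude weights are assumed to contribute least and are zeroed first. A minimal sketch of this idea (an illustration, not the method of any specific cited paper), assuming NumPy and a hypothetical `fraction` parameter giving the share of weights to remove:

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the smallest-magnitude entries of a weight array.

    Hypothetical sketch: `fraction` is the target share of weights
    to remove; ties at the threshold may prune slightly more.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * fraction)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

w = np.array([[0.1, -2.0],
              [0.5,  3.0]])
pruned = prune_by_magnitude(w, fraction=0.5)  # zeros 0.1 and 0.5
```

Unlike neuron-level pruning, the array keeps its shape; the zeros only save computation if the surrounding software exploits the resulting sparsity.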
 
== References ==
{{reflist}}