Pruning (artificial neural network)

{{Short description|Trimming of artificial neural networks to reduce computational overhead}}
{{other uses|Pruning (disambiguation)}}
In the context of [[deep learning]], '''pruning''' is the practice of removing [[parameter]]s from an existing [[Neural network (machine learning)|artificial neural network]].<ref>{{cite arXiv|last1=Blalock|first1=Davis|last2=Ortiz|first2=Jose Javier Gonzalez|last3=Frankle|first3=Jonathan|last4=Guttag|first4=John|date=2020-03-06|title=What is the State of Neural Network Pruning?|class=cs.LG|eprint=2003.03033}}</ref> The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the [[computational resource]]s required to run it) whilst maintaining accuracy. This can be compared to the biological process of [[synaptic pruning]], which takes place in [[Mammal|mammalian]] brains during development<ref>{{Cite journal |last1=Chechik |first1=Gal |last2=Meilijson |first2=Isaac |last3=Ruppin |first3=Eytan |date=October 1998 |title=Synaptic Pruning in Development: A Computational Account |url=https://ieeexplore.ieee.org/document/6790725 |journal=Neural Computation |volume=10 |issue=7 |pages=1759–1777 |doi=10.1162/089976698300017124 |pmid=9744896 |s2cid=14629275 |issn=0899-7667|url-access=subscription }}</ref> (see also [[Neural Darwinism]]).
 
== Node (neuron) pruning ==
 
== Edge (weight) pruning ==
Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work also suggested changing the values of non-pruned weights.<ref>{{Cite journal |last1=Chechik |first1=Gal |last2=Meilijson |first2=Isaac |last3=Ruppin |first3=Eytan |date=April 2001 |title=Effective Neuronal Learning with Ineffective Hebbian Learning Rules |url=https://ieeexplore.ieee.org/document/6789989 |journal=Neural Computation |volume=13 |issue=4 |pages=817–840 |doi=10.1162/089976601300014367 |pmid=11255571 |s2cid=133186 |issn=0899-7667|url-access=subscription }}</ref>

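A common criterion for choosing which weights to zero is their magnitude (smallest absolute values first). The following is a minimal NumPy sketch of that idea, assuming magnitude-based selection; the function name and interface are illustrative, not taken from any particular library:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Illustrative sketch of magnitude-based weight pruning:
    weights whose absolute value falls at or below the
    sparsity-quantile threshold are set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    # keep only weights strictly above the threshold
    # (ties at the threshold are pruned as well)
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune 50% of a small weight matrix: the two entries with the
# smallest magnitudes (0.1 and 0.05) are set to zero.
w = np.array([[0.1, -2.0], [0.05, 1.5]])
pruned = magnitude_prune(w, 0.5)
```

In practice the resulting mask is usually held fixed while the surviving weights are fine-tuned, so the network can recover accuracy lost to pruning.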
== See also ==
* [[Knowledge distillation]]
* [[Neural Darwinism]]
 
== References ==
 
 
{{Deep-learning-stub}}