{{Short description|Trimming artificial neural networks to reduce computational overhead}}
{{other uses|Pruning (disambiguation)}}
In [[deep learning]], '''pruning''' is the practice of removing [[parameter]]s from an existing [[Neural network (machine learning)|artificial neural network]].<ref>{{cite arXiv|last1=Blalock|first1=Davis|last2=Ortiz|first2=Jose Javier Gonzalez|last3=Frankle|first3=Jonathan|last4=Guttag|first4=John|date=2020-03-06|title=What is the State of Neural Network Pruning?|class=cs.LG|eprint=2003.03033}}</ref> The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the [[computational resource]]s required to run it) whilst maintaining accuracy. This can be compared to the biological process of [[synaptic pruning]] which takes place in [[Mammal|mammalian]] brains during development.<ref>{{Cite journal |last1=Chechik |first1=Gal |last2=Meilijson |first2=Isaac |last3=Ruppin |first3=Eytan |date=October 1998 |title=Synaptic Pruning in Development: A Computational Account |url=https://ieeexplore.ieee.org/document/6790725 |journal=Neural Computation |volume=10 |issue=7 |pages=1759–1777 |doi=10.1162/089976698300017124 |pmid=9744896 |s2cid=14629275 |issn=0899-7667}}</ref>
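A common concrete form of this idea is magnitude-based weight pruning, in which the weights with the smallest absolute values are set to zero. The following is a minimal illustrative sketch (the function name, the list-based representation, and the `sparsity` parameter are assumptions for the example, not part of any particular library):

```python
# Hypothetical sketch of magnitude-based weight pruning: the fraction
# `sparsity` of weights with the smallest absolute values is zeroed out.
def prune_weights(weights, sparsity):
    """Zero out roughly `sparsity` of the weights with smallest magnitude.

    Ties at the threshold are also zeroed, so slightly more than the
    requested fraction may be removed.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.05, -0.8, 0.01, 1.2, -0.3, 0.07]
pruned = prune_weights(weights, sparsity=0.5)
# The three smallest-magnitude weights (0.05, 0.01, 0.07) are zeroed.
```

In practice the zeroed parameters are either stored in a sparse format or removed entirely from the network's structure, which is what reduces the computational resources required.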
== Node (neuron) pruning ==