Pruning (artificial neural network)

In the context of [[artificial neural network]]s, '''pruning''' is the practice of removing parameters from an existing network, either individually or in groups such as whole [[artificial neurons|neurons]].<ref>{{cite arXiv|last1=Blalock|first1=Davis|last2=Ortiz|first2=Jose Javier Gonzalez|last3=Frankle|first3=Jonathan|last4=Guttag|first4=John|date=2020-03-06|title=What is the State of Neural Network Pruning?|class=cs.LG|eprint=2003.03033}}</ref> The goal of this process is to maintain the accuracy of the network while increasing its efficiency, reducing the computational resources required to run it.
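As a minimal illustration (a NumPy sketch with assumed shapes and function names, not code from the cited survey), the two kinds of removal can be expressed as masking individual parameters versus deleting a whole neuron together with its incoming and outgoing weights:

<syntaxhighlight lang="python">
import numpy as np

def prune_individual(weights, threshold):
    """Unstructured pruning: zero every parameter whose magnitude falls
    below the threshold; the layer keeps its original shape."""
    return weights * (np.abs(weights) >= threshold)

def prune_neuron(w_in, w_out, index):
    """Structured pruning: remove hidden neuron `index` entirely by
    deleting its incoming row in w_in (shape: hidden x inputs) and its
    outgoing column in w_out (shape: outputs x hidden)."""
    return np.delete(w_in, index, axis=0), np.delete(w_out, index, axis=1)
</syntaxhighlight>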
 
A basic algorithm for pruning is as follows:<ref>{{cite arXiv|last1=Molchanov|first1=Pavel|last2=Tyree|first2=Stephen|last3=Karras|first3=Tero|last4=Aila|first4=Timo|last5=Kautz|first5=Jan|date=2016|title=Pruning Convolutional Neural Networks for Resource Efficient Inference|eprint=1611.06440}}</ref><ref>{{cite web|url=https://jacobgil.github.io/deeplearning/pruning-deep-learning|title=Pruning deep neural networks to make them fast and small}}</ref>
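In a common form, the parameters are ranked by an importance criterion, the least important ones are removed, and the network is fine-tuned to recover the lost accuracy; the cycle repeats until a target sparsity is reached. A minimal NumPy sketch of this loop follows (the magnitude-based ranking and the <code>fine_tune</code> callback are illustrative assumptions, not the exact criterion of the cited papers):

<syntaxhighlight lang="python">
import numpy as np

def iterative_prune(weights, step=0.1, target=0.8, fine_tune=None):
    """Iteratively zero the smallest-magnitude weights until the target
    fraction of parameters has been removed, fine-tuning between steps."""
    mask = np.ones_like(weights, dtype=bool)
    fraction = 0.0
    while fraction < target:
        fraction = min(fraction + step, target)
        k = max(1, int(fraction * weights.size))
        # Rank all parameters by magnitude; the k-th smallest sets the cutoff.
        threshold = np.sort(np.abs(weights), axis=None)[k - 1]
        mask &= np.abs(weights) > threshold
        weights = weights * mask
        if fine_tune is not None:
            # Hypothetical callback: retrain the surviving weights to
            # recover accuracy before the next pruning step.
            weights = fine_tune(weights, mask)
    return weights, mask
</syntaxhighlight>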