In the context of [[artificial neural network]]s, '''pruning''' is the practice of removing parameters from an existing network, which may entail removing individual parameters or removing parameters in groups, such as entire [[artificial neurons|neurons]].<ref>{{Cite arXiv|last1=Blalock|first1=Davis|last2=Ortiz|first2=Jose Javier Gonzalez|last3=Frankle|first3=Jonathan|last4=Guttag|first4=John|date=2020|title=What is the State of Neural Network Pruning?|eprint=2003.03033|class=cs.LG}}</ref> The goal of this process is to maintain the accuracy of the network while increasing its efficiency, reducing the computational resources required to run it.
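For illustration, the distinction between removing individual parameters and removing parameters in groups can be sketched in NumPy as below. The 50% magnitude threshold and the use of summed row magnitudes as a grouping criterion are arbitrary assumptions for this sketch, not methods from the cited survey:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # rows = neurons, columns = incoming weights

# Individual parameters: zero out the weights with the smallest magnitudes,
# leaving the layer's shape unchanged.
threshold = np.quantile(np.abs(weights), 0.5)      # prune the smallest 50%
individual = np.where(np.abs(weights) < threshold, 0.0, weights)

# Groups of parameters: drop entire neurons (rows), keeping the rows whose
# incoming weights have the largest total magnitude.
row_scores = np.abs(weights).sum(axis=1)
keep = np.sort(np.argsort(row_scores)[2:])         # keep the 2 strongest neurons
grouped = weights[keep]

print(individual.shape, grouped.shape)  # (4, 4) (2, 4)
</syntaxhighlight>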
A basic algorithm for pruning is as follows:<ref>{{Cite arXiv|last1=Molchanov|first1=Pavlo|last2=Tyree|first2=Stephen|last3=Karras|first3=Tero|last4=Aila|first4=Timo|last5=Kautz|first5=Jan|date=2016|title=Pruning Convolutional Neural Networks for Resource Efficient Inference|eprint=1611.06440|class=cs.LG}}</ref><ref>[https://jacobgil.github.io/deeplearning/pruning-deep-learning Pruning deep neural networks to make them fast and small]</ref>
#Evaluate the importance of each neuron.
#Rank the neurons according to their importance (assuming there is a clearly defined measure for "importance").
#Remove the least important neuron.
#Check a termination condition (to be determined by the user) to see whether to continue pruning.
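The steps above can be sketched in a few lines of NumPy. The importance measure used here (mean absolute incoming weight) and the fixed target size are illustrative assumptions for this sketch, not the specific method of the cited papers:

<syntaxhighlight lang="python">
import numpy as np

def importance(weights):
    # Step 1: score each neuron (row) by the mean magnitude of its
    # incoming weights -- one simple, assumed measure of importance.
    return np.abs(weights).mean(axis=1)

def prune(weights, target_neurons):
    """Remove neurons one at a time until the layer reaches target_neurons."""
    while weights.shape[0] > target_neurons:       # Step 4: termination condition
        ranking = np.argsort(importance(weights))  # Steps 1-2: score and rank
        weights = np.delete(weights, ranking[0], axis=0)  # Step 3: remove weakest
    return weights

layer = np.random.default_rng(0).normal(size=(8, 4))
print(prune(layer, target_neurons=6).shape)  # (6, 4)
</syntaxhighlight>

In practice the pruned network is usually fine-tuned between pruning rounds to recover accuracy; this sketch omits training entirely.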