Neural Network Compression
A brief survey of neural network compression techniques.
Pruning
Sensitivity Methods
The idea here is to measure how sensitive the network's output is to each neuron: if you remove the neuron, how much does the output change? Neurons with low sensitivity can be pruned with little loss in accuracy.
- Mozer and Smolensky (1988)[1] attach a multiplicative gate to each neuron. The sensitivity can then be estimated with the derivative of the loss w.r.t. the gate.
- Karnin [2] estimates the sensitivity by monitoring the change in weight during training.
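The ablation idea behind these methods can be illustrated directly: zero out each hidden neuron in turn and measure how much the output moves. A minimal sketch, using a hypothetical hand-rolled two-layer network (the weights and masking scheme here are illustrative, not from either paper):

```python
import math

def forward(x, w1, w2, mask):
    # Hidden layer: tanh(w1 @ x), with a 0/1 mask gating each hidden neuron.
    hidden = [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) * m
              for row, m in zip(w1, mask)]
    # Linear output layer.
    return sum(w2j * hj for w2j, hj in zip(w2, hidden))

w1 = [[0.5, -0.2], [0.1, 0.9], [0.01, 0.02]]  # 3 hidden neurons, 2 inputs
w2 = [1.0, -0.8, 0.3]
x = [1.0, 2.0]

baseline = forward(x, w1, w2, [1, 1, 1])
sensitivity = []
for i in range(3):
    mask = [1, 1, 1]
    mask[i] = 0  # ablate neuron i
    sensitivity.append(abs(forward(x, w1, w2, mask) - baseline))

# The third neuron has tiny weights, so ablating it barely changes the
# output; it would be the first candidate for pruning.
```

Gradient-based methods like Mozer and Smolensky's approximate this ablation score with a single backward pass instead of one forward pass per neuron.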
Factorization
Libraries
Both TensorFlow and PyTorch have built-in pruning support: TensorFlow through the TensorFlow Model Optimization Toolkit (`tfmot.sparsity.keras`) and PyTorch through `torch.nn.utils.prune`.
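As a quick example of the PyTorch side, `torch.nn.utils.prune.l1_unstructured` zeroes out the fraction of weights with the smallest L1 magnitude (the layer shape and pruning amount below are arbitrary choices for illustration):

```python
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(16, 8)

# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# The pruned weights are masked to zero; the mask is stored on the module.
sparsity = float((layer.weight == 0).float().mean())
```

Note that pruning in PyTorch is applied via a mask and reparametrization; `prune.remove(layer, "weight")` makes the pruning permanent by baking the mask into the weight tensor.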
Resources
Surveys
- Pruning Algorithms: A Survey (1993) by Russell Reed
- A Survey of Model Compression and Acceleration for Deep Neural Networks (2017) by Cheng et al.
References
- [1] Mozer, M. C., & Smolensky, P. (1988). Skeletonization: A technique for trimming the fat from a network via relevance assessment. NeurIPS 1988.
- [2] Karnin, E. D. (1990). A simple procedure for pruning back-propagation trained neural networks. IEEE Transactions on Neural Networks, 1(2), 1990.