Neural Network Compression

* Mozer and Smolensky (1988)<ref name="mozer1988skeletonization"></ref> attach a gate to each neuron; the neuron's sensitivity can then be estimated from the derivative of the loss w.r.t. the gate.
* Karnin<ref name="karnin1990simple"></ref> estimates the sensitivity of each weight by monitoring its change during training.
* LeCun ''et al.'' present ''Optimal Brain Damage''<ref name="lecun1989optimal"></ref>, which scores each weight by a saliency computed from the diagonal of the Hessian of the loss.
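The saliency-based pruning idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the diagonal of the Hessian has already been estimated (here it is simply supplied as an array), and uses the ''Optimal Brain Damage'' saliency <math>s_i = \tfrac{1}{2} h_{ii} w_i^2</math> to zero out the least important weights.

```python
import numpy as np

def obd_saliencies(weights, hessian_diag):
    # Optimal-Brain-Damage-style saliency: s_i = h_ii * w_i^2 / 2
    return 0.5 * hessian_diag * weights ** 2

def prune_lowest(weights, hessian_diag, fraction):
    # Zero out the given fraction of weights with the smallest saliency.
    s = obd_saliencies(weights, hessian_diag)
    k = int(len(weights) * fraction)
    idx = np.argsort(s)[:k]          # indices of the least-salient weights
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned

# Toy example: with a uniform Hessian diagonal, saliency reduces to
# weight magnitude, so the two smallest weights are pruned.
w = np.array([0.5, -0.1, 2.0, 0.05])
h = np.ones_like(w)                  # stand-in Hessian diagonal (assumed)
print(prune_lowest(w, h, 0.5))
```

With a non-uniform Hessian diagonal, a small weight on a high-curvature direction can survive while a larger weight is pruned, which is the point of using second-order information rather than magnitude alone.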
==Quantization==
Many works replace the standard 32-bit floating-point representation with 8-bit or 16-bit representations, reducing memory footprint and often speeding up inference.
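As a minimal sketch of one common scheme (symmetric linear quantization to 8-bit integers, not any specific paper's method), each tensor is mapped to int8 codes with a single per-tensor scale; the `quantize_int8`/`dequantize` names are illustrative:

```python
import numpy as np

def quantize_int8(x):
    # Symmetric linear quantization: map the largest magnitude to 127.
    # (Assumes x is not all zeros.)
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.8, 0.33, 1.5], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q)                           # int8 codes, 4x smaller than float32
print(np.max(np.abs(w - w_hat)))   # error bounded by scale / 2
```

The storage drops from 32 to 8 bits per weight, at the cost of a rounding error of at most half a quantization step per value.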


==Factorization==