===Structured Pruning===
Structured pruning focuses on keeping the dense structure of the network such that the pruned network can benefit from standard dense matrix multiplication operations.<br>
This is in contrast to unstructured pruning, which zeros out individual values in the weight matrix but may not necessarily run faster.
* Wen ''et al.'' (2016) <ref name="wen2016learning"></ref> propose Structured Sparsity Learning (SSL) on CNNs. Given filters of size (N, C, M, K), i.e. (out-channels, in-channels, height, width), they use a group lasso loss/regularization to penalize usage of extra input and output channels. They also learn filter shapes using this regularization.
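The group lasso idea above can be sketched as follows: treat each output filter (or each input channel) of a conv weight tensor as one group, and penalize the sum of the groups' L2 norms so that entire filters or channels are driven to zero together. This is a minimal illustrative sketch, not the SSL paper's implementation; the tensor shape, regularization strength, and function names are assumptions for the example.

```python
import numpy as np

# Hypothetical 4-D conv weight of shape (N, C, M, K) = (out, in, height, width).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 3, 3))

def group_lasso(W, axis):
    """Sum of L2 norms over the groups formed by slicing along `axis`.

    axis=0 groups by output filter; axis=1 groups by input channel.
    Unlike an elementwise L1 penalty, this drives whole groups to zero,
    so the pruned network keeps a dense (smaller) structure.
    """
    W = np.moveaxis(W, axis, 0)          # bring the group axis to the front
    return sum(np.linalg.norm(w) for w in W)

filter_penalty = group_lasso(W, axis=0)   # penalize whole output filters
channel_penalty = group_lasso(W, axis=1)  # penalize whole input channels

# Added to the task loss during training (coefficient is an assumption).
reg_loss = 1e-4 * (filter_penalty + channel_penalty)
```

After training with this regularizer, groups whose norm falls below a threshold can be removed outright, shrinking N or C and leaving standard dense convolutions.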