Dirichlet Pruning for Neural Network Compression


Date

2021

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

We introduce Dirichlet pruning, a novel post-processing technique that transforms a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning which places a Dirichlet distribution over the channels of each convolutional layer (or the neurons of each fully-connected layer) and estimates the parameters of the distribution over these units using variational inference. The learned distribution allows us to remove unimportant units, resulting in a compact architecture containing only the features crucial for the task at hand. The number of newly introduced Dirichlet parameters is only linear in the number of channels, which allows for rapid training: as little as one epoch suffices for convergence. We perform extensive experiments, in particular on larger architectures such as VGG and ResNet (94% and 72% compression rates, respectively), where our method achieves state-of-the-art compression performance and provides interpretable features as a by-product.
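
The following is a minimal sketch of the core mechanism described in the abstract, not the authors' implementation: a Dirichlet distribution is placed over a layer's channels, its concentration parameters are learned by gradient descent (PyTorch's Dirichlet supports reparameterized sampling via implicit reparameterization gradients), and channels with low learned mass are pruned afterwards. The class name DirichletGate, the softplus parameterization, and the pruning threshold are illustrative assumptions; the paper's full variational objective is omitted.

import torch
import torch.nn as nn

class DirichletGate(nn.Module):
    """Multiplies each channel by a weight sampled from a learned Dirichlet."""
    def __init__(self, num_channels):
        super().__init__()
        # Unconstrained parameters; softplus keeps concentrations positive.
        self.raw_alpha = nn.Parameter(torch.zeros(num_channels))
        self.num_channels = num_channels

    def concentration(self):
        return nn.functional.softplus(self.raw_alpha) + 1e-4

    def forward(self, x):  # x: (batch, channels, H, W)
        if self.training:
            # rsample uses implicit reparameterization, so gradients
            # flow back into the concentration parameters.
            w = torch.distributions.Dirichlet(self.concentration()).rsample()
        else:
            alpha = self.concentration()
            w = alpha / alpha.sum()  # use the Dirichlet mean at test time
        # Rescale so the average per-channel weight is approximately one.
        return x * (w * self.num_channels).view(1, -1, 1, 1)

    def keep_mask(self, threshold=None):
        """Boolean mask of channels to keep after training (threshold is an
        illustrative choice: half of the uniform per-channel mass)."""
        alpha = self.concentration()
        mean = alpha / alpha.sum()
        if threshold is None:
            threshold = 0.5 / self.num_channels
        return mean > threshold

# Usage: insert a gate after a convolution, fine-tune briefly, then prune.
conv = nn.Conv2d(3, 16, 3, padding=1)
gate = DirichletGate(16)
x = torch.randn(8, 3, 32, 32)
out = gate(conv(x))      # gated activations during fine-tuning
mask = gate.keep_mask()  # channels to retain in the compressed model

Because Dirichlet samples sum to one, the channels compete for a fixed budget of importance mass, which is what pushes unimportant units toward negligible weights and makes them safe to remove.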

Publication status

published

Book title

Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021)

Volume

130

Pages / Article No.

3637–3645

Publisher

PMLR

Event

24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021)
