Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks


Date

2022

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Deep Neural Networks, particularly Convolutional Neural Networks (ConvNets), have achieved remarkable success in many vision tasks, but they usually require millions of parameters to achieve good accuracy. As the number of applications using ConvNets grows, updating hundreds of networks for multiple tasks on an embedded device can be costly in terms of memory, bandwidth, and energy. Approaches to reduce this cost include model compression and parameter-efficient models that adapt a subset of network layers for each new task. This work proposes a novel parameter-efficient kernel modulation (KM) method that adapts all parameters of a base network instead of a subset of layers. KM uses lightweight task-specialized kernel modulators that require only an additional 1.4% of the base network parameters. With multiple tasks, only the task-specialized KM weights are communicated to and stored on the end-user device. We applied this method to train ConvNets in Transfer Learning and Meta-Learning scenarios. Our results show that KM delivers up to 9% higher accuracy than other parameter-efficient methods on the Transfer Learning benchmark.
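
The abstract gives only a high-level description of the kernel modulators, so the following is a minimal PyTorch sketch of the general idea under stated assumptions: the base kernels stay frozen and shared across tasks, while a small task-specific modulator elementwise-scales every kernel weight, so all base parameters are adapted even though only the modulator is trained and shipped per task. The class name ModulatedConv2d, the per-output-channel scaling, and the parameter counting are illustrative assumptions, not the paper's exact formulation (the paper reports roughly 1.4% additional parameters; this toy factorization is even smaller).

    # Illustrative sketch only; the per-output-channel modulator is an
    # assumption, not the paper's exact kernel-modulator design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ModulatedConv2d(nn.Module):
        def __init__(self, in_ch: int, out_ch: int, k: int = 3):
            super().__init__()
            # Frozen base kernel, stored once and shared across all tasks.
            self.base = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
            self.base.weight.requires_grad_(False)
            # Lightweight task-specific modulator: one scale per output channel.
            self.modulator = nn.Parameter(torch.ones(out_ch, 1, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Every element of the base kernel is adapted via the modulator,
            # but only the modulator itself receives gradients.
            w = self.base.weight * self.modulator
            return F.conv2d(x, w, padding=self.base.padding)

    layer = ModulatedConv2d(64, 128)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable fraction: {trainable / total:.2%}")  # well under 1% here

In a multi-task deployment along the lines the abstract describes, one such modulator set would be stored per task and swapped in at load time, while the base network weights are stored on the device only once.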

Publication status

published

Book title

2022 26th International Conference on Pattern Recognition (ICPR)

Pages / Article No.

2192–2198

Publisher

IEEE

Event

26th International Conference on Pattern Recognition (ICPR 2022)

Subject

Training; Performance evaluation; Adaptation models; Transfer learning; Neural networks; Modulation; Pattern recognition

Organisational unit

02533 - Institut für Neuroinformatik / Institute of Neuroinformatics
