Generative feature-driven image replay for continual learning
METADATA ONLY
Date
2024-10
Publication Type
Journal Article
ETH Bibliography
yes
Abstract
Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. Popular incremental learning methods mitigate such forgetting by retaining a subset of previously seen samples and replaying them during training on subsequent tasks. However, this is not always possible, e.g., due to data protection regulations. In such restricted scenarios, one can employ generative models to replay either artificial images or hidden features to a classifier. In this work, we propose Genifer (GENeratIve FEature-driven image Replay), where a generative model is trained to replay images that must induce the same hidden features as real samples when they are passed through the classifier. Our technique therefore incorporates the benefits of both image and feature replay, i.e.: (1) unlike conventional image replay, our generative model explicitly learns the distribution of features that are relevant for classification; (2) in contrast to feature replay, our entire classifier remains trainable; and (3) we can leverage image-space augmentations, which increase distillation performance while also mitigating overfitting during the training of the generative model. We show that Genifer substantially outperforms the previous state of the art for various settings on the CIFAR-100 and CUB-200 datasets. The code is available at: https://github.com/kevthan/feature-driven-image-replay.
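The core idea in the abstract — optimizing generated images so that they induce the same hidden features as real samples when passed through the classifier — can be illustrated with a deliberately simplified sketch. This is a hypothetical toy example, not the authors' implementation: the "feature extractor" is a fixed linear map, and a single replay image is fitted by plain gradient descent on a feature-matching loss.

```python
# Toy sketch of feature-driven replay: optimize a generated image so the
# classifier's hidden features match those of a real sample.
# All names and the linear "feature extractor" are illustrative assumptions.

def hidden_features(image, weights):
    """Hidden features of a linear 'classifier': f = W @ image."""
    return [sum(w * x for w, x in zip(row, image)) for row in weights]

def feature_loss(gen_img, real_img, weights):
    """Mean squared distance between features of generated and real images."""
    f_gen = hidden_features(gen_img, weights)
    f_real = hidden_features(real_img, weights)
    return sum((a - b) ** 2 for a, b in zip(f_gen, f_real)) / len(f_gen)

def optimize_replay_image(real_img, weights, steps=300, lr=0.1):
    """Gradient descent on the generated pixels.

    For the linear extractor, dL/dg_i = (2/m) * sum_r W[r][i] * (f_gen - f_real)[r].
    """
    gen = [0.0] * len(real_img)
    m = len(weights)
    for _ in range(steps):
        diff = [a - b for a, b in zip(hidden_features(gen, weights),
                                      hidden_features(real_img, weights))]
        grads = [(2.0 / m) * sum(weights[r][i] * diff[r] for r in range(m))
                 for i in range(len(gen))]
        gen = [x - lr * g for x, g in zip(gen, grads)]
    return gen

W = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]  # fixed toy feature extractor
real = [1.0, -1.0, 0.5]
replayed = optimize_replay_image(real, W)
print(feature_loss(replayed, real, W))  # close to zero: features match
```

Note that the replayed image need not equal the real image pixel-wise; it only has to reproduce the features the classifier actually uses, which is the stated distinction from conventional image replay.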
Publication status
published
Journal / series
Volume
150
Pages / Article No.
105187
Publisher
Elsevier
Subject
Class-incremental learning; Generative replay; Catastrophic forgetting