Local vs Global continual learning
OPEN ACCESS
Date
2025
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Continual learning is the problem of integrating new information into a model while retaining the knowledge acquired in the past. Despite the tangible improvements achieved in recent years, the problem of continual learning is still an open one. A better understanding of the mechanisms behind the successes and failures of existing continual learning algorithms can unlock the development of new successful strategies. In this work, we view continual learning from the perspective of the multi-task loss approximation, and we compare two alternative strategies, namely local and global approximations. We classify existing continual learning algorithms based on the approximation used, and we assess the practical effects of this distinction in common continual learning settings. Additionally, we study optimal continual learning objectives in the case of local polynomial approximations and we provide examples of existing algorithms implementing the optimal objective.
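To make the abstract's notion of a local polynomial approximation concrete, here is a minimal NumPy sketch (not taken from the paper; all names are hypothetical) of a second-order Taylor expansion of an old-task loss around its optimum. Quadratic-penalty methods such as EWC can be read as instances of this local-approximation strategy:

```python
import numpy as np

# Toy old-task loss: a quadratic bowl L(w) = ||w - w_star||^2 (hypothetical).
w_star = np.array([1.0, -2.0])

def old_task_loss(w):
    return float(np.sum((w - w_star) ** 2))

def grad(w):
    # Gradient of the quadratic loss.
    return 2.0 * (w - w_star)

def hessian_diag(w):
    # For this loss the Hessian is 2*I, so a diagonal representation suffices.
    return np.full_like(w, 2.0)

# Local second-order approximation of the old-task loss around w0:
#   L(w) ~= L(w0) + g(w0).(w - w0) + 0.5 * (w - w0)^T H(w0) (w - w0)
w0 = w_star.copy()  # expand at the old-task optimum, where the gradient vanishes

def local_approx(w):
    d = w - w0
    return old_task_loss(w0) + grad(w0) @ d + 0.5 * float(np.sum(hessian_diag(w0) * d * d))

# A new-task update would then minimize (new-task loss + local_approx) instead of
# revisiting old data. Near w0 the surrogate tracks the true old-task loss
# (exactly here, since the toy loss is itself quadratic).
w_new = w0 + np.array([0.1, -0.1])
print(old_task_loss(w_new), local_approx(w_new))
```

Because the surrogate depends only on quantities stored at the end of the previous task (`w0`, gradient, Hessian), it is "local" in the sense the abstract contrasts with global approximations of the multi-task loss.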
Publication status
published
Book title
Proceedings of the 3rd Conference on Lifelong Learning Agents
Volume
274
Pages / Article No.
121–143
Publisher
PMLR
Event
3rd Conference on Lifelong Learning Agents (CoLLAs 2024)
Organisational unit
09462 - Hofmann, Thomas
02219 - ETH AI Center
Related publications and datasets
Is new version of: https://arxiv.org/abs/2407.16611