Exploring the Limits of Feature Learning in Continual Learning



Date

2024-12-14

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Despite recent breakthroughs in deep learning, neural networks still struggle to learn continually in non-stationary environments, and the reasons are poorly understood. In this work, we perform an empirical study of the roles of feature learning and scale in catastrophic forgetting by applying insights from the theory of neural network scaling limits. We interpolate between lazy and rich training regimes, finding that the optimal amount of feature learning is modulated by task similarity. Surprisingly, our results consistently show that more feature learning increases catastrophic forgetting, and that scale helps only when it yields more laziness. Supported by empirical evidence on a variety of benchmarks, our work provides the first unified understanding of the role of scale across training regimes and parameterizations for continual learning.
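
As context for the lazy-to-rich interpolation mentioned in the abstract, the following is a minimal, hypothetical sketch (not the authors' code) of the standard output-rescaling trick from the lazy-training literature: a scalar alpha multiplies the centered network output, so a large alpha pushes training toward the lazy (kernel) regime with little feature learning, while a small alpha pushes it toward the rich (feature-learning) regime. Forgetting would then be probed by calling train_task on each task in sequence and re-evaluating earlier tasks; all names and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch of lazy-vs-rich interpolation via output rescaling.
# Not the authors' implementation; names and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

class ScaledOutputModel(nn.Module):
    """Output-rescaled model: f_alpha(x) = alpha * (h_theta(x) - h_theta0(x))."""

    def __init__(self, backbone: nn.Module, alpha: float):
        super().__init__()
        self.backbone = backbone
        # Frozen copy of the backbone at initialization, so the output is zero at t = 0.
        self.init_backbone = copy.deepcopy(backbone)
        for p in self.init_backbone.parameters():
            p.requires_grad_(False)
        self.alpha = alpha

    def forward(self, x):
        # Large alpha: the data can be fit with tiny weight changes (lazy/kernel regime).
        # Small alpha: weights must move substantially (rich/feature-learning regime).
        return self.alpha * (self.backbone(x) - self.init_backbone(x))


def train_task(model: ScaledOutputModel, loader, epochs: int = 1, lr: float = 1e-2):
    """Train on one task; the 1/alpha**2 loss rescaling keeps the functional
    learning rate comparable across regimes (standard lazy-training trick)."""
    opt = torch.optim.SGD(model.backbone.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y) / model.alpha ** 2
            loss.backward()
            opt.step()
```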

Publication status

published

Book title

NeurIPS 2024 Workshop on Scalable Continual Learning for Lifelong Foundation Models

Publisher

OpenReview

Event

Scalable Continual Learning for Lifelong Foundation Models Workshop @ NeurIPS 2024

Organisational unit

09462 - Hofmann, Thomas / Hofmann, Thomas

Notes

Poster presented on December 14, 2024.
