Learning a Low-Rank Feature Representation: Achieving Better Trade-Off Between Stability and Plasticity in Continual Learning
- DOI:
- 10.60864/vr41-6r10
- Submitted by:
- Zhenrong Liu
- Last updated:
- 6 June 2024 - 10:27am
- Document Type:
- Presentation Slides
- Document Year:
- 2024
- Presenters:
- Yang Li
- Paper Code:
- MLSP-L1.1
In continual learning, networks confront a trade-off between stability and plasticity when trained on a sequence of tasks. To bolster plasticity without sacrificing stability, we propose a novel training algorithm called LRFR. This approach optimizes network parameters in the null space of the past tasks’ feature representation matrix to guarantee stability. Concurrently, we judiciously select only a subset of neurons in each layer while training individual tasks, so that the past tasks’ feature representation matrix is learned with a low rank. This enlarges the null space available when designing network parameters for subsequent tasks, thereby enhancing plasticity. Using CIFAR-100 and TinyImageNet as benchmark datasets for continual learning, the proposed approach consistently outperforms state-of-the-art methods.
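The core mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the assumed null-space idea: given a matrix `A` whose rows are past tasks' feature vectors, a candidate parameter update is projected onto the null space of `A`, so applying the projected update leaves the network's responses to past-task features unchanged. A lower-rank `A` leaves a higher-dimensional null space, which is the plasticity gain the abstract refers to.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    """Return the orthogonal projector onto the null space of A.

    Rows of A are (hypothetically) past tasks' feature vectors; any
    update dW with A @ dW == 0 does not change outputs on those features.
    """
    # SVD: A = U @ diag(S) @ Vt. Rows of Vt beyond rank(A) span null(A).
    _, S, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(S > tol))
    V_null = Vt[rank:].T          # orthonormal basis for null(A)
    return V_null @ V_null.T      # projector P = V_null V_null^T

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))   # rank-3 feature matrix in an 8-dim space
P = null_space_projector(A)

dW = rng.standard_normal(8)       # candidate gradient step
dW_proj = P @ dW                  # projected step lies in null(A)

# Past-task responses are preserved: A @ dW_proj is (numerically) zero.
assert np.allclose(A @ dW_proj, 0.0, atol=1e-8)
```

The trade-off is visible in the dimensions: with feature dimension 8 and a rank-3 representation, the null space has dimension 5; had the representation been full rank (8), no update direction would remain, so keeping the feature representation low-rank directly buys plasticity.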