SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training

Authors: Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei

Abstract: In recent years, continual learning with pre-training (CLPT) has received
widespread interest, in contrast to the traditional focus on training from scratch.
The use of strong pre-trained models (PTMs) can greatly facilitate knowledge
transfer and alleviate catastrophic forgetting, but also suffers from
progressive overfitting of pre-trained knowledge into specific downstream
tasks. Most current efforts keep the PTMs frozen and incorporate
task-specific prompts to instruct representation learning, coupled with a
prompt selection process for inference. However, due to the limited capacity of
prompt parameters, this strategy demonstrates only sub-optimal performance in
continual learning. In comparison, tuning all parameters of PTMs often provides
the greatest potential for representation learning, making sequential
fine-tuning (Seq FT) a fundamental baseline that has been overlooked in CLPT.
To this end, we present an in-depth analysis of the progressive overfitting
problem through the lens of Seq FT. Observing that overly fast
representation learning and a biased classification layer constitute this
problem, we introduce the advanced Slow Learner with Classifier
Alignment (SLCA++) framework to unleash the power of Seq FT, serving as a
strong baseline approach for CLPT. Our approach involves a Slow Learner to
selectively reduce the learning rate of backbone parameters, and a Classifier
Alignment to align the disjoint classification layers in a post-hoc fashion. We
further enhance the efficacy of the Slow Learner with a symmetric cross-entropy
loss, and employ a parameter-efficient strategy to implement Seq FT with SLCA++.
Across a variety of continual learning scenarios on image classification
benchmarks, our approach provides substantial improvements and outperforms
state-of-the-art methods by a large margin. Code:
https://github.com/GengDavid/SLCA.

Source: http://arxiv.org/abs/2408.08295v1
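
The abstract names two concrete ingredients: a Slow Learner that reduces the learning rate of the backbone relative to the classification head, and a symmetric cross-entropy loss used to enhance it. Below is a minimal PyTorch sketch of these two ideas. The torchvision ViT backbone, the learning-rate ratio, and the alpha/beta weights are illustrative assumptions rather than the paper's exact configuration; see the official code at https://github.com/GengDavid/SLCA for the released implementation.

```python
# Minimal sketch of the "Slow Learner" idea: the pre-trained backbone is
# updated with a much smaller learning rate than the classification head,
# so sequential fine-tuning does not overwrite pre-trained representations
# too quickly. Hyper-parameters below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16

model = vit_b_16(weights="IMAGENET1K_V1")
backbone_params = [p for n, p in model.named_parameters() if not n.startswith("heads")]
head_params = [p for n, p in model.named_parameters() if n.startswith("heads")]

optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},  # slow: protects pre-trained knowledge
        {"params": head_params, "lr": 1e-2},      # fast: task-specific classifier
    ],
    momentum=0.9,
)

def symmetric_cross_entropy(logits, targets, alpha=1.0, beta=1.0, num_classes=1000):
    """Standard cross-entropy plus reverse cross-entropy (symmetric CE),
    the loss the abstract uses to enhance the Slow Learner."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(targets, num_classes).float().clamp(min=1e-4)
    rce = (-pred * one_hot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce
```

Keeping two optimizer parameter groups makes the learning-rate gap explicit, and a standard scheduler can then scale both groups consistently during each task's fine-tuning.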
