Proactive Gradient Conflict Mitigation in Multi-Task Learning: A Sparse Training Perspective

Authors: Zhi Zhang, Jiayi Shen, Congfeng Cao, Gaole Dai, Shiji Zhou, Qizhe Zhang, Shanghang Zhang, Ekaterina Shutova

Abstract: Advancing towards generalist agents necessitates the concurrent processing of
multiple tasks using a unified model, thereby underscoring the growing
significance of simultaneous model training on multiple downstream tasks. A
common issue in multi-task learning is the occurrence of gradient conflict,
which leads to potential competition among different tasks during joint
training. This competition often results in improvements in one task at the
expense of deterioration in another. Although several optimization methods have
been developed to address this issue by manipulating task gradients for better
task balancing, they cannot decrease the incidence of gradient conflict. In
this paper, we systematically investigate the occurrence of gradient conflict
across different methods and propose a strategy to reduce such conflicts
through sparse training (ST), wherein only a portion of the model’s parameters
are updated during training while the rest remain unchanged. Our extensive
experiments demonstrate that ST effectively mitigates conflicting gradients and
leads to superior performance. Furthermore, ST can be easily integrated with
gradient manipulation techniques, thus enhancing their effectiveness.

Source: http://arxiv.org/abs/2411.18615v1
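
For concreteness, below is a minimal, hypothetical sketch (assuming PyTorch) of the two ideas the abstract refers to: measuring gradient conflict between two tasks via the cosine similarity of their gradients, and sparse training in which only a masked subset of parameters receives updates while the rest stay frozen. The function names (build_sparse_masks, gradient_cosine, sparse_step) and the top-k-by-magnitude mask criterion are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only (PyTorch assumed); the mask-selection heuristic below
# is an assumption for demonstration, not necessarily the paper's criterion.
import torch
import torch.nn as nn

def build_sparse_masks(model, keep_ratio=0.1):
    """Mark the top `keep_ratio` fraction of each parameter tensor (by magnitude)
    as trainable; everything else will be kept unchanged during training."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_ratio * p.numel()))
        # k-th largest absolute value serves as the keep-threshold
        threshold = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.detach().abs() >= threshold).float()
    return masks

def gradient_cosine(model, task_losses):
    """Cosine similarity between two tasks' flattened gradients.
    A negative value indicates a gradient conflict between the tasks."""
    grads = []
    for loss in task_losses:
        model.zero_grad()
        loss.backward(retain_graph=True)
        grads.append(torch.cat([p.grad.flatten().clone()
                                for p in model.parameters() if p.grad is not None]))
    return torch.nn.functional.cosine_similarity(grads[0], grads[1], dim=0)

def sparse_step(model, optimizer, task_losses, masks):
    """One joint training step: sum the task losses, then zero out the gradients
    of masked-out parameters so only the sparse subset is actually updated."""
    optimizer.zero_grad()
    total_loss = sum(task_losses)
    total_loss.backward()
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
    optimizer.step()
```

Because the masking is applied to gradients just before the optimizer step, this sketch composes naturally with gradient manipulation methods (e.g. projecting or reweighting task gradients first, then masking), which is the kind of integration the abstract mentions.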
