Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities

Authors: Enneng Yang, Li Shen, Guibing Guo, Xingwei Wang, Xiaochun Cao, Jie Zhang, Dacheng Tao

Abstract: Model merging is an efficient technique for enhancing models in the machine learning community that requires neither the collection of raw training data nor expensive computation. As model merging becomes increasingly prevalent across various fields, a comprehensive understanding of the available techniques is crucial. However, the literature lacks a systematic and thorough review of these techniques. This survey provides a comprehensive overview of model merging methods and theories, their applications in various domains and settings, and future research directions. Specifically, we first propose a new taxonomy that exhaustively covers existing model merging methods. Second, we discuss the application of model merging techniques in large language models, multimodal large language models, and more than ten machine learning subfields, including continual learning, multi-task learning, and few-shot learning. Finally, we highlight the remaining challenges of model merging and discuss future research directions. A comprehensive list of papers on model merging is available at https://github.com/EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications.

Source: http://arxiv.org/abs/2408.07666v1
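
To make the abstract's core idea concrete, here is a minimal sketch of the simplest model merging variant: uniform weight averaging of models fine-tuned from the same base, in the spirit of "model soups". This is an illustration only, not the survey's taxonomy or any specific method it covers; the function name `uniform_merge` and the usage example are hypothetical, and the sketch assumes PyTorch models sharing one architecture.

```python
import copy
from typing import List

import torch
import torch.nn as nn


def uniform_merge(models: List[nn.Module]) -> nn.Module:
    """Return a new model whose parameters are the element-wise mean of the
    input models' parameters. All models must share the same architecture,
    e.g., fine-tuned copies of one pretrained backbone."""
    assert models, "need at least one model to merge"
    merged = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]

    avg_state = {}
    for key, ref in state_dicts[0].items():
        if ref.is_floating_point():
            # Average learnable weights and floating-point buffers across models.
            avg_state[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        else:
            # Keep integer buffers (e.g., BatchNorm step counters) from the first model.
            avg_state[key] = ref.clone()

    merged.load_state_dict(avg_state)
    return merged


# Hypothetical usage: merge three fine-tuned copies of the same backbone.
# merged = uniform_merge([model_a, model_b, model_c])
```

Note that this touches only model weights: no raw training data is read and no gradient computation is performed, which is exactly the efficiency property the abstract highlights. The methods surveyed in the paper refine this basic recipe in many directions (e.g., weighted or task-vector-based merging).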
