Parameter Competition Balancing for Model Merging
arXiv preprint arXiv:2410.02396, 2024
While fine-tuning pretrained models has become common practice, these models often underperform outside their specific domains. Recently developed model merging techniques enable the direct integration of multiple models, each fine-tuned for a distinct task, into a single model. This strategy provides multitasking capabilities without requiring retraining on the original datasets. However, existing methods fall short in addressing potential conflicts and complex correlations between tasks, especially in parameter-level adjustments, posing a challenge in effectively balancing parameter competition across tasks. This paper introduces PCB-Merging (Parameter Competition Balancing), a lightweight, training-free technique that adjusts the coefficient of each parameter for effective model merging. PCB-Merging employs intra-balancing to gauge parameter significance within individual tasks and inter-balancing to assess parameter similarities across different tasks. Parameters with low importance scores are dropped, and the remaining ones are rescaled to form the final merged model. We evaluated our approach in diverse merging scenarios, including cross-task, cross-domain, and cross-training configurations, as well as out-of-domain generalization. The experimental results show that our approach achieves substantial performance gains across multiple modalities, domains, model sizes, numbers of tasks, fine-tuning forms, and large language models, outperforming existing model merging methods. The code is publicly available at https://github.com/duguodong7/pcb-merging.
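The abstract only outlines the mechanism, so the sketch below illustrates one plausible reading of it: task vectors are scored by a within-task (intra) term and a cross-task (inter) term, low-scoring entries are dropped, and the survivors are rescaled before merging. The function name pcb_merge, the softmax-based intra score, the sigmoid of element-wise products for the inter score, and the drop ratio are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch

def pcb_merge(pretrained, finetuned_list, drop_ratio=0.1, temperature=2.0):
    """Merge several fine-tuned checkpoints (dicts of tensors) into one state dict."""
    merged = {}
    for name, base in pretrained.items():
        # Task vectors: each task's parameter change relative to the pretrained
        # model, flattened to shape (n_tasks, n_params).
        deltas = torch.stack([(ft[name] - base).flatten() for ft in finetuned_list])

        # Intra-balancing (assumed form): importance of each parameter within its
        # own task, via a softmax over normalized magnitudes.
        mags = deltas.abs()
        intra = torch.softmax(
            temperature * mags / (mags.amax(dim=1, keepdim=True) + 1e-12), dim=1
        )

        # Inter-balancing (assumed form): agreement of each parameter with the
        # other tasks, via element-wise products against the mean task vector.
        inter = torch.sigmoid(deltas * deltas.mean(dim=0, keepdim=True))

        # Competition score; drop the lowest-scoring fraction of entries per task.
        score = intra * inter
        thresh = torch.quantile(score, drop_ratio, dim=1, keepdim=True)
        score = score * (score >= thresh).float()

        # Rescale the surviving entries so per-position weights sum to one, then merge.
        weights = score / (score.sum(dim=0, keepdim=True) + 1e-12)
        merged[name] = base + (weights * deltas).sum(dim=0).view_as(base)
    return merged
```

In practice the inputs would be state dicts (e.g., from model.state_dict()) of the pretrained backbone and each fine-tuned checkpoint, with the drop ratio and temperature treated as hyperparameters to tune per benchmark.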