PhD student, Wuhan University
3 papers at NeurIPS 2025
This paper proposes a data-free dual orthogonal projection framework for continual model merging.
We propose a training-free, projection-based continual merging method that incrementally combines models in an efficient manner.
This study presents the first comprehensive investigation into model merging and data mixture strategies for building large language models (LLMs) aligned with the 3H principles (Helpfulness, Harmlessness, Honesty).