PhD student, City University of Hong Kong
One paper at NeurIPS 2025
We propose DEAL, a continual low-rank fine-tuning framework for the efficient and privacy-preserving adaptation of large language models.