PhD student, Massachusetts Institute of Technology
1 paper at NeurIPS 2025
We present findings on how to adapt large language models efficiently without fine-tuning.