PhD student, Massachusetts Institute of Technology
2 papers at NeurIPS 2025
We present findings on how to adapt large language models efficiently without fine-tuning.