PhD student, Rice University
1 paper at NeurIPS 2025
We introduce a new method for subspace selection in low-rank optimization, enabling memory-efficient pretraining of large language models (LLMs).