Associate Professor, University of California, Los Angeles
4 papers at NeurIPS 2025
We provide a computationally efficient algorithm that achieves $O(H)$ deployment cost with polynomial sample complexity.
We introduce a new method for selecting subspaces in low-rank optimization for memory-efficient pretraining of large language models (LLMs).