Postdoc, ETH Zurich
3 papers at NeurIPS 2025
We propose PoLAR, a polar-decomposition-based parameterization for efficient fine-tuning of LLMs. PoLAR mitigates the low stable rank observed in LoRA updates, provably accelerates convergence on a canonical LoRA problem, and improves accuracy on real-world tasks.
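A minimal PyTorch sketch of the idea, not the paper's implementation: the low-rank update is split polar-style into (approximately) orthonormal direction factors X, Y and an unconstrained r×r matrix Theta carrying the magnitudes, so ΔW = X Θ Yᵀ. The class and method names here are hypothetical, and the paper's Riemannian training on Stiefel manifolds is crudely approximated by a QR re-projection after each optimizer step.

```python
import torch
import torch.nn as nn

class PolarStyleAdapter(nn.Module):
    """Illustrative adapter in the spirit of PoLAR (hypothetical API).

    Parameterizes the update as dW = X @ Theta @ Y.T, where X and Y are
    kept (approximately) orthonormal direction factors and Theta is a
    small unconstrained r x r matrix holding the scales.
    """

    def __init__(self, d_out: int, d_in: int, r: int, alpha: float = 1.0):
        super().__init__()
        # Orthonormal initialization of the direction factors via QR.
        self.X = nn.Parameter(torch.linalg.qr(torch.randn(d_out, r)).Q)
        self.Y = nn.Parameter(torch.linalg.qr(torch.randn(d_in, r)).Q)
        # Zero init of the scale matrix => dW = 0 at the start of training.
        self.Theta = nn.Parameter(torch.zeros(r, r))
        self.scaling = alpha / r

    def delta_w(self) -> torch.Tensor:
        return self.scaling * self.X @ self.Theta @ self.Y.T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d_in); returns only the low-rank correction, to be
        # added to the frozen base layer's output.
        return x @ self.delta_w().T

    @torch.no_grad()
    def reproject(self):
        # Cheap stand-in for Riemannian updates: pull X and Y back onto
        # orthonormal frames after a plain optimizer step.
        self.X.copy_(torch.linalg.qr(self.X).Q)
        self.Y.copy_(torch.linalg.qr(self.Y).Q)
```

Keeping the directions orthonormal is what lets Θ absorb all the singular values, which is how a polar-style split avoids the collapse to low stable rank.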
RefLoRA improves on LoRA by dynamically selecting the optimal low-rank refactorization at each step. This yields faster and more stable convergence, as well as superior performance on various NLP tasks, with negligible computational overhead.
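To make the refactoring idea concrete: any invertible R leaves the product unchanged, since B A = (B R)(R⁻¹ A), but it changes the gradient dynamics of the two factors. The sketch below shows one simple instance, an exact SVD-based rebalancing that equalizes the factors' Gram matrices; RefLoRA's actual per-step choice comes from a closed-form minimizer of a loss bound and differs in detail, so treat this as illustrative only.

```python
import torch

@torch.no_grad()
def balanced_refactor(B: torch.Tensor, A: torch.Tensor):
    """Rebalance a low-rank factorization dW = B @ A without changing dW.

    Illustrative, not the RefLoRA closed form: returns factors with
    B_new.T @ B_new == A_new @ A_new.T, equalizing their gradient scales.
    """
    # Reduce to an r x r core: B = Q_B R_B and A.T = Q_A R_A,
    # so dW = Q_B @ (R_B @ R_A.T) @ Q_A.T.
    Q_B, R_B = torch.linalg.qr(B)        # B: (m, r)
    Q_A, R_A = torch.linalg.qr(A.T)      # A: (r, n)
    core = R_B @ R_A.T                   # (r, r)
    # Split the core's singular values evenly between the two factors.
    U, S, Vh = torch.linalg.svd(core)
    S_half = torch.diag(S.sqrt())
    B_new = Q_B @ U @ S_half             # balanced left factor
    A_new = S_half @ Vh @ Q_A.T          # balanced right factor
    return B_new, A_new

# Usage: refactor after each optimizer step; the product is preserved.
m, n, r = 64, 32, 4
B, A = torch.randn(m, r), torch.randn(r, n)
B2, A2 = balanced_refactor(B, A)
assert torch.allclose(B @ A, B2 @ A2, atol=1e-4)
```

Because only r×r matrices are decomposed, the per-step cost is negligible next to the forward and backward passes, which is consistent with the overhead claim above.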