PhD student, Johannes Kepler Universität Linz
1 paper at NeurIPS 2025
We propose EVA, a parameter-efficient fine-tuning method that initializes LoRA weights in a variance-optimal manner and adaptively allocates ranks to provably maximize the expected gradient signal at the start of fine-tuning.
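A minimal sketch of what a variance-informed LoRA initialization could look like, assuming the down-projection is seeded with the top right-singular vectors of a minibatch of layer input activations (the directions explaining the most input variance). The function name, shapes, and the returned explained-variance ratio are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def variance_informed_lora_init(activations, d_out, rank):
    """Sketch: build LoRA factors (B, A) from a minibatch of layer inputs.

    activations: (num_tokens, d_in) array of inputs to the adapted layer.
    Returns B (d_out, rank) initialized to zero and A (rank, d_in) spanning
    the top activation-variance directions, so B @ A = 0 at initialization
    (the usual LoRA convention) while A already points along high-variance
    input directions.
    """
    # Center the activations; the right-singular vectors of the centered
    # matrix are the principal directions of the input distribution.
    X = activations - activations.mean(axis=0, keepdims=True)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:rank]                       # (rank, d_in), orthonormal rows
    B = np.zeros((d_out, rank))         # zero so the adapter starts as identity
    # Fraction of total input variance captured per kept component; a
    # rank-allocation scheme could give more rank to layers where this
    # mass is spread across many directions.
    explained = (s[:rank] ** 2) / (s ** 2).sum()
    return B, A, explained

rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 64))       # 256 tokens, 64-dim layer input
B, A, ev = variance_informed_lora_init(acts, d_out=32, rank=8)
print(A.shape, B.shape)                 # (8, 64) (32, 8)
```

Because B is zero, the adapted layer's output is unchanged at step 0; the variance-aligned A only shapes which directions receive gradient first.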