PhD student, Peking University
1 paper at NeurIPS 2025
We present a novel approach that addresses the low-rank bottleneck in LoRA by integrating nonlinear mappings with a compressed rank, improving the trade-off between parameter efficiency and model performance.
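The summary above does not specify where the nonlinearity is placed, so the following is only a minimal illustrative sketch under one common assumption: a LoRA-style adapter in which a nonlinearity (here `tanh`) sits between the rank-r down-projection A and the up-projection B, so the learned update B·f(A·x) is no longer constrained to be a rank-r linear map. All names and shapes here are hypothetical, not taken from the described method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2  # hidden size and compressed rank (r << d), chosen for illustration

W0 = rng.standard_normal((d, d))       # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.1  # trainable down-projection (d -> r)
B = np.zeros((d, r))                   # trainable up-projection, zero-initialized

def adapter_forward(x):
    # Base output plus a nonlinear low-rank update: W0 x + B tanh(A x).
    # A plain LoRA adapter would instead add the linear term B (A x).
    return W0 @ x + B @ np.tanh(A @ x)

x = rng.standard_normal(d)
y = adapter_forward(x)

# With B zero-initialized, training starts from the frozen model's behavior,
# matching the usual LoRA initialization convention.
assert np.allclose(y, W0 @ x)
```

The adapter trains only 2·d·r parameters (here 64) instead of the d·d (256) in the full weight, which is the parameter-efficiency side of the trade-off the sentence refers to.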