Researcher, Meta AI
1 paper at NeurIPS 2025
We propose a parameter-efficient fine-tuning (PEFT) architecture that performs a structural mixture of LoRA experts, improving the expressive power of a traditional mixture-of-experts (MoE) layer with negligible parameter and compute overhead.
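The abstract does not spell out the architecture, but the general idea of a mixture of LoRA experts can be sketched as follows: a frozen base linear layer plus several low-rank adapters, mixed per input by a learned router. This is a minimal NumPy illustration of that generic pattern, not the paper's actual design; the router, shapes, and zero-initialized B matrices are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, n_experts = 16, 16, 4, 3

# Frozen base weight (stands in for a pretrained linear layer).
W = rng.normal(size=(d_in, d_out))

# One low-rank (LoRA) pair per expert: delta_i = A_i @ B_i.
A = rng.normal(scale=0.02, size=(n_experts, d_in, rank))
B = np.zeros((n_experts, rank, d_out))  # zero-init: training starts at the base model

# Router: a small linear map producing per-expert mixing weights (an assumption).
W_gate = rng.normal(scale=0.02, size=(d_in, n_experts))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_lora_forward(x):
    """y = x W + sum_i g_i(x) * (x A_i B_i), with g = softmax(x W_gate)."""
    gates = softmax(x @ W_gate)                        # (batch, n_experts)
    base = x @ W                                       # frozen path
    deltas = np.einsum('bd,edr,ero->beo', x, A, B)     # per-expert low-rank updates
    return base + np.einsum('be,beo->bo', gates, deltas)

x = rng.normal(size=(8, d_in))
y = moe_lora_forward(x)
print(y.shape)  # (8, 16)
```

Because the B matrices start at zero, the adapted layer initially reproduces the frozen base output exactly, and only the small A, B, and router parameters are trained, which is the source of the "negligible overhead" claim for LoRA-style PEFT in general.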