PhD student, Shanghai Jiao Tong University
One paper at NeurIPS 2025
We propose a progressive consistency distillation framework that improves the efficiency of multimodal large language models (MLLMs), substantially reducing computational cost while preserving strong performance.