We propose a progressive consistency distillation framework that improves the efficiency of multimodal large language models (MLLMs), significantly reducing computational cost while preserving strong performance.