Assistant Professor, Cornell University
1 paper at NeurIPS 2025
FlashMoE is the first system to fuse the distributed Mixture-of-Experts operator entirely into a single persistent GPU kernel, combining actor-style concurrency with high-throughput tile parallelism to achieve up to 9× higher GPU utilization.
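The highlight above bundles two ideas: a persistent kernel (workers are launched once and loop over work items, rather than being relaunched per operator) and actor-style concurrency (workers keep private state and communicate through message queues instead of global barriers). A minimal host-side sketch in Python — an illustration of the general scheduling pattern, not FlashMoE's actual CUDA implementation — might look like this:

```python
import queue
import threading

def persistent_worker(inbox, results, stop):
    # "Persistent kernel" analogue: the worker is started once and stays
    # alive, pulling tile tasks from its inbox in a loop.
    while not stop.is_set():
        try:
            expert_id, tile = inbox.get(timeout=0.1)
        except queue.Empty:
            continue
        # Actor style: state is private; all communication is via messages.
        # The doubling below is a toy stand-in for an expert's tile compute.
        results.put((expert_id, sum(x * 2 for x in tile)))
        inbox.task_done()

inbox, results, stop = queue.Queue(), queue.Queue(), threading.Event()
workers = [threading.Thread(target=persistent_worker, args=(inbox, results, stop))
           for _ in range(4)]
for w in workers:
    w.start()

# Toy "routed" tiles: (expert_id, tile) messages fed to the worker pool.
for eid, tile in enumerate([[1, 2], [3, 4], [5, 6]]):
    inbox.put((eid, tile))
inbox.join()

stop.set()
for w in workers:
    w.join()

out = sorted(results.get() for _ in range(3))
print(out)  # → [(0, 6), (1, 14), (2, 22)]
```

Compared with launching one kernel per expert, keeping the workers resident avoids repeated launch overhead and lets tiles from different experts overlap, which is the utilization win the summary refers to.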