FlashMoE is the first work to fully fuse the Distributed Mixture-of-Experts (MoE) operator into a single persistent GPU kernel, combining actor-style concurrency with high-throughput tile parallelism to achieve up to 9× higher GPU utilization.
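
To make the core idea concrete, here is a minimal sketch of the persistent-kernel pattern the sentence refers to: thread blocks stay resident for the lifetime of the workload and, like actors consuming messages, pull tile-sized work items from a shared queue instead of being relaunched per operator. This is not FlashMoE's actual implementation; every identifier (`Task`, `g_head`, `TILE`, the fake expert work) is hypothetical, and the real system additionally handles routing, expert GEMMs, and inter-GPU communication inside the same kernel.

```cuda
// Sketch of a persistent GPU kernel with an actor-style work queue.
// All names and sizes are illustrative assumptions, not FlashMoE's API.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int TILE      = 128;   // hypothetical tile width (threads per block)
constexpr int NUM_TASKS = 1024;  // hypothetical number of tiles to process

struct Task {          // a "message" that an actor (thread block) consumes
    int expert_id;     // which expert this tile belongs to
    int tile_row;      // tile coordinate within that expert
};

__device__ int g_head = 0;  // head of the global work queue

__global__ void persistent_moe_kernel(const Task* tasks, float* out, int n_tasks) {
    __shared__ int s_task;  // task index fetched by thread 0, shared with the block
    while (true) {
        if (threadIdx.x == 0)
            s_task = atomicAdd(&g_head, 1);  // actor grabs the next message
        __syncthreads();
        int t = s_task;
        if (t >= n_tasks) return;            // queue drained: the actor retires

        Task task = tasks[t];
        // Tile-parallel "work": a stand-in for an expert GEMM tile.
        int idx  = t * TILE + threadIdx.x;
        out[idx] = static_cast<float>(task.expert_id + task.tile_row);
        __syncthreads();  // ensure all threads are done before s_task is reused
    }
}

int main() {
    std::vector<Task> h_tasks(NUM_TASKS);
    for (int i = 0; i < NUM_TASKS; ++i)
        h_tasks[i] = {i % 8, i / 8};  // fake routing across 8 experts

    Task*  d_tasks;
    float* d_out;
    cudaMalloc(&d_tasks, NUM_TASKS * sizeof(Task));
    cudaMalloc(&d_out,   NUM_TASKS * TILE * sizeof(float));
    cudaMemcpy(d_tasks, h_tasks.data(), NUM_TASKS * sizeof(Task),
               cudaMemcpyHostToDevice);

    // One launch for the whole workload: the grid is sized to the machine,
    // not to the problem -- the essence of a persistent kernel.
    persistent_moe_kernel<<<32, TILE>>>(d_tasks, d_out, NUM_TASKS);
    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_tasks);
    cudaFree(d_out);
    return 0;
}
```

The payoff of this pattern, and plausibly part of the utilization gain the paper reports, is that no kernel-launch or synchronization boundary separates the MoE sub-operators: an idle block immediately picks up whatever tile is ready next.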