Researcher, Meta Fundamental AI Research (FAIR)
2 papers at NeurIPS 2025
We propose an adaptive multi-token unmasking sampler for masked language diffusion models, achieving 2-3x speedups on code generation and math reasoning benchmarks without loss of accuracy.
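The core idea behind adaptive multi-token unmasking can be sketched as a confidence-thresholded denoising step: at each step, every masked position whose top-1 probability clears a threshold is revealed in parallel, instead of revealing one token at a time. This is an illustrative sketch only, not the paper's sampler; the `MASK` sentinel, function name, and threshold value are assumptions.

```python
import numpy as np

MASK = -1  # illustrative sentinel id for a still-masked position

def adaptive_unmask_step(tokens, probs, threshold=0.9):
    """One denoising step for a masked diffusion LM (sketch).

    tokens: 1-D int array, MASK marks unrevealed positions.
    probs:  (seq_len, vocab) array of model token probabilities.
    Unmasks every masked position whose top-1 confidence >= threshold;
    if none qualifies, unmasks the single most confident position so
    the sampler always makes progress.
    """
    tokens = tokens.copy()
    masked = np.where(tokens == MASK)[0]
    if masked.size == 0:
        return tokens                      # nothing left to reveal
    conf = probs[masked].max(axis=1)       # top-1 confidence per masked slot
    pick = masked[conf >= threshold]
    if pick.size == 0:                     # fallback: reveal the best one
        pick = masked[[conf.argmax()]]
    tokens[pick] = probs[pick].argmax(axis=1)
    return tokens
```

Because the number of revealed tokens per step adapts to model confidence, easy spans (e.g. boilerplate code) are filled in a few steps while uncertain spans keep iterating, which is where the speedup comes from.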
A discrete flow model with native variable-length generation, using edit operations and relying only on relative token positions.
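The edit-operation mechanism can be illustrated with a toy interpreter: keep/delete consume a source token while insert adds one without consuming, so the output length is decoupled from the input length and positions only matter relative to the edit cursor. This is a minimal sketch under assumed op names, not the model's actual parameterization.

```python
def apply_edits(seq, ops):
    """Apply a left-to-right edit program to seq (sketch).

    Each op is ("keep",), ("delete",) or ("insert", token).
    keep/delete advance the source cursor by one; insert emits a new
    token in place, so output length can grow or shrink freely --
    variable-length generation without fixed-length unmasking.
    """
    out, i = [], 0
    for op in ops:
        if op[0] == "keep":
            out.append(seq[i]); i += 1
        elif op[0] == "delete":
            i += 1
        elif op[0] == "insert":
            out.append(op[1])
    out.extend(seq[i:])  # implicit keep for trailing source tokens
    return out
```

Since every op is defined relative to the current cursor rather than an absolute index, the same edit program remains valid as the sequence changes length, which is the point of relying only on relative positioning.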