Associate Professor, Stanford University
Two papers at NeurIPS 2025:
We propose an informed corrector for masked discrete diffusion that reduces approximation errors, enabling faster sampling and better sample quality in both synthetic and large-scale settings.
We show that limiting a model's confidence during training can improve test-time scaling in mathematical reasoning.