Assistant Professor, Massachusetts Institute of Technology
3 papers at NeurIPS 2025
We provide a method for reliable uncertainty quantification of spatial associations under model misspecification and nonrandom spatial locations.
We propose Hierarchical Diffusion Language Models, a discrete diffusion framework with a general time-varying next-semantic-scale prediction process for language modeling.
We introduce a systematic approach for flexibly incorporating representation guidance into diffusion models, yielding both faster training and better performance across image, protein, and molecule generation tasks.