Postdoc, Massachusetts Institute of Technology
4 papers at NeurIPS 2025
We rethink LLMs from the perspective of recommender systems and propose Language System, which makes more efficient and effective use of the model’s output distribution.
We propose Hierarchical Diffusion Language Models, a discrete diffusion framework with a general, time-varying next-semantic-scale prediction process for language modeling.
We interpret oversmoothing through the lens of signed graphs and propose a plug-and-play method to mitigate it.