We propose DCoLT, a method that enhances diffusion language models by treating each reverse diffusion step as a latent "thinking" action and optimizing the entire denoising trajectory with reinforcement learning. DCoLT achieves promising results on several math and code benchmarks with both SEDD and LLaDA.
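The core idea can be sketched as a policy-gradient loop in which every intermediate denoising step is credited with an outcome-level reward on the final answer. The toy policy, vocabulary, and reward below are purely illustrative stand-ins, not the actual DCoLT implementation or a real diffusion LM:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 2   # toy vocabulary: tokens 0 and 1
STEPS = 3   # number of reverse-diffusion ("thinking") steps
theta = np.zeros(VOCAB)  # shared per-step logits (hypothetical stand-in for a diffusion LM)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rollout(theta):
    """Sample one trajectory of latent thinking steps; return the actions taken."""
    actions = []
    for _ in range(STEPS):
        p = softmax(theta)
        actions.append(rng.choice(VOCAB, p=p))
    return actions

def reward(actions):
    # Outcome-level reward: 1 if the final decoded token matches the target (1), else 0.
    return float(actions[-1] == 1)

# REINFORCE: every intermediate step shares credit for the final outcome.
lr = 0.5
baseline = 0.5  # constant baseline to reduce gradient variance
for _ in range(200):
    actions = rollout(theta)
    advantage = reward(actions) - baseline
    for a in actions:
        grad = -softmax(theta)  # score-function gradient of log p(a | theta)
        grad[a] += 1.0
        theta += lr * advantage * grad

print(softmax(theta)[1])  # probability of the rewarded token after training
```

Under this sketch, only the final step's action determines the reward, but the update reinforces the log-probabilities of all intermediate steps, mirroring how trajectory-level RL treats the whole chain of denoising steps as latent reasoning rather than supervising each step directly.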