Undergraduate student, Purdue University
1 paper at NeurIPS 2025
Soft Thinking enables large language models to reason more accurately and efficiently by using probability-weighted concept tokens in a continuous space, rather than committing to discrete tokens at each step.
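The core idea above can be sketched in a few lines: instead of sampling or taking the argmax token and feeding its embedding back in, the model feeds back a probability-weighted mixture of token embeddings (a "concept token"). The snippet below is a minimal illustration with a toy random embedding table, not the paper's actual implementation; all names (`concept_token`, `embedding_table`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: vocabulary of 5 tokens, embedding dimension 4.
vocab_size, d_model = 5, 4
embedding_table = rng.normal(size=(vocab_size, d_model))

def softmax(logits):
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def concept_token(logits, embedding_table):
    """Return a probability-weighted mixture of token embeddings
    (a "concept token"), rather than committing to one discrete token."""
    probs = softmax(logits)            # distribution over the vocabulary
    return probs @ embedding_table     # expected embedding, shape (d_model,)

logits = rng.normal(size=vocab_size)
soft = concept_token(logits, embedding_table)      # continuous concept token
hard = embedding_table[np.argmax(logits)]          # standard discrete decoding
```

The soft token stays in the continuous embedding space and preserves information from all plausible next tokens, whereas the discrete path discards everything except the single most likely token.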