PhD student, University of California, Santa Barbara
2 papers at NeurIPS 2025
Soft Thinking enables large language models to reason more accurately and efficiently by operating on probability-weighted mixtures of token embeddings ("concept tokens") in a continuous concept space, rather than committing to a single discrete token at each step.
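
The core idea can be sketched in a few lines: at each reasoning step, instead of sampling one token id and feeding back its embedding, the model feeds back the expectation of the embedding matrix under the output distribution. The snippet below is a minimal illustration, not the paper's implementation; `soft_thinking_step`, `embedding_matrix`, and the toy sizes are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

vocab_size, hidden_dim = 1000, 64                # toy sizes (assumptions)
embedding_matrix = torch.randn(vocab_size, hidden_dim)

def soft_thinking_step(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Return a 'concept token': the probability-weighted mixture of all
    token embeddings, instead of the embedding of one sampled token."""
    probs = F.softmax(logits / temperature, dim=-1)   # (vocab_size,)
    return probs @ embedding_matrix                   # (hidden_dim,)

logits = torch.randn(vocab_size)

# Standard chain-of-thought commits to a single discrete token:
hard_token = embedding_matrix[logits.argmax()]

# Soft Thinking instead feeds back the full continuous mixture:
concept_token = soft_thinking_step(logits)
print(hard_token.shape, concept_token.shape)          # both (64,)
```

Because the concept token preserves probability mass over every candidate token, the model can keep multiple reasoning paths alive in a single forward pass rather than collapsing to one at each step.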