Undergraduate student, University of Electronic Science and Technology of China
3 papers at NeurIPS 2025
Soft Thinking enables large language models to reason more accurately and efficiently by using probability-weighted concept tokens in a continuous concept space, rather than committing to a single discrete token at each reasoning step.
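A minimal sketch of the core mechanism (function name and tensor shapes are illustrative assumptions, not the paper's code): instead of sampling one token and feeding its embedding back into the model, feed back the expectation of the embedding under the next-token distribution.

```python
import torch
import torch.nn.functional as F

def soft_concept_token(logits: torch.Tensor,
                       embedding_matrix: torch.Tensor) -> torch.Tensor:
    """Blend all token embeddings by their next-token probabilities.

    logits:            (vocab_size,)  next-token logits from the LM head
    embedding_matrix:  (vocab_size, hidden_dim)  input embedding table
    returns:           (hidden_dim,)  probability-weighted "concept token"
    """
    probs = F.softmax(logits, dim=-1)   # distribution over the vocabulary
    return probs @ embedding_matrix     # expected embedding, fed back as the next input
```

Because the expected embedding retains information from every plausible next token rather than a single sampled one, this is where the accuracy and efficiency gains are claimed to come from.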
A single training example is enough for RLVR (reinforcement learning with verifiable rewards) to significantly improve LLM performance on math tasks.
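For context, the "verifiable" in RLVR means the reward comes from a programmatic correctness check rather than a learned reward model. A minimal sketch of such a check for math answers, assuming the common `\boxed{...}` answer convention (not necessarily the paper's exact extraction pipeline):

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 if the model's final boxed answer matches the label.

    Assumes answers are wrapped as \\boxed{...}, a common math-benchmark
    convention; the paper's exact extraction rules may differ.
    """
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0
```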