Assistant Professor, University of Virginia
3 papers at NeurIPS 2025
We make Chain-of-Thought reasoning in large language models (1) more efficient, by generating implicit reasoning with lightweight language models, and (2) still effective, since the implicit reasoning maintains semantic alignment with the ground-truth reasoning.
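A minimal sketch of this idea, under assumptions of mine rather than the paper's implementation: a hypothetical lightweight student module produces implicit reasoning vectors, and a cosine loss keeps them semantically aligned with embeddings of the ground-truth rationale steps. All module names, shapes, and the loss below are illustrative.

```python
# Illustrative sketch (not the paper's code): a lightweight "student" encodes the
# question into implicit reasoning states, and a cosine-alignment loss keeps those
# states semantically close to embeddings of the ground-truth rationale steps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitReasoner(nn.Module):
    """Hypothetical lightweight model producing K implicit reasoning vectors."""
    def __init__(self, d_model=256, n_steps=4):
        super().__init__()
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.step_queries = nn.Parameter(torch.randn(n_steps, d_model))

    def forward(self, question_emb):                # (B, T, d) token embeddings
        ctx, _ = self.encoder(question_emb)         # (B, T, d) contextualized
        # Attend each learned step query over the question context.
        attn = torch.softmax(self.step_queries @ ctx.transpose(1, 2), dim=-1)  # (B, K, T)
        return attn @ ctx                           # (B, K, d) implicit reasoning states

def alignment_loss(implicit_states, rationale_emb):
    """Keep implicit steps semantically aligned with gold rationale embeddings."""
    return 1.0 - F.cosine_similarity(implicit_states, rationale_emb, dim=-1).mean()

# Toy usage: batch of 2 questions, 10 tokens each, 4 gold rationale step embeddings.
model = ImplicitReasoner()
q = torch.randn(2, 10, 256)
gold = torch.randn(2, 4, 256)
loss = alignment_loss(model(q), gold)
loss.backward()
```

The efficiency gain in this sketch comes from replacing explicit token-by-token rationale generation with a small fixed number of latent reasoning vectors; the alignment loss is what keeps those latents faithful to the ground-truth reasoning.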
We propose an information-theoretic metric that helps determine the optimal order of demonstrations for in-context learning in large language models.
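As a hedged illustration only (the paper's actual metric is not reproduced here), one could score each candidate demonstration ordering by the model's average predictive entropy on a small probe set and keep the best-scoring permutation. The `predict_proba` hook, the probe set, and treating lower entropy as better are all assumptions of this sketch.

```python
# Illustrative sketch (scoring rule and names are assumptions, not the paper's
# method): rank candidate demonstration orderings by an entropy-based score
# computed from a model's predictive distribution over a small probe set.
import itertools
import math
from typing import Callable, Sequence

def entropy(p: Sequence[float]) -> float:
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def score_ordering(order, demos, probes, predict_proba: Callable) -> float:
    """Average predictive entropy when the model is prompted with demos in `order`."""
    prompt = "\n".join(demos[i] for i in order)
    return sum(entropy(predict_proba(prompt, x)) for x in probes) / len(probes)

def best_ordering(demos, probes, predict_proba):
    """Exhaustive search over permutations (feasible for small demo sets)."""
    return min(itertools.permutations(range(len(demos))),
               key=lambda order: score_ordering(order, demos, probes, predict_proba))

# Toy usage with a stand-in model that returns a fixed label distribution.
demos = ["Q: 2+2? A: 4", "Q: 3+3? A: 6", "Q: 5+1? A: 6"]
probes = ["Q: 4+4?"]
fake_model = lambda prompt, x: [0.7, 0.2, 0.1]
print(best_ordering(demos, probes, fake_model))
```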
We propose a graph topology-oriented prompting framework.
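Purely as an assumed illustration of what topology-oriented prompting might look like (the framework's actual prompt design is not described above), the following serializes a graph's nodes, edges, and degrees into a textual prompt for an LLM.

```python
# Illustrative sketch (wholly an assumption): encode a graph's topology as text
# so a language model can reason over its structure.
def topology_prompt(edges, question):
    """Describe nodes, edges, and degree structure, then pose the question."""
    nodes = sorted({u for e in edges for u in e})
    degree = {n: sum(n in e for e in edges) for n in nodes}
    lines = [
        f"Graph with {len(nodes)} nodes and {len(edges)} edges.",
        "Edges: " + ", ".join(f"{u}-{v}" for u, v in edges) + ".",
        "Degrees: " + ", ".join(f"{n}: {degree[n]}" for n in nodes) + ".",
        f"Question: {question}",
    ]
    return "\n".join(lines)

print(topology_prompt([("A", "B"), ("B", "C"), ("A", "C")],
                      "Is there a triangle in this graph?"))
```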