Assistant Professor, Columbia University
4 papers at NeurIPS 2025
We isolate “reasoning embeddings” from LLMs using a residual method and show they uniquely predict brain activity, revealing distinct neural correlates of reasoning beyond shallow linguistic features.
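The residual method described above can be sketched minimally: regress each embedding dimension on shallow linguistic features and keep the residuals as the candidate "reasoning" component. All variable names and the synthetic data here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: LLM embeddings (n_tokens x d) and shallow
# linguistic features (e.g. word length, frequency) for the same tokens.
n, d, k = 200, 32, 5
shallow = rng.normal(size=(n, k))
embeddings = shallow @ rng.normal(size=(k, d)) + 0.5 * rng.normal(size=(n, d))

# Residual method: least-squares regression of embeddings on the shallow
# features; the residuals are the component the features cannot explain.
beta, *_ = np.linalg.lstsq(shallow, embeddings, rcond=None)
reasoning = embeddings - shallow @ beta

# By construction the residuals are orthogonal to the shallow features,
# so any brain activity they predict is not attributable to those features.
```

In a full analysis, `reasoning` (rather than the raw embeddings) would then be entered into an encoding model of brain activity.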
We fit scaling laws for large language models with varying width-to-depth ratios and parameter counts.
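Scaling-law fitting of this kind typically amounts to fitting a saturating power law of loss against parameter count. A minimal sketch on synthetic data, with all constants and the functional form chosen for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, a, alpha, c):
    # Loss falls as a * N^-alpha toward an irreducible floor c.
    return a * n_params ** -alpha + c

# Synthetic (N, loss) points standing in for measured training runs.
n = np.logspace(7, 10, 20)          # 10M to 10B parameters
loss = scaling_law(n, 2.0e3, 0.5, 1.7)

# Recover the exponent and floor by nonlinear least squares.
popt, _ = curve_fit(scaling_law, n, loss, p0=[1.0e3, 0.4, 1.5])
a_hat, alpha_hat, c_hat = popt
```

With noisy real measurements one would fit in log space or weight the runs, but the structure of the fit is the same.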
We jointly evaluate VLMs and diffusion models by testing whether VLMs can detect and describe diffusion model failure modes.