PhD student, McGill University
2 papers at NeurIPS 2025
Under input uncertainty, transformer models systematically shift toward input-agnostic conceptual representations, which increases the likelihood of hallucination.
VLMs often recall facts less reliably than their LLM backbones because visual representations form too late in the forward pass to trigger the LLM's factual-recall circuit.