Assistant Professor, University of Washington
1 paper at NeurIPS 2025
Meta-RL with self-supervised predictive coding modules can learn interpretable, task-relevant representations that approximate Bayes-optimal belief states more closely than the latent states of black-box meta-RL models, across diverse partially observable environments.
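A minimal sketch of the kind of architecture this thesis describes, under assumptions not spelled out above: a recurrent meta-RL agent whose GRU hidden state serves as the belief state, regularized by a self-supervised predictive-coding head that predicts the next observation from the current belief and action. The class name `BeliefAgent`, the loss weighting `aux_weight`, and all layer sizes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class BeliefAgent(nn.Module):
    """Recurrent meta-RL agent with an auxiliary predictive-coding loss (illustrative)."""

    def __init__(self, obs_dim, act_dim, hidden_dim=128, aux_weight=0.5):
        super().__init__()
        self.aux_weight = aux_weight  # hypothetical weighting of the auxiliary loss
        # GRU carries the belief state across a partially observable episode.
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.policy = nn.Linear(hidden_dim, act_dim)  # actor logits
        self.value = nn.Linear(hidden_dim, 1)         # critic
        # Predictive-coding module: predict o_{t+1} from (belief_t, a_t).
        self.predictor = nn.Sequential(
            nn.Linear(hidden_dim + act_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs, prev_act, h=None):
        # obs: (B, T, obs_dim); prev_act: (B, T, act_dim) one-hot of the previous action.
        belief, h = self.rnn(torch.cat([obs, prev_act], dim=-1), h)
        return self.policy(belief), self.value(belief), belief, h

    def aux_loss(self, belief, act, next_obs):
        # Self-supervised predictive-coding objective: one-step observation
        # prediction error, which pressures the belief state to retain
        # task-relevant information (a proxy for Bayes-optimal filtering).
        pred = self.predictor(torch.cat([belief, act], dim=-1))
        return self.aux_weight * nn.functional.mse_loss(pred, next_obs)

# Illustrative usage with dummy shapes:
agent = BeliefAgent(obs_dim=8, act_dim=4)
obs = torch.randn(2, 10, 8)
prev_act = torch.zeros(2, 10, 4)
logits, value, belief, _ = agent(obs, prev_act)
# Align belief_t and action a_t (stored as prev_act at t+1) with observation o_{t+1}.
loss_aux = agent.aux_loss(belief[:, :-1], prev_act[:, 1:], obs[:, 1:])
```

In this sketch the auxiliary loss would simply be added to the usual actor-critic objective, so the predictive-coding module shapes the belief representation through shared gradients without changing the policy architecture; how the paper actually combines the objectives is not stated here.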