PhD student, University of Illinois at Urbana-Champaign
1 paper at NeurIPS 2025
We provide approximation bounds for an approach that solves a Partially Observable Reinforcement Learning (PORL) problem by approximating the corresponding Partially Observable Markov Decision Process (POMDP) as a finite-state Markov Decision Process (MDP).
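As a rough illustration of this class of approach (not the paper's specific construction), one common way to reduce a POMDP to a finite-state MDP is to track a Bayesian belief over the hidden state and then discretize that belief onto a finite grid. The sketch below assumes a tiny hypothetical two-state, one-action POMDP with made-up transition and observation matrices `T` and `O`:

```python
import numpy as np

# Hypothetical 2-state POMDP with one action (illustrative numbers only).
# T[a][s, s'] : transition probabilities; O[a][s', o] : observation probabilities.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_next = O[a][:, o] * (T[a].T @ b)
    return b_next / b_next.sum()

def discretize(b, bins=10):
    """Map a belief vector to a grid cell; the finite set of cells
    serves as the state space of the approximating finite-state MDP."""
    return tuple(np.minimum((b * bins).astype(int), bins - 1))

b = np.array([0.5, 0.5])          # uniform initial belief
b = belief_update(b, a=0, o=1)    # update after taking action 0, observing 1
state = discretize(b)             # finite MDP state for this belief
```

The approximation error of such a reduction depends on how finely the belief simplex is discretized, which is the kind of quantity the bounds in question would control.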