PhD student, University of California, San Diego
1 paper at NeurIPS 2025
We provide provable guarantees that regularized policy gradient methods converge to approximate Nash equilibria in imperfect-information extensive-form zero-sum games.