Poster Session 3 · Thursday, December 4, 2025, 11:00 AM – 2:00 PM
#5416
Non-Asymptotic Guarantees for Average-Reward Q-Learning with Adaptive Stepsizes
Abstract
This work presents the first finite-time analysis of average-reward Q-learning with an asynchronous implementation. A key feature of the algorithm we study is the use of adaptive stepsizes that act as local clocks for each state-action pair. We show that the mean-square error of this Q-learning algorithm, measured in the span seminorm, converges at a rate of .
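To make the local-clock mechanism concrete, the following is a minimal sketch of an asynchronous tabular update in which each state-action pair keeps its own visit count n(s, a) and is updated with stepsize 1/n(s, a). The small random MDP, the uniform behavior policy, and the RVI-style normalization term are illustrative choices, not necessarily the paper's exact algorithm.

```python
# Illustrative sketch (not the paper's exact algorithm): asynchronous tabular
# average-reward Q-learning where each (s, a) pair keeps a "local clock" n[s, a]
# and is updated with the adaptive stepsize 1 / n[s, a].
import numpy as np

rng = np.random.default_rng(0)

# A small random MDP (hypothetical example): |S| = 4 states, |A| = 2 actions.
nS, nA = 4, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over next states
R = rng.uniform(0.0, 1.0, size=(nS, nA))        # expected one-step rewards

def span(x):
    """Span seminorm: max(x) - min(x); invariant to constant shifts."""
    return np.max(x) - np.min(x)

Q = np.zeros((nS, nA))
n = np.zeros((nS, nA))        # local clocks: per-pair visit counts
s = 0

for k in range(100_000):
    a = rng.integers(nA)                       # uniform behavior policy (asynchronous sampling)
    s_next = rng.choice(nS, p=P[s, a])
    r = R[s, a]

    n[s, a] += 1
    alpha = 1.0 / n[s, a]                      # adaptive stepsize driven by the local clock

    # Only the visited pair (s, a) is updated.  Subtracting max(Q) is one common
    # (RVI-style) normalization that keeps the iterates bounded; since the error is
    # measured in the span seminorm, the constant shift it removes is immaterial.
    td_target = r + np.max(Q[s_next]) - np.max(Q)
    Q[s, a] += alpha * (td_target - Q[s, a])

    s = s_next

# Simple diagnostic: span seminorm of the greedy value estimate.
print("span of greedy values:", span(np.max(Q, axis=1)))
```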
To establish this result, we demonstrate that adaptive stepsizes are necessary: without them, the algorithm fails to converge to the correct target. Moreover, adaptive stepsizes can be viewed as a form of implicit importance sampling that counteracts the effect of asynchronous updates.
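A rough way to picture the importance-sampling interpretation is sketched below, with mu(s, a) denoting the behavior policy's stationary state-action frequency and T the average-reward Bellman operator; this is an illustrative heuristic, and the paper's precise argument may differ.

```latex
% Heuristic sketch only (not the paper's exact derivation).
% Let \mu(s,a) be the stationary state-action frequency of the behavior policy,
% D_\mu = \mathrm{diag}(\mu), and T the average-reward Bellman operator.
\begin{align*}
  \text{fixed stepsize } \alpha_k:\quad
    &\mathbb{E}\bigl[Q_{k+1}-Q_k \mid Q_k\bigr] \;\approx\; \alpha_k\, D_\mu\,\bigl(T(Q_k)-Q_k\bigr),\\
  \text{local clocks } \alpha_k(s,a)=\tfrac{1}{n_k(s,a)}\approx\tfrac{1}{k\,\mu(s,a)}:\quad
    &\mathbb{E}\bigl[Q_{k+1}-Q_k \mid Q_k\bigr] \;\approx\; \tfrac{1}{k}\bigl(T(Q_k)-Q_k\bigr).
\end{align*}
% The 1/\mu(s,a) weighting cancels the asynchronous visitation frequencies in D_\mu,
% playing the role of an importance-sampling correction.
```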
Technically, the use of adaptive stepsizes causes each Q-learning update to depend on the full sample history, introducing strong correlations and making the algorithm a non-Markovian stochastic approximation (SA) scheme. Our approach to overcoming this challenge involves:
- a time-inhomogeneous Markovian reformulation of the non-Markovian SA scheme (see the sketch after this list), and
- a combination of almost-sure time-varying bounds, conditioning arguments, and Markov chain concentration inequalities to break the strong correlations between the adaptive stepsizes and the iterates.
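As a rough picture of the first ingredient (an illustrative reading, not necessarily the paper's exact construction): because the stepsize 1/n_k(s, a) depends on the whole sample path, the iterate driven by (S_k, A_k) alone is non-Markovian, but tracking the visit counts as part of the state restores the Markov property.

```latex
% Illustrative only: one way to recover a Markovian description of the recursion.
% Bundle the visit counts into the driving state,
\[
  X_k \;=\; \bigl(S_k,\ A_k,\ \{\,n_k(s,a)\,\}_{(s,a)\in\mathcal{S}\times\mathcal{A}}\bigr),
\]
% so that the pair (Q_k, X_k) evolves as a Markov process whose dynamics change with k,
% i.e. a time-inhomogeneous chain of the kind the analysis can work with.
```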