Poster Session 5 · Friday, December 5, 2025, 11:00 AM–2:00 PM
#406

Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning


Abstract

Motivated by real-world settings where data collection and policy deployment—whether for a single agent or across multiple agents—are costly, we study the problem of on-policy single-agent reinforcement learning (RL) and federated RL (FRL) with a focus on minimizing burn-in costs (the sample sizes needed to reach near-optimal regret) and policy switching or communication costs.
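For concreteness, a standard formalization of these quantities (our convention here; the paper's exact definitions may differ in minor details) is

$$\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \left( V_1^{\star}(s_1^{k}) - V_1^{\pi^{k}}(s_1^{k}) \right),$$

where $K$ is the number of episodes, $\pi^{k}$ is the policy deployed in episode $k$, $s_1^{k}$ is its initial state, and $V_1^{\star}$ is the optimal value function. The burn-in cost is then the smallest sample size beyond which the algorithm's regret bound attains the minimax-optimal rate up to logarithmic factors.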
In parallel finite-horizon episodic Markov Decision Processes (MDPs) with $S$ states and $A$ actions, existing methods either require burn-in costs that scale superlinearly in $S$ and $A$ or fail to achieve logarithmic switching or communication costs.
We propose two novel model-free RL algorithms—Q-EarlySettled-LowCost and FedQ-EarlySettled-LowCost—that are the first in the literature to simultaneously achieve:
  1. the best near-optimal regret among all known model-free RL or FRL algorithms,
  2. low burn-in cost that scales linearly with $S$ and $A$, and
  3. logarithmic policy switching cost for single-agent RL or communication cost for FRL.
Additionally, we establish gap-dependent theoretical guarantees for both regret and switching/communication costs, improving or matching the best-known gap-dependent bounds.
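To give a feel for how logarithmic switching cost typically arises, below is a minimal sketch of tabular optimistic Q-learning with a doubling-based policy-switch trigger: the deployed policy is re-derived from the Q-table only when the visit count of the executed (step, state, action) triple doubles. This is an illustrative sketch of the standard low-switching device, not the paper's Q-EarlySettled-LowCost algorithm; the environment interface, learning-rate schedule, and bonus constant are assumptions made for the demo.

```python
import numpy as np

class RandomMDP:
    """Tiny random tabular MDP used only to exercise the sketch (an assumption,
    not part of the paper)."""
    def __init__(self, S, A, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.dirichlet(np.ones(S), size=(S, A))  # transition kernel, shape (S, A, S)
        self.R = rng.random((S, A))                      # rewards in [0, 1]
        self.rng = rng

    def reset(self):
        return 0  # fixed initial state

    def step(self, h, s, a):
        s_next = self.rng.choice(len(self.P[s, a]), p=self.P[s, a])
        return self.R[s, a], s_next

def run_low_switching_q_learning(env, S, A, H, K):
    """Optimistic Q-learning over K episodes of horizon H, redeploying the
    greedy policy only on a doubling trigger."""
    Q = np.full((H, S, A), float(H))      # optimistic initialization
    N = np.zeros((H, S, A), dtype=int)    # visit counts
    last_switch_N = np.zeros_like(N)      # counts recorded at the last switch
    policy = np.zeros((H, S), dtype=int)  # currently deployed policy
    switches = 0

    for k in range(K):
        s = env.reset()
        for h in range(H):
            a = policy[h, s]
            r, s_next = env.step(h, s, a)
            N[h, s, a] += 1
            t = N[h, s, a]
            lr = (H + 1) / (H + t)                 # standard H-dependent step size
            bonus = np.sqrt(H**3 * np.log(K) / t)  # UCB-style bonus (constant is illustrative)
            v_next = Q[h + 1, s_next].max() if h + 1 < H else 0.0
            Q[h, s, a] = min((1 - lr) * Q[h, s, a] + lr * (r + v_next + bonus), H)

            # Doubling trigger: redeploy the greedy policy only when the visit
            # count of the executed triple has doubled since the last switch.
            # Each (h, s, a) can fire O(log K) times, so the total number of
            # redeployments is O(H * S * A * log K).
            if N[h, s, a] >= 2 * max(last_switch_N[h, s, a], 1):
                policy = Q.argmax(axis=2)
                last_switch_N = N.copy()
                switches += 1
            s = s_next

    return policy, switches

if __name__ == "__main__":
    S, A, H, K = 5, 3, 4, 2000
    policy, switches = run_low_switching_q_learning(RandomMDP(S, A), S, A, H, K)
    print(f"policy switches: {switches}")  # grows roughly logarithmically in K
```

Since each of the $H \cdot S \cdot A$ counters can double at most $O(\log K)$ times, the trigger fires $O(HSA \log K)$ times in total, which is the flavor of the logarithmic switching-cost guarantee stated above; in the federated setting the same doubling idea is what typically caps the number of communication rounds.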