Poster Session 4 · Thursday, December 4, 2025 4:30 PM → 7:30 PM
#214

Asymmetric REINFORCE for off-Policy Reinforcement Learning: Balancing positive and negative rewards

NeurIPS OpenReview

Abstract

Reinforcement learning (RL) is increasingly used to align large language models (LLMs). Off-policy methods offer greater implementation simplicity and data efficiency than on-policy techniques, but often result in suboptimal performance.
In this work, we study the intermediate range of algorithms between off-policy RL and supervised fine-tuning by analyzing a simple off-policy REINFORCE algorithm, where the advantage is defined as A = r − V, with r a reward and V some tunable baseline. Intuitively, lowering V emphasizes high-reward samples, while raising it penalizes low-reward ones more heavily.
We first provide a theoretical analysis of this off-policy REINFORCE algorithm, showing that when the baseline lower-bounds the expected reward, the algorithm enjoys a policy improvement guarantee. Our analysis reveals that while on-policy updates can safely leverage both positive and negative signals, off-policy updates benefit from focusing more on positive rewards than on negative ones.
We validate our findings experimentally in a controlled stochastic bandit setting and through fine-tuning state-of-the-art LLMs on reasoning tasks.
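The off-policy REINFORCE update described above can be illustrated in a controlled stochastic bandit setting. The following is a minimal sketch, not the paper's implementation: the reward probabilities, the uniform behavior policy, the learning rate, and the choice V = 0 (a lower bound on the expected reward, as the analysis requires) are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 3-armed stochastic bandit with Bernoulli rewards.
rng = np.random.default_rng(0)
p_reward = np.array([0.2, 0.5, 0.8])   # arm 2 is the best arm
K = len(p_reward)

logits = np.zeros(K)                   # softmax policy being trained
behavior = np.full(K, 1.0 / K)         # fixed uniform behavior policy (off-policy data)
V = 0.0                                # tunable baseline; V <= E[r] here
lr = 0.1

for _ in range(5000):
    a = rng.choice(K, p=behavior)              # action drawn from the behavior policy
    r = float(rng.random() < p_reward[a])      # Bernoulli reward
    pi = softmax(logits)
    # Gradient of log pi(a) for a softmax policy: one-hot(a) - pi
    grad_logp = -pi
    grad_logp[a] += 1.0
    logits += lr * (r - V) * grad_logp         # REINFORCE step with advantage r - V

print(softmax(logits))  # the largest probability mass should land on arm 2
```

With V = 0, only positive rewards drive the update, consistent with the claim that off-policy updates benefit from emphasizing positive signals; raising V toward (or above) the expected reward would penalize low-reward arms more heavily and, per the analysis, can break the policy improvement guarantee.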