Undergraduate student, University of Illinois at Urbana-Champaign
1 paper at NeurIPS 2025
We propose ADRPO, a method that dynamically adjusts the strength of divergence regularization according to per-sample advantage estimates, automatically balancing exploration and exploitation and enabling more effective fine-tuning of generative models.
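The abstract does not specify the exact update rule, but the core idea of advantage-dependent regularization can be sketched as follows. This is an illustrative assumption, not the authors' actual ADRPO algorithm: `adaptive_kl_weight`, the exponential weighting, and the `sensitivity` parameter are all hypothetical choices, and the KL term is approximated by the log-probability gap on sampled actions.

```python
import numpy as np

def adaptive_kl_weight(advantages, base_beta=0.1, sensitivity=1.0):
    """Hypothetical per-sample regularization weight.

    Shrinks the divergence penalty for high-advantage samples (allowing
    exploration away from the reference model) and grows it for
    low-advantage samples (exploiting, staying close to the reference).
    """
    adv = (advantages - advantages.mean()) / (advantages.std() + 1e-8)
    return base_beta * np.exp(-sensitivity * adv)

def adrpo_style_loss(logp, ref_logp, advantages, base_beta=0.1):
    """Policy-gradient objective minus a per-sample weighted divergence term.

    KL(policy || reference) is crudely estimated by logp - ref_logp on the
    sampled actions; a real implementation would use a proper estimator.
    """
    beta = adaptive_kl_weight(advantages, base_beta)
    kl_est = logp - ref_logp
    return -(advantages * logp - beta * kl_est).mean()
```

The design choice being illustrated: instead of a single global regularization coefficient, each sample receives its own weight, so samples the advantage estimator considers promising are regularized less and can move further from the reference policy.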