Assistant Professor, University of Illinois at Urbana-Champaign
3 papers at NeurIPS 2025
Variational supervised contrastive learning maximizes a posterior-weighted ELBO, replacing pairwise comparisons with class-level interactions and achieving state-of-the-art performance on image classification benchmarks.
We propose ADRPO, a method that dynamically adjusts the strength of divergence regularization based on advantage estimates, enabling more effective fine-tuning of generative models by automatically balancing exploration and exploitation at the sample level.
We propose the Riemannian Consistency Model (RCM), an extension of the original consistency model to non-Euclidean domains.
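The advantage-adaptive regularization idea behind ADRPO can be illustrated with a toy sketch. Everything here (function names, the exponential gating rule, the normalization) is an illustrative assumption, not the paper's actual formulation: samples with high advantage receive a weaker divergence penalty (more freedom to explore), while low-advantage samples are pulled more strongly back toward the reference model.

```python
import numpy as np

def adaptive_kl_weights(advantages, base_beta=0.1, sensitivity=1.0):
    """Per-sample divergence weights that shrink for high-advantage samples
    (encouraging exploration) and grow for low-advantage ones (exploitation
    of the reference model). Illustrative sketch, not the ADRPO algorithm."""
    a = np.asarray(advantages, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-8)   # normalize advantages per batch
    return base_beta * np.exp(-sensitivity * a)  # high advantage -> small weight

def regularized_objective(advantages, log_ratio, kl_per_sample, base_beta=0.1):
    """Policy-gradient surrogate with per-sample divergence weighting:
    mean over the batch of (advantage * log-ratio - weight * KL)."""
    adv = np.asarray(advantages, dtype=float)
    w = adaptive_kl_weights(adv, base_beta)
    return float(np.mean(adv * np.asarray(log_ratio)
                         - w * np.asarray(kl_per_sample)))
```

With this gating, a sample whose advantage is two standard deviations above the batch mean sees its KL weight shrink by a factor of roughly `exp(2)`, while a correspondingly poor sample sees it grow by the same factor; the fixed global coefficient of standard KL-regularized fine-tuning is the special case `sensitivity=0`.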