Assistant Professor, University of Illinois Urbana-Champaign
6 papers at NeurIPS 2025
We identify and address an overlooked challenge in the practical application of data attribution: hyperparameter tuning is difficult because the standard evaluation metrics are costly to compute.
We present MergeBench, a comprehensive evaluation suite designed to assess model merging at scale.
We cast machine unlearning as constrained optimization (minimize the unlearning objective subject to a bounded utility loss) and propose an implicit gradient-surgery method that recovers the constrained solution with a single backpropagation pass, enabling efficient, utility-preserving unlearning.
We propose the first data attribution framework for online reinforcement learning.
We propose a scalable gradient compression algorithm for data attribution with sub-linear complexity that achieves competitive attribution accuracy.
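As background for why compressed gradients can still support attribution, a common baseline is random projection (Johnson-Lindenstrauss), which approximately preserves the gradient inner products that attribution scores depend on. The sketch below is that baseline, not the paper's algorithm; the function name, dimensions, and seed are assumptions for the example.

```python
import numpy as np

def compress_gradients(grads, k, seed=0):
    """Compress d-dimensional per-example gradients to k << d dimensions
    with a shared Gaussian random projection (JL lemma), approximately
    preserving norms and inner products."""
    rng = np.random.default_rng(seed)  # shared seed => shared projection
    d = grads.shape[1]
    P = rng.standard_normal((d, k)) / np.sqrt(k)
    return grads @ P

# Attribution-style scores (train-test gradient dot products) computed
# entirely in the compressed space.
train_grads = np.random.default_rng(1).standard_normal((100, 10_000))
test_grad = np.random.default_rng(2).standard_normal((1, 10_000))
scores = compress_gradients(train_grads, 512) @ compress_gradients(test_grad, 512).T
```

Storing 512 floats per example instead of 10,000 is what makes large-scale attribution tractable; the compressed method in the blurb targets the same trade-off with sub-linear complexity.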