Associate Professor, Purdue University
3 papers at NeurIPS 2025
We introduce the first practical benchmark for federated learning (FL) with differential privacy (DP) in automatic speech recognition (ASR), combining theoretical insights on gradient heterogeneity with empirical results that demonstrate scalability and strong performance under user-level privacy guarantees.
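User-level DP in federated learning is commonly enforced by clipping each user's model update and adding calibrated Gaussian noise to the average (the DP-FedAvg recipe). A minimal sketch of one such aggregation round, with all names and parameters (`clip_norm`, `noise_mult`) being illustrative assumptions rather than details from the paper:

```python
import numpy as np

def dp_fedavg_round(user_updates, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One round of user-level DP aggregation, DP-FedAvg style:
    clip each user's update in L2 norm, average, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        # Scale the update down only if it exceeds the clipping threshold.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Noise scale shrinks with the number of participating users.
    sigma = noise_mult * clip_norm / len(user_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

The clipping bound caps each user's influence on the aggregate, which is what makes the guarantee user-level rather than example-level.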
A reinforcement-learning post-training framework teaches LLM assistants to reason about contextual integrity, slashing inappropriate information disclosure while helping users complete their tasks.
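One way to picture such an RL objective is a reward that trades off task success against out-of-context disclosures. The helper below is purely hypothetical (the paper's actual reward design is not specified here); `lam` is an assumed penalty weight:

```python
def ci_reward(task_completed, disclosed_fields, allowed_fields, lam=1.0):
    """Hypothetical contextual-integrity reward: +1 for completing the task,
    minus a penalty for each field disclosed outside the allowed set."""
    violations = [f for f in disclosed_fields if f not in allowed_fields]
    return (1.0 if task_completed else 0.0) - lam * len(violations)
```

With `lam` large, the policy learns to withhold out-of-context fields even at some cost to task completion.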
To adapt ML models to concept drift under strict resource constraints, we propose a lightweight drift-plus-penalty policy that provably limits resource usage while remaining robust to drift.
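Lyapunov drift-plus-penalty control maintains a virtual queue that tracks cumulative budget overuse and acts only when the weighted benefit outweighs the queue-weighted cost. A minimal sketch of such a retraining scheduler, with all names (`budget`, `V`, the cost/gain inputs) being illustrative assumptions, not the paper's method:

```python
def drift_plus_penalty_schedule(costs, gains, budget, V=1.0):
    """Decide at each step whether to retrain: do so iff the V-weighted
    expected accuracy gain exceeds the queue-weighted resource cost.
    Returns the per-step decisions and the final virtual-queue value."""
    Q = 0.0  # virtual queue: cumulative resource usage above the budget
    decisions = []
    for cost, gain in zip(costs, gains):
        retrain = V * gain >= Q * cost
        decisions.append(retrain)
        used = cost if retrain else 0.0
        # Queue grows when usage exceeds the per-step budget, never below 0.
        Q = max(Q + used - budget, 0.0)
    return decisions, Q
```

Keeping `Q` stable (bounded) is what yields the provable limit on long-run average resource usage; `V` tunes how aggressively the policy chases accuracy at the expense of a larger queue.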