Postdoc, University of Illinois at Chicago
2 papers at NeurIPS 2025
We show that in stochastic convex optimization, any algorithm that achieves error below the best error attainable under differential privacy is traceable; moreover, the number of samples that can be traced matches the statistical sample complexity of learning.
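Read schematically, with the notation below assumed for illustration rather than taken from the paper: if $\mathrm{err}(\mathcal{A}; n)$ denotes the error of algorithm $\mathcal{A}$ on $n$ samples and $\mathrm{err}_{\mathrm{DP}}(n)$ the optimal error achievable under differential privacy, the result says

\[
  \mathrm{err}(\mathcal{A}; n) \;<\; \mathrm{err}_{\mathrm{DP}}(n)
  \quad\Longrightarrow\quad
  \mathcal{A} \text{ traces } \Omega(m^{*}) \text{ of its input samples},
\]

where $m^{*}$ is on the order of the statistical sample complexity of the learning problem.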
We study tradeoffs between the number of mistakes and the number of oracle calls in online and transductive online learning. We prove tight mistake bounds in the online setting, and show that the transductive setting admits improved bounds, with further gains from randomization.
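In the same schematic spirit, again with assumed notation: writing $M_{\mathrm{on}}(k)$, $M_{\mathrm{td}}(k)$, and $M_{\mathrm{td\text{-}rand}}(k)$ for the optimal mistake bounds achievable with $k$ oracle calls in the online, transductive, and randomized transductive settings, the results order these quantities as

\[
  M_{\mathrm{td\text{-}rand}}(k) \;\le\; M_{\mathrm{td}}(k) \;\le\; M_{\mathrm{on}}(k),
\]

with $M_{\mathrm{on}}(k)$ pinned down tightly and both gaps realizable.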