PhD student, Massachusetts Institute of Technology
1 paper at NeurIPS 2025
We show that in stochastic convex optimization, any algorithm achieving error below the optimal rate attainable under differential privacy must be traceable, and the number of traceable samples matches the statistical sample complexity of the learning problem.