Adjunct Professor, McGill University
2 papers at NeurIPS 2025
We highlight the susceptibility of existing unlearning methods to relearning attacks and analyze the characteristics of robust methods from a weight-space perspective.
We show that in stochastic convex optimization, any algorithm achieving error smaller than the best possible under differential privacy is traceable, with the number of traceable samples matching the statistical sample complexity of learning.