PhD student, Carnegie Mellon University
1 paper at NeurIPS 2025
We show that even exact unlearning, the gold standard for data removal from large language models, can leak sensitive information when an attacker applies guidance between the pre- and post-unlearning checkpoints.
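A minimal sketch of the kind of checkpoint guidance the summary describes. The specific guidance rule (extrapolating next-token logits away from the post-unlearning checkpoint toward the pre-unlearning one) and the `gamma` parameter are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def guided_logits(pre, post, gamma=2.0):
    # Hypothetical guidance rule (assumption, not the paper's method):
    # amplify the direction in which the pre-unlearning checkpoint
    # disagrees with the post-unlearning one.
    # pre, post: next-token logit vectors from the two checkpoints.
    return post + gamma * (pre - post)

# Toy vocabulary of 4 tokens; token 2 stands in for a "forgotten" fact.
pre = np.array([0.1, 0.2, 3.0, 0.5])   # pre-unlearning favors token 2
post = np.array([0.1, 0.2, 0.0, 0.5])  # post-unlearning suppresses it

g = guided_logits(pre, post)
print(int(np.argmax(post)))  # 3 -> the unlearned model no longer picks token 2
print(int(np.argmax(g)))     # 2 -> guidance resurfaces the suppressed token
```

The point of the toy example: the post-unlearning model alone looks clean, but contrasting it with the pre-unlearning checkpoint can recover the suppressed signal.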