PhD student, Korea Advanced Institute of Science & Technology
1 paper at NeurIPS 2025
We propose a distillation framework for training language model anonymizers that achieve effective anonymization through iterative self-refinement.