We propose a distillation framework for training language-model anonymizers that achieve effective anonymization through iterative self-refinement.