2 papers across 2 sessions
Transformer, Mamba, and RWKV language models show consistent patterns of behavioral change over the course of training.
This paper presents CoUn, a contrastive learning (CL)-based machine unlearning (MU) framework that uses only retain data. Furthermore, the proposed CL module can be integrated into existing baselines to improve their performance.