We propose an attention-shifting-based unlearning method for large language models that enables precise removal of targeted knowledge while preserving model utility and mitigating hallucinations.