We propose a constrained optimization framework for LLM unlearning that maximizes output entropy on the forget set while constraining utility loss on retained data, yielding stable and effective forgetting across benchmarks.
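The objective can be sketched as a combined loss: an entropy-maximization term that pushes the model's predictions on forget-set inputs toward uniform, plus a penalized term preserving performance on retained data. This is a minimal NumPy illustration under assumed details; the function names, the cross-entropy choice for the retain term, and the penalty weight `lam` are illustrative, not the paper's exact formulation (which uses a constrained, not penalized, formulation).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Shannon entropy per row, in nats.
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def unlearning_objective(forget_logits, retain_logits, retain_labels, lam=1.0):
    """Penalized stand-in for the constrained objective:
    minimize -H(p_forget) + lam * CE(p_retain, labels),
    i.e. maximize entropy on forget data while keeping retain loss low."""
    forget_term = -entropy(softmax(forget_logits)).mean()
    p_retain = softmax(retain_logits)
    rows = np.arange(len(retain_labels))
    retain_term = -np.log(p_retain[rows, retain_labels] + 1e-12).mean()
    return forget_term + lam * retain_term
```

With uniform forget-set logits the entropy term reaches its maximum (log of the vocabulary size), so the loss is lowest exactly when the model has "forgotten", while confident correct predictions on retained data keep the second term near zero.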