2 papers across 2 sessions
We propose a constrained optimization framework for LLM unlearning that maximizes entropy on the forget data while preserving model utility, achieving stable and effective results across benchmarks.
AIM introduces a new scheme that improves model-merging performance in LLMs by combining continual-learning principles with activation-space-based model compression.