6 papers across 3 sessions
We study greedy task orderings in continual learning that maximize dissimilarity between consecutive tasks, and compare their performance to random orderings both analytically and empirically.
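The greedy ordering studied here can be sketched minimally: repeatedly pick the remaining task that is most dissimilar to the one just placed. This is an illustrative sketch only; the function names and the pairwise-dissimilarity interface are assumptions, not the paper's actual API or criterion.

```python
def greedy_max_dissimilar_order(tasks, dissimilarity):
    """Greedily order tasks so each next task maximizes
    dissimilarity to the previously chosen task.

    tasks: list of task identifiers.
    dissimilarity: hypothetical callable (a, b) -> float.
    """
    remaining = list(tasks)
    order = [remaining.pop(0)]  # seed with the first task (arbitrary choice)
    while remaining:
        prev = order[-1]
        # pick the remaining task farthest from the previous one
        nxt = max(remaining, key=lambda t: dissimilarity(prev, t))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# toy usage: tasks as 1-D points, dissimilarity = absolute distance
points = {0: 0.0, 1: 1.0, 2: 10.0}
order = greedy_max_dissimilar_order(
    [0, 1, 2], lambda a, b: abs(points[a] - points[b])
)
# order == [0, 2, 1]: from task 0 the farthest task is 2, then 1 remains
```

A random ordering, by contrast, would just shuffle `tasks`; the paper compares the two regimes analytically and empirically.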
We propose DEAL, a continual low-rank fine-tuning framework that enables efficient and privacy-preserving adaptation of large language models.
We prove that regularization with fixed or increasing strength yields, respectively, near-optimal and optimal worst-case expected loss rates in realizable continual regression under random task orderings.