Poster Session 5 · Friday, December 5, 2025 11:00 AM → 2:00 PM
#802

Robustifying Learning-Augmented Caching Efficiently without Compromising 1-Consistency

NeurIPS Project Page Slides OpenReview

Abstract

The online caching problem aims to minimize cache misses when serving a sequence of requests under a limited cache size. While naive learning-augmented caching algorithms achieve ideal 1-consistency, they lack robustness guarantees. Existing robustification methods either sacrifice 1-consistency or introduce excessive computational overhead.
In this paper, we introduce Guard, a lightweight robustification framework that enhances the robustness of a broad class of learning-augmented caching algorithms while preserving their 1-consistency. Guard achieves the current best-known trade-off between consistency and robustness, with only constant additional per-request overhead, thereby maintaining the original time complexity of the base algorithm.
Extensive experiments across multiple real-world datasets and prediction models validate the effectiveness of Guard in practice.
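To illustrate the general idea of robustifying a learning-augmented cache, the sketch below runs a (possibly unreliable) eviction predictor but falls back to plain LRU once the predictor's miss count exceeds a budget relative to a simulated LRU baseline. This is a minimal toy scheme for intuition only; the switching rule, the `predict_evict` interface, and the `budget_factor` threshold are our own illustrative assumptions, not the actual Guard mechanism.

```python
from collections import OrderedDict

def serve(requests, k, predict_evict, budget_factor=2.0):
    """Toy learning-augmented cache of size k (illustration, not Guard).

    Follows the predictor `predict_evict(cache, t, requests)` until its
    miss count exceeds `budget_factor` times the misses of a simulated
    LRU cache (plus k slack), then switches to LRU for the remainder.
    Returns the total number of cache misses.
    """
    cache = OrderedDict()    # real cache, in recency order
    shadow = OrderedDict()   # simulated LRU cache used as the baseline
    misses = lru_misses = 0
    trust = True             # are we still following the predictor?
    for t, x in enumerate(requests):
        # --- baseline: simulate plain LRU on the same request stream ---
        if x in shadow:
            shadow.move_to_end(x)
        else:
            lru_misses += 1
            if len(shadow) >= k:
                shadow.popitem(last=False)  # evict least recently used
            shadow[x] = True
        # --- real cache: predictor with a fallback switch ---
        if x in cache:
            cache.move_to_end(x)
            continue
        misses += 1
        if len(cache) >= k:
            if trust and misses > budget_factor * lru_misses + k:
                trust = False  # predictor too costly: fall back to LRU
            if trust:
                victim = predict_evict(cache, t, requests)
            else:
                victim = next(iter(cache))  # LRU victim
            cache.pop(victim)
        cache[x] = True
    return misses
```

For example, with cache size 2 and an adversarially bad "evict most-recently-used" predictor, `serve(list("abcabc"), 2, lambda cache, t, r: next(reversed(cache)))` still serves the sequence with a bounded number of misses, because the fallback caps how far the predictor can drift from the LRU baseline.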