We propose Guard, a framework that enhances the robustness of a broad class of 1-consistent learning-augmented caching algorithms while preserving their 1-consistency and incurring only low additional computational overhead.
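The mechanism behind Guard is not described in this excerpt, so the sketch below is not the paper's algorithm. It is only a generic illustration of the vocabulary: a "follow-the-prediction" eviction policy is 1-consistent (it matches the offline optimum when predictions are perfect), and a simple switching rule — fall back to LRU once the predictive policy's shadow miss count exceeds a factor `c` of shadow LRU's — adds robustness against bad predictions. The function names, the switching rule, and the parameter `c` are all illustrative assumptions.

```python
def perfect_pred(requests):
    """Oracle predictor: the true time of each page's next request (inf if none)."""
    def pred(t, p):
        for s in range(t + 1, len(requests)):
            if requests[s] == p:
                return s
        return float("inf")
    return pred


def guarded_misses(requests, k, pred, c=2.0):
    """Illustrative switching-based robustification (NOT the paper's Guard):
    follow the predictive eviction policy while its shadow miss count stays
    within a factor c of shadow LRU's; otherwise defer evictions to LRU.
    Returns the miss count of the combined policy on a size-k cache."""
    real, sp, sl = [], [], []      # real cache, predictive shadow, LRU shadow
    misses = mp = ml = 0           # real / predictive-shadow / LRU-shadow misses
    last = {}                      # last-use time of each page (for LRU)
    lru = lambda cache: min(cache, key=lambda p: last[p])
    for t, page in enumerate(requests):
        # evict the cached page whose predicted next request is furthest away
        ftp = lambda cache: max(cache, key=lambda p: pred(t, p))
        if page not in sp:         # predictive shadow cache
            mp += 1
            if len(sp) == k:
                sp.remove(ftp(sp))
            sp.append(page)
        if page not in sl:         # LRU shadow cache
            ml += 1
            if len(sl) == k:
                sl.remove(lru(sl))
            sl.append(page)
        if page not in real:       # real cache: predictor until it falls behind
            misses += 1
            if len(real) == k:
                choose = ftp if mp <= c * max(ml, 1) else lru
                real.remove(choose(real))
            real.append(page)
        last[page] = t
    return misses
```

On the short sequence `[1, 2, 3, 1, 2, 3]` with `k=2`, a perfect predictor makes the combined policy match the offline optimum (4 misses), while setting `c=0.0` forces the pure-LRU fallback (6 misses) — a toy view of the consistency/robustness trade-off the abstract refers to.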