PhD student, Zhejiang University
1 paper at NeurIPS 2025
We propose Guard, a framework that enhances the robustness of a broad class of 1-consistent learning-augmented caching algorithms while preserving their 1-consistency and incurring only low additional computational overhead.