Poster Session 1 · Wednesday, December 3, 2025 · 11:00 AM – 2:00 PM
#3006
High-Dimensional Calibration from Swap Regret
Abstract
We study the online calibration of multi-dimensional forecasts over an arbitrary convex set $\mathcal{P} \subseteq \mathbb{R}^d$ relative to an arbitrary norm $\|\cdot\|$. We connect this with the problem of external regret minimization for online linear optimization, showing that if it is possible to guarantee worst-case regret $O(\sqrt{\rho T})$ after $T$ rounds when actions are drawn from $\mathcal{P}$ and losses are drawn from the dual unit norm ball, then it is also possible to obtain $\epsilon$-calibrated forecasts after $T = \exp(O(\rho/\epsilon^{2}))$ rounds. When $\mathcal{P}$ is the $d$-dimensional simplex and $\|\cdot\|$ is the $\ell_1$-norm, the existence of $O(\sqrt{T \log d})$-regret algorithms for learning with experts implies that it is possible to obtain $\epsilon$-calibrated forecasts after $T = d^{O(1/\epsilon^{2})}$ rounds, recovering a recent result of Peng (2025).
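To fix notation, here is one standard way such a guarantee is formalized (a generic textbook-style definition supplied for context, not quoted from the paper): writing $\mathcal{P}$ for the convex forecast set, $p_1, \dots, p_T \in \mathcal{P}$ for the forecasts, and $y_1, \dots, y_T$ for the observed outcomes, the calibration error relative to $\|\cdot\|$ sums, over each distinct forecast value $p$, the bias of the outcomes on the rounds where $p$ was predicted:

```latex
\mathrm{Cal}_{\|\cdot\|}(T)
  \;=\; \sum_{p} \Big\| \sum_{t \le T \,:\, p_t = p} \bigl( y_t - p \bigr) \Big\| ,
\qquad \text{and the forecasts are $\epsilon$-calibrated when } \mathrm{Cal}_{\|\cdot\|}(T) \le \epsilon T .
```

Taking $\mathcal{P}$ to be the simplex and $\|\cdot\|$ the $\ell_1$-norm gives the $\ell_1$-calibration error over the simplex discussed here.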
Interestingly, our algorithm obtains this guarantee without requiring access to any online linear optimization subroutine or knowledge of the optimal rate; in fact, our algorithm is identical for every setting of the convex set and the norm. Instead, we show that the optimal regularizer for the above OLO problem can be used to upper bound the above calibration error by a swap regret, which we then minimize by running the recent TreeSwap algorithm with Follow-The-Leader as a subroutine. The resulting algorithm is highly efficient and plays a distribution over simple averages of past observations in each round.
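The last sentence can be made concrete with a small schematic sketch (my own simplification for illustration, not the authors' code; the function name and the exact block alignment are assumptions): with horizon $T = M^D$, a depth-$D$ TreeSwap tree that runs Follow-The-Leader at every level outputs, at round $t$, the uniform distribution over $D$ forecasts, where the level-$h$ forecast is simply the average of the observations seen since the start of level $h$'s current block of length $M^{D-h+1}$.

```python
import numpy as np

def treeswap_ftl_forecast(history, t, M, D, d):
    """Schematic TreeSwap-with-FTL forecaster (simplified illustration).

    `history` holds the outcomes y_0, ..., y_{t-1} as rows of a (t, d) array
    of points in the d-dimensional simplex.  Level h (1 <= h <= D) restarts
    every M**(D - h + 1) rounds; its Follow-The-Leader forecast is the plain
    average of the outcomes observed since its last restart.  The calibrator
    plays the uniform distribution over the D level forecasts.
    """
    default = np.full(d, 1.0 / d)    # arbitrary forecast before any data
    forecasts = []
    for h in range(1, D + 1):
        block = M ** (D - h + 1)     # length of level h's enclosing block
        start = (t // block) * block # round at which level h last restarted
        if start < t:                # FTL: average of outcomes since restart
            forecasts.append(history[start:t].mean(axis=0))
        else:                        # no data yet in this block
            forecasts.append(default)
    return forecasts                 # played as a uniform mixture
```

Because each level forecast is just a running average of past observations, a round can be served in $O(dD)$ time by maintaining running sums; the sketch recomputes the averages from scratch only for clarity.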
Finally, we prove that any online calibration algorithm that guarantees $\epsilon$-$\ell_1$-calibration error over the $d$-dimensional simplex requires a number of rounds exponential in $1/\epsilon$ (assuming $d$ is sufficiently large). This strengthens the corresponding lower bound of Peng (2025), and shows that an exponential dependence on $1/\epsilon$ is necessary.