Poster Session 6 · Friday, December 5, 2025, 4:30 PM – 7:30 PM · Poster #3316

Improved Regret Bounds for Gaussian Process Upper Confidence Bound in Bayesian Optimization

NeurIPS · OpenReview · Oral 6E: Theory 4

Abstract

This paper addresses the Bayesian optimization problem (also referred to as the Bayesian setting of the Gaussian process bandit), in which the learner seeks to minimize the cumulative regret for an objective function drawn from a known Gaussian process (GP) prior.
Under a Matérn kernel with a given degree of smoothness, we show that the Gaussian process upper confidence bound (GP-UCB) algorithm attains an improved cumulative regret bound that holds with high probability. Furthermore, our analysis yields a correspondingly improved regret bound under the squared exponential kernel.
These results narrow the gap between the existing regret upper bound for GP-UCB and the current best upper bound, provided by Scarlett (2018).
The key idea in our proof is to capture the concentration behavior of the input sequence selected by GP-UCB, which allows us to bound the GP's information gain in a refined manner.
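For readers unfamiliar with the algorithm analyzed here, the following is a minimal sketch of GP-UCB on a discretized one-dimensional domain. It is not the paper's implementation; the squared exponential kernel, lengthscale, noise level, and confidence parameter `beta` are illustrative assumptions. At each round the algorithm queries the point maximizing the upper confidence bound (posterior mean plus a scaled posterior standard deviation).

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    # Squared exponential kernel k(x, x') = exp(-(x - x')^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-4, lengthscale=0.2):
    # Standard GP posterior mean and variance at the candidate grid points.
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_grid, x_train, lengthscale)
    Kss = rbf_kernel(x_grid, x_grid, lengthscale)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    var = np.clip(np.diag(cov), 0.0, None)  # guard against tiny negatives
    return mean, var

def gp_ucb(f, x_grid, n_iter=30, beta=2.0, noise=1e-4, seed=None):
    # GP-UCB: repeatedly query argmax of mean + sqrt(beta) * std,
    # observing the function value corrupted by Gaussian noise.
    rng = np.random.default_rng(seed)
    x_train = np.array([x_grid[len(x_grid) // 2]])  # start at grid midpoint
    y_train = np.array([f(x_train[0]) + np.sqrt(noise) * rng.standard_normal()])
    for _ in range(n_iter):
        mean, var = gp_posterior(x_train, y_train, x_grid, noise)
        x_next = x_grid[np.argmax(mean + np.sqrt(beta) * np.sqrt(var))]
        y_next = f(x_next) + np.sqrt(noise) * rng.standard_normal()
        x_train = np.append(x_train, x_next)
        y_train = np.append(y_train, y_next)
    return x_train, y_train
```

On a toy objective such as `f(x) = -(x - 0.7)**2` over `[0, 1]`, the queried points concentrate near the maximizer after a modest number of rounds; this concentration of the input sequence is exactly the behavior the paper's analysis exploits to bound the information gain.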