Poster Session 3 · Thursday, December 4, 2025 11:00 AM → 2:00 PM
#3202 Spotlight

Tradeoffs between Mistakes and ERM Oracle Calls in Online and Transductive Online Learning

NeurIPS OpenReview

Abstract

We study online and transductive online learning in settings where the learner can interact with the concept class only via Empirical Risk Minimization (ERM) or weak consistency oracles on arbitrary subsets of the instance domain. This contrasts with standard online models, where the learner has full knowledge of the concept class. The ERM oracle returns a hypothesis that minimizes the loss on a given subset, while the weak consistency oracle returns only a binary signal indicating whether the subset is realizable by a concept in the class. The learner's performance is measured by the number of mistakes and oracle calls.
In the standard online setting with ERM access, we establish tight lower bounds on the number of mistakes in the realizable case and on the regret in the agnostic case, stated in terms of the number of timesteps and the Littlestone dimension of the class. We further show how existing results for online learning with ERM access translate to the setting with a weak consistency oracle, at the cost of an increase in the number of oracle calls.
We then consider the transductive online model, where the instance sequence is known in advance but labels are revealed sequentially. For general Littlestone classes, we show that the optimal mistake bound in the realizable case, and the optimal regret in the agnostic case, can be achieved using a number of weak consistency oracle calls that depends on the VC dimension of the class. On the negative side, we establish lower bounds on the number of weak consistency queries required for transductive online learnability, and show that ERM queries are necessary to avoid an exponential dependence on the Littlestone dimension.
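The transductive realizable protocol can be sketched with a naive baseline that already fits the weak-consistency interface: at each round the learner predicts 1 exactly when that label keeps the observed history realizable, then the true label is revealed. This consistent-prediction strategy is our own illustration (it does not yet exploit advance knowledge of the instance sequence, which the paper's algorithms do); `is_realizable` plays the role of the weak consistency oracle.

```python
# Illustrative sketch of the (realizable) transductive online protocol with a
# consistent-prediction learner that makes one weak consistency query per round.
from typing import Callable, Hashable, List, Sequence, Tuple

Concept = Callable[[Hashable], int]

def is_realizable(concepts: List[Concept],
                  data: List[Tuple[Hashable, int]]) -> bool:
    """Weak consistency oracle: does some concept fit all of `data`?"""
    return any(all(h(x) == y for x, y in data) for h in concepts)

def transductive_realizable_play(concepts: List[Concept],
                                 instances: Sequence[Hashable],
                                 true_labels: Sequence[int]) -> int:
    """Run the protocol on a known instance sequence whose labels are revealed
    one at a time; return the learner's total number of mistakes."""
    history: List[Tuple[Hashable, int]] = []
    mistakes = 0
    for x, y in zip(instances, true_labels):
        # Predict 1 iff labeling x with 1 is consistent with the history.
        guess = 1 if is_realizable(concepts, history + [(x, 1)]) else 0
        if guess != y:
            mistakes += 1
        history.append((x, y))  # the true label is revealed after predicting
    return mistakes
```

Running this with threshold classes and labels generated by a concept in the class always terminates with a finite mistake count, since the history stays realizable throughout.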