Poster Session 5 · Friday, December 5, 2025 11:00 AM → 2:00 PM
#3306

The Complexity of Finding Local Optima in Contrastive Learning

NeurIPS OpenReview

Abstract

Contrastive learning is a powerful technique for discovering meaningful data representations by optimizing objectives based on similarity comparisons, often given as a set of weighted triplets indicating that an "anchor" is more similar to a "positive" example than to a "negative" example.
The goal is to find representations (e.g., embeddings in ℝ^d or a tree metric) where anchors are placed closer to positive than to negative examples. While finding global optima of contrastive objectives is NP-hard, the complexity of finding local optima---representations that cannot be improved by local search algorithms such as gradient-based methods---remains open.
Our work settles the complexity of finding local optima in various contrastive learning problems by proving PLS-hardness in discrete settings (e.g., maximizing the number of satisfied triplets) and CLS-hardness in continuous settings (e.g., minimizing the Triplet Loss), where PLS (Polynomial Local Search) and CLS (Continuous Local Search) are well-studied complexity classes capturing local search dynamics in discrete and continuous optimization, respectively.
Our results imply that no polynomial-time algorithm (local search or otherwise) can find a local optimum for various contrastive learning problems, unless PLS ⊆ P (or CLS ⊆ P for continuous problems). Even in the unlikely scenario that PLS ⊆ P (or CLS ⊆ P), our reductions imply that there exist instances where local search algorithms need exponential time to reach a local optimum, even for d = 1 (embeddings on a line).
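To make the two objectives concrete, here is a minimal sketch (not code from the paper) of the discrete objective (count of satisfied triplets) and a standard hinge-style triplet loss, both on a one-dimensional embedding (the d = 1 case mentioned above). The function names and the margin value are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch: the two contrastive objectives on a 1-D embedding (d = 1).
# A triplet (a, p, n) is "satisfied" when the anchor a is embedded closer
# to the positive p than to the negative n.

def satisfied(x, triplet):
    a, p, n = triplet
    return abs(x[a] - x[p]) < abs(x[a] - x[n])

def num_satisfied(x, triplets):
    # Discrete objective: maximize the number of satisfied triplets.
    return sum(satisfied(x, t) for t in triplets)

def triplet_loss(x, triplets, margin=1.0):
    # Continuous objective: minimize the hinge-style triplet loss
    # (margin is an illustrative hyperparameter).
    total = 0.0
    for a, p, n in triplets:
        total += max(0.0, abs(x[a] - x[p]) - abs(x[a] - x[n]) + margin)
    return total

x = [0.0, 0.2, 1.5]        # embedding of three points on a line
triplets = [(0, 1, 2)]     # anchor 0 should be closer to 1 than to 2
print(num_satisfied(x, triplets))  # -> 1 (the triplet is satisfied)
print(triplet_loss(x, triplets))   # -> 0.0 (0.2 - 1.5 + 1.0 < 0)
```

A local optimum here is an embedding x that no small perturbation (continuous case) or allowed local move (discrete case) can improve; the paper's hardness results concern how long local search may take to reach such a point.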