Poster Session 1 · Wednesday, December 3, 2025 11:00 AM → 2:00 PM
#2901 Spotlight

The Power of Iterative Filtering for Supervised Learning with (Heavy) Contamination

NeurIPS OpenReview

Abstract

Inspired by recent work on learning with distribution shift, we give a general outlier removal algorithm called iterative polynomial filtering and show a number of striking applications for supervised learning with contamination:
  1. We show that any function class that can be approximated by low-degree polynomials with respect to a hypercontractive distribution can be efficiently learned under bounded contamination (also known as nasty noise). This is a surprising resolution to a longstanding gap between the complexity of agnostic learning and learning with contamination, as it was widely believed that low-degree approximators only implied tolerance to label noise.
  2. For any function class that admits the (stronger) notion of sandwiching approximators, we obtain near-optimal learning guarantees even with respect to heavy additive contamination, where far more than half of the training set may be added adversarially. Prior related work held only for regression and in a list-decodable setting.
  3. We obtain the first efficient algorithms for tolerant testable learning of functions of halfspaces with respect to any fixed log-concave distribution. Even the non-tolerant case for a single halfspace in this setting had remained open.
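For context on item 1, the classical "low-degree algorithm" baseline fits the best low-degree polynomial to the labels in least squares and thresholds it at zero. The NumPy sketch below illustrates only that baseline, not the paper's filtered variant; the function names and the choice of degree are ours for illustration:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of total degree <= `degree` in the columns of X."""
    n, d = X.shape
    feats = [np.ones(n)]  # constant monomial
    for k in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), k):
            feats.append(np.prod(X[:, combo], axis=1))
    return np.column_stack(feats)

def low_degree_learn(X, y, degree=2):
    """Illustrative L2 polynomial regression (the "low-degree algorithm"):
    least-squares-fit a degree-`degree` polynomial to the +/-1 labels,
    then threshold at 0 to predict a Boolean label."""
    Phi = poly_features(X, degree)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Z: np.sign(poly_features(Z, degree) @ coef)
```

On a target that a low-degree polynomial approximates well (e.g. a halfspace under a Gaussian marginal), the thresholded fit recovers most labels even though the regression itself is purely linear-algebraic.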
These results significantly advance our understanding of efficient supervised learning under contamination, a setting that has been much less studied than its unsupervised counterpart.
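To convey the flavor of the filtering primitive named in the abstract, here is a hedged degree-2 sketch: repeatedly test the empirical second moment along the worst direction against the bound a hypercontractive clean distribution would satisfy, and discard the highest-scoring points when the bound is violated. Everything here (the function name, `moment_bound`, the removal schedule) is illustrative and is not the paper's algorithm:

```python
import numpy as np

def iterative_filter(X, moment_bound, max_rounds=50, remove_frac=0.05):
    """Illustrative iterative filtering with a degree-2 polynomial score.

    Each round: take v, the top eigenvector of the empirical second-moment
    matrix, and score each point by p(x) = (v . x)^2. If the mean score
    exceeds `moment_bound` (what a hypercontractive clean distribution
    would allow), remove the top `remove_frac` fraction and repeat.
    """
    keep = np.ones(len(X), dtype=bool)
    for _ in range(max_rounds):
        if not keep.any():
            break
        S = X[keep]
        M = S.T @ S / len(S)          # empirical second-moment matrix
        _, V = np.linalg.eigh(M)
        v = V[:, -1]                  # worst (top-eigenvalue) direction
        scores = (S @ v) ** 2
        if scores.mean() <= moment_bound:
            break  # moment consistent with a clean sample; stop filtering
        cutoff = np.quantile(scores, 1 - remove_frac)
        idx = np.flatnonzero(keep)
        keep[idx[scores >= cutoff]] = False
    return X[keep]
```

On Gaussian data with a planted cluster of adversarial points, the outliers dominate the top direction's second moment in the first round and are filtered out, after which the moment test passes and the loop stops.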