Poster Session 2 · Wednesday, December 3, 2025 4:30 PM → 7:30 PM
#5517
Fast exact recovery of noisy matrix from few entries: the infinity norm approach
Abstract
The matrix recovery (completion) problem, a central problem in data science, asks one to recover a matrix from a relatively small random set of its entries. While this task is impossible in general, it has been shown that one can recover the matrix exactly, in polynomial time and with high probability, under three basic and necessary assumptions:
- the rank of the matrix is very small compared to its dimensions (low rank),
- the matrix has delocalized singular vectors (incoherence), and
- the sample size is sufficiently large.
In applications, Candès and Plan (2009) noted that it is more realistic to assume that the observations are noisy. In that case, the approaches above provide only approximate recovery with a small root mean square error, which is difficult to convert into exact recovery. Recently, results by Abbe et al. (2017) and Bhardwaj et al. (2023) on approximation in the infinity norm showed that one can still recover the matrix exactly in the noisy case, provided its entries have bounded precision.
However, beyond the three basic assumptions, these results require either that the condition number of the matrix be small (2017) or that the gaps between consecutive singular values be large (2023). These additional assumptions pull in opposite directions: a small condition number forces the singular values to be close together, while large gaps force them apart. It is thus natural to conjecture that neither is necessary. In this paper, we show that this is indeed the case: we propose a simple algorithm for exact recovery of noisy data that relies solely on the three basic assumptions. The core step of the algorithm is a straightforward truncated singular value decomposition, which is highly efficient.
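To make the "truncated SVD plus bounded precision" idea concrete, here is a minimal sketch of such a pipeline. This is an illustration only, not the authors' exact algorithm: the function name, the assumption that the target rank and sampling probability are known, and the uniform precision grid are all illustrative choices.

```python
import numpy as np

def complete_and_round(Y, mask, rank, p, grid=1.0):
    """Illustrative truncated-SVD completion with rounding.

    Y    : observed matrix, arbitrary values at unobserved entries
    mask : boolean matrix marking the observed positions
    rank : target rank r (assumed known here)
    p    : sampling probability of each entry
    grid : precision of the true entries (bounded-precision assumption)
    """
    # Zero out unobserved entries and rescale so the expectation of the
    # resulting matrix matches the full underlying matrix.
    A = np.where(mask, Y, 0.0) / p
    # Truncated SVD: keep only the top-r singular triplets.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Under bounded precision, an infinity-norm error below grid/2 means
    # rounding each entry to the nearest grid point recovers it exactly.
    return np.round(approx / grid) * grid
```

In the fully observed noisy regime, for instance, the truncation step removes small perturbations of a low-rank integer matrix and the rounding step then makes the recovery exact; the partially observed case is where the assumptions (low rank, incoherence, large enough sample) do the work.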
To analyze the algorithm, we prove a new infinity norm version of the classical Davis-Kahan perturbation theorem, improving the earlier result of Bhardwaj et al. (2023). Our proof employs a combinatorial contour integration argument and is entirely distinct from all previous approaches.
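For context, the classical theorem being extended can be stated, in one standard sin-theta form for symmetric matrices (informally, with the perturbed eigenvalues assumed to stay separated from the rest of the spectrum), as:

```latex
% Classical Davis-Kahan sin-Theta theorem (symmetric case), stated for
% context; the paper's contribution is an infinity norm analogue.
% Let $A$ and $\widehat{A} = A + E$ be symmetric, let $U$ and $\widehat{U}$
% collect the eigenvectors of $A$ and $\widehat{A}$ associated with a
% group of eigenvalues separated from the rest of the spectrum by a gap
% $\delta > 0$. Then
\[
  \bigl\| \sin \Theta(U, \widehat{U}) \bigr\| \;\le\; \frac{\|E\|}{\delta},
\]
% where $\Theta(U, \widehat{U})$ is the diagonal matrix of principal
% angles between the two eigenspaces and $\|\cdot\|$ is the spectral norm.
```

The infinity norm version controls the entrywise (rather than spectral) deviation of the singular vectors, which is what converts an approximate completion into an exact one after rounding.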