Full Professor, Yale University
5 papers at NeurIPS 2025
We propose a new error metric for constructing coresets in $(k,z)$-clustering with noisy data, leading to smaller coresets, stronger theoretical guarantees, and improved empirical performance compared to classical methods.
We derive sharp spectral-norm bounds for low-rank inverse approximations of noisy matrices, improving classical estimates by a factor of up to $\sqrt{n}$ and offering spectrum-aware robustness guarantees validated on real and synthetic data.
We show how perceived post-selection bias distorts strategic effort in merit-based selection, leading to disparities. Our model quantifies how interventions that adjust selectivity and perceived valuation gaps can reduce this inequity.
We model stable matchings under group-dependent bias and correlated evaluations, characterize equilibrium thresholds, and show how evaluator alignment amplifies or mitigates fairness loss in decentralized systems.
We derive sharp spectral-norm bounds for noisy low-rank approximation, improving prior results by up to $\sqrt{n}$. Applied to DP-PCA, our method resolves an open problem and matches empirical error via a novel contour bootstrapping technique.
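As a minimal illustration of the setting in the low-rank approximation abstracts above (not the papers' method), the sketch below forms a truncated-SVD rank-$k$ approximation of a noisy observation of a rank-$k$ matrix and checks the classical spectral-norm error bound $\|\hat A - A\|_2 \le 2\|E\|_2$ that such sharper results improve upon; all dimensions and noise levels are arbitrary choices for the demo.

```python
import numpy as np

# Hypothetical setup: rank-k ground truth observed under additive noise.
rng = np.random.default_rng(0)
n, k = 200, 5
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
A = U @ V                                  # true rank-k matrix
E = 0.01 * rng.standard_normal((n, n))     # noise
A_noisy = A + E

# Best rank-k approximation of the noisy observation (truncated SVD).
u, s, vt = np.linalg.svd(A_noisy, full_matrices=False)
A_hat = (u[:, :k] * s[:k]) @ vt[:k]

# Classical guarantee: ||A_hat - A||_2 <= 2 ||E||_2, via the triangle
# inequality and Weyl's bound sigma_{k+1}(A_noisy) <= ||E||_2.
err = np.linalg.norm(A_hat - A, 2)
noise_norm = np.linalg.norm(E, 2)
print(err / noise_norm)
```

The ratio printed is at most 2 by the classical argument; spectrum-aware bounds of the kind claimed above tighten this dependence for favorable singular-value profiles.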