6 papers across 3 sessions
We propose a dual-view graph learning framework that detects label noise by capturing semantic discrepancies between node-level and structure-level predictions.
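A minimal sketch of the dual-view idea, under assumed mechanics (the function names and the KL-based disagreement score are illustrative, not the paper's actual model): compare a feature-only "node-level" prediction against a neighborhood-smoothed "structure-level" prediction, and flag nodes where the two views disagree as label-noise candidates.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Per-node KL divergence between the two views' class distributions.
    return np.sum(p * np.log((p + eps) / (q + eps)), axis=1)

def flag_noisy_nodes(logits_node, adj, threshold=0.5):
    """logits_node: (N, C) logits from a feature-only classifier.
    adj: (N, N) row-normalized adjacency; smoothing the logits over it
    stands in for the structure-level view."""
    p_node = softmax(logits_node)          # node-level view
    p_struct = softmax(adj @ logits_node)  # structure-level view
    disagreement = kl_divergence(p_node, p_struct)
    return disagreement > threshold        # candidate mislabeled nodes
```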
Label noise in concept bottleneck models (CBMs) cripples prediction performance, interpretability, and interventions through a few susceptible concepts. We combat this with sharpness-aware training and entropy-based concept correction, restoring the robustness of CBMs.
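A rough sketch of what entropy-based concept correction could look like (assumed mechanics, not the paper's exact procedure; the sharpness-aware training component is omitted): concepts on which the model is confident yet disagrees with the given annotation are treated as noisy and overwritten.

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def correct_concepts(concept_probs, concept_labels, entropy_cutoff=0.3):
    """concept_probs: (N, K) predicted concept probabilities.
    concept_labels: (N, K) possibly noisy binary concept annotations."""
    confident = binary_entropy(concept_probs) < entropy_cutoff
    predicted = (concept_probs > 0.5).astype(concept_labels.dtype)
    disagree = predicted != concept_labels
    corrected = concept_labels.copy()
    # Overwrite only where the model is confident *and* disagrees
    # with the annotation.
    corrected[confident & disagree] = predicted[confident & disagree]
    return corrected
```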
We develop a principled framework using approximate message passing (AMP) to analyze iterative self-retraining of ML models and derive the optimal way to combine the given labels with model predictions at each retraining round.
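For concreteness, here is the retraining loop being analyzed, in a minimal ridge-regression form (the AMP analysis itself is omitted, and the fixed mixing weight `alpha` is a placeholder for the optimal combination the paper derives):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def self_retrain(X, y_given, rounds=5, alpha=0.7, lam=1.0):
    targets = y_given.astype(float)
    for _ in range(rounds):
        w = ridge_fit(X, targets, lam)  # retrain on current targets
        preds = X @ w                   # current model predictions
        # Next round's targets: convex combination of given labels
        # and the model's own predictions.
        targets = alpha * y_given + (1 - alpha) * preds
    return w
```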
This paper presents methods for robust minimax boosting (RMBoost) that minimize worst-case error probabilities, are robust to general types of label noise, and provide finite-sample performance guarantees in the presence of label noise.
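A toy illustration of the minimax flavor only (this is not RMBoost's actual algorithm, which optimizes expected loss over an uncertainty set of distributions): score an ensemble by its error under the worst-case flip of up to a fraction `eta` of training labels, and greedily add decision stumps that minimize that worst-case error.

```python
import numpy as np

def worst_case_error(preds, y, eta):
    """An adversary may flip up to eta*N labels; each flip of a correctly
    classified example adds one error, so the worst case adds
    min(budget, #correct) to the observed error count."""
    n = len(y)
    budget = int(np.floor(eta * n))
    errors = int(np.sum(preds != y))
    return (errors + min(budget, n - errors)) / n

def fit_robust_stumps(X, y, eta=0.1, rounds=10):
    """y in {-1, +1}; returns (feature, threshold, sign) stumps
    combined by majority vote."""
    stumps, votes = [], np.zeros(len(y))
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (-1, 1):
                    pred = np.where(X[:, j] <= t, s, -s)
                    e = worst_case_error(np.sign(votes + pred), y, eta)
                    if best is None or e < best[0]:
                        best = (e, j, t, s, pred)
        _, j, t, s, pred = best
        stumps.append((j, t, s))
        votes += pred
    return stumps
```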
We characterize the problem of achieving group sufficiency under label bias, and introduce a regularizer that restores fairness without sacrificing accuracy.
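One plausible form such a regularizer could take (an assumed sketch, not necessarily the paper's): group sufficiency asks that E[Y | score, group] coincide across groups, so we penalize the gap between group-wise outcome rates within each score bin. A differentiable surrogate (e.g., kernel smoothing of the bins) would be needed to train with it.

```python
import numpy as np

def sufficiency_gap(scores, y, group, n_bins=10):
    """scores in [0, 1]; y binary outcomes; group categorical labels."""
    bins = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    gap = 0.0
    for b in range(n_bins):
        rates = []
        for g in np.unique(group):
            mask = (bins == b) & (group == g)
            if mask.any():
                rates.append(y[mask].mean())  # E[Y | bin, group]
        if len(rates) > 1:
            gap += (max(rates) - min(rates)) ** 2
    return gap

# Training objective: task_loss + lam * sufficiency_gap(scores, y, group)
```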