4 papers across 3 sessions
A method for scaling second-order training of physics-informed neural networks (PINNs), based on domain decomposition and adversarial adaptive sampling.
This paper proposes an approach to split conformal prediction with unsupervised calibration samples. Theoretical and experimental results show that the presented methods can achieve performance comparable to that obtained with supervised calibration.
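For context, standard split conformal prediction uses a held-out *supervised* calibration set; the sketch below illustrates that baseline on a toy regression problem (the data, model, and 90% coverage level are hypothetical, not from the paper — the paper's contribution is relaxing the need for calibration labels):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + Gaussian noise (illustrative setup only).
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1, 200)

# Split: first half trains the model, second half calibrates.
x_tr, y_tr = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit a simple least-squares line on the training half.
slope, intercept = np.polyfit(x_tr, y_tr, 1)

def predict(t):
    return slope * t + intercept

# Nonconformity scores on the calibration half: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile giving (1 - alpha) marginal coverage.
alpha = 0.1
n = len(scores)
level = np.ceil((n + 1) * (1 - alpha)) / n
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new input.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

The coverage guarantee is distribution-free: it relies only on exchangeability of the calibration scores with the test score, not on the model being well specified.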
This paper presents methods for robust minimax boosting (RMBoost) that minimize worst-case error probabilities, are robust to general types of label noise, and provide finite-sample performance guarantees under label noise.
We theoretically characterize the optimality of the median-of-means (MoM) estimator for different classes of distributions under adversarial contamination.
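As background, the median-of-means estimator splits the sample into blocks, averages each block, and reports the median of the block means; a minimal sketch (block count and contamination fraction chosen for illustration, not taken from the paper):

```python
import numpy as np

def median_of_means(x, k):
    """Split x into k blocks, average each block, return the median
    of the block means. Robust to heavy tails and to corruption of
    fewer than ~k/2 blocks."""
    x = np.asarray(x, dtype=float)
    blocks = np.array_split(x, k)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
sample = rng.normal(1.0, 1.0, 1000)   # true mean is 1.0
sample[:10] = 1e6                      # adversarially corrupted points

# The empirical mean is destroyed by the outliers,
# while MoM stays close to the true mean.
```

Here the 10 corrupted points can poison at most a few of the 20 blocks, so the median over block means is unaffected; this is the mechanism whose optimality the paper analyzes.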