3 papers across 2 sessions
Product distributions on n dimensions can be learned with sublinear samples if a sufficiently close distribution is provided as advice.
We provide confidence intervals, prediction intervals, and hypothesis tests for variable importance in boosting with dropout and parallel training.
We introduce Backward Conformal Prediction, a new method that adapts coverage levels to enforce interpretable, data-dependent prediction set sizes, with provable guarantees and practical estimators.
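For context, standard (split) conformal prediction fixes a target miscoverage level α and returns prediction sets whose size varies with the data; Backward Conformal Prediction inverts this, constraining set size and adapting the coverage level instead. Below is a minimal sketch of the standard forward procedure only (not the paper's method), on synthetic regression data with absolute-residual scores; all names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2x + Gaussian noise (illustrative).
n = 1000
x = rng.uniform(0, 1, n)
y = 2 * x + rng.normal(0, 0.1, n)

# Split into a training half and a calibration half.
x_tr, y_tr = x[:500], y[:500]
x_cal, y_cal = x[500:], y[500:]

# Fit a simple least-squares line on the training half.
slope, intercept = np.polyfit(x_tr, y_tr, 1)
predict = lambda v: slope * v + intercept

# Conformity scores: absolute residuals on the calibration half.
scores = np.abs(y_cal - predict(x_cal))

# For miscoverage alpha, take the ceil((n+1)(1-alpha))/n empirical quantile,
# which gives the finite-sample marginal coverage guarantee.
alpha = 0.1
n_cal = len(scores)
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [yhat - q, yhat + q].
x_new = 0.5
yhat = predict(x_new)
interval = (yhat - q, yhat + q)
print(interval)
```

Here the interval width 2q is a random quantity determined by the calibration scores; the backward view would instead fix a rule for the set size and report the coverage it can certify.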